awongh 13 hours ago | next |

One other connection between these two is the fact that for AI-assisted coding, you can only get the AI to write code using languages and libraries that have broad adoption on the internet. Otherwise the AI doesn’t know the standard libraries and conventions.

I worry overall that this could signal the complete end of new programming languages, except in a few special cases. (Which I think was already a worrying trend before AI; see Bret Victor’s “The Future of Programming” talk.)

n_ary 13 hours ago | root | parent | next |

I welcome the death of the framework-of-the-month and hip-language-of-the-month plague. The fragmentation is just irritating at this point: it demands too much effort, wastes time on adapting, and promotes yearly rewriting of everything because management wants to cut costs and hire more cheap, desperate devs.

awongh 12 hours ago | root | parent | next |

I was more thinking about the blocked progress of true innovation in the way programming languages are made and used. This is the talk I referenced above: https://www.youtube.com/watch?v=8pTEmbeENF4

I agree with his main thesis, which is that we can do better than some of these literally 60-year-old programming and programming-tooling paradigms.

I'm hoping that the paradigm improvement isn't to just paste over the entire endeavour with some AI agents that abstract away coding as we currently think about it.

zelphirkalt 12 hours ago | root | parent | prev | next |

So are we supposed to stick with the clumsy inelegant languages that have become mainstream forever, or can we hark back to more elegant designs?

I mean, by that logic, languages like Rust and Golang should never have taken off. Yet we see improvement due to their invention and adoption. Limiting programming language design and theory will rob us of a lot of potential improvements we could achieve at some point.

KronisLV 12 hours ago | root | parent | next |

> So are we supposed to stick with the clumsy inelegant languages that have become mainstream forever, ...

Possibly? I mean, for certain kinds of systems (most of the CRUD stuff out there) having some boring technologies and frameworks that are fairly stable and get security updates is probably preferable, even if they're not the most modern or stylish to use. And since the Lindy effect seems to have some truth to it, those might also help with picking something that will probably still be around in 5-10 years.

Think Java with Spring Boot, C# with ASP.NET, Ruby with Rails, Python with Django, PHP with Laravel, Node with Express.js and so on.

> I mean, by that logic, languages like Rust and Golang should never have taken off.

In my country, they mostly haven't. Most of the job postings are for the languages and frameworks above, there's actually very little in the way of modern tech stacks, so it probably depends on where you're located - tech hubs (or idk, research in academia) will allow for shinier technologies vs something more boring and conventional!

n_ary 11 hours ago | root | parent | prev | next |

I see that you are very protective of Golang. Golang was designed well from the ground up, backed by one of the massive global corporations that has driven tech trends for decades.

Anything of decent value and good design will be adopted and continue to be built. But anything without decent marketing power will die regardless, and only get adopted because it felt cool to that random engineer no. 1 who wanted to pad their CV quickly before jumping onto the next big money machine.

Show me the mass adoption of Nim/F#/Scala (decently mature)/Crystal/Odin etc. and I’ll understand your points.

Go is in a different league compared to the ones I mentioned.

zelphirkalt 10 hours ago | root | parent |

Actually I am not a big fan of Golang, and have even criticized its design decisions in the past, but I know it as one example of "newer" languages that have some adoption.

datavirtue 5 hours ago | root | parent | prev | next |

You can play around: in dev, at home and in university. At work, boring, drab, proven tech until a pressing need arises where we must push the envelope.

freilanzer 11 hours ago | root | parent | prev | next |

Go is a terrible example for a modern, elegant, and creative language design.

markusw 10 hours ago | root | parent |

I think it's a great example of a modern, elegant, boring & productive language.

But Go wasn't really the point of the article, so maybe we shouldn't be here doing language flamewars? :D

kaba0 12 hours ago | root | parent | prev | next |

I mean, what does Go actually bring to the table? It's a modern language only in release date; it is absolutely not worthy of standing next to Rust in terms of novelty.

Also, old languages can be improved still. Java has some gotchas that will probably forever be fossilized into the language, but it still managed to get algebraic data types, proper pattern matching, virtual threads, etc.

Also, add to it Brooks's famous "No Silver Bullet" paper and we can see that a "new, modern" language will fail to deliver even single-digit productivity improvements, let alone order-of-magnitude differences. Me adding an already working, battle-tested dependency is the only real productivity booster.

(Nonetheless, I'm absolutely not against change, and I absolutely love language design as a topic. I just think that any language that wants to live in its own hermetic world has to give a really good reason for that choice)

Animats 10 hours ago | root | parent |

> I mean, what does go actually bring to the table?

Good libraries for everything commonly done server side in web dev. Google people write the libraries, and they're used within Google, so even the obscure cases are executed frequently. If your problem is a variation on a common theme, not only can Go probably do it, common LLMs will know how to make Go do it.

Go gives up a little performance in exchange for simplifying some problems that give many programmers trouble. It's garbage collected, which simplifies ownership, and its goroutines can safely block, so you don't have the async/sync distinction. Since LLMs don't really understand either of those issues, this makes LLM-driven coding work better.
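A minimal sketch of that point: in Go, a blocking call needs no special syntax or async/await coloring; you just run it in a goroutine and the runtime multiplexes the blocked goroutines onto OS threads. (The `fetch` function below is a hypothetical stand-in for a blocking network request.)

```go
package main

import (
	"fmt"
	"time"
)

// fetch simulates a blocking call (e.g. a network request). There is no
// async/sync distinction: the goroutine simply blocks, and the Go runtime
// schedules other goroutines in the meantime.
func fetch(id int, results chan<- int) {
	time.Sleep(10 * time.Millisecond) // blocking inside a goroutine is fine
	results <- id
}

// fanOut starts n blocking fetches concurrently and collects all results.
func fanOut(n int) []int {
	results := make(chan int, n)
	for i := 1; i <= n; i++ {
		go fetch(i, results)
	}
	got := make([]int, 0, n)
	for i := 0; i < n; i++ {
		got = append(got, <-results)
	}
	return got
}

func main() {
	fmt.Println(len(fanOut(3)), "results")
}
```

The three fetches overlap, so the whole fan-out takes roughly one sleep's worth of time rather than three, with no callbacks or futures in sight.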

Remember, LLM based coding is about as good as copy-pasting from Stack Overflow. It only works well in areas where there's something it can copy and mod. Usually, if you search for names in LLM-generated code, you can find where it located a prototype to copy from.

kaba0 10 hours ago | root | parent |

Java is more widely used even within Google itself, and every single positive mentioned applies to it several times over. Java has significantly more battle-hardened libraries, and in the territory of web backends it is by far the biggest player. Any service worth its weight will have a Java connection library made first, and that will often be of the highest quality. Many libraries are depended on by multiple software giants, not just Google.

Java is also GCd, with significantly better implementations: they have far better throughput, and there is also a low-latency one where pause time is actually decoupled from heap size, unlike Go, which simply blocks threads under load.

Also, Java has virtual threads, which are the same as goroutines, so not even that is a relevant difference.

And since it is a bigger language, more of it will be found in LLM data sets, ergo LLMs will be more useful in the case of Java.

awongh 11 hours ago | root | parent | prev |

One symptom that I see that's super annoying (I'm thinking JS, but many languages have this problem) is that tooling seems to be getting worse. Incompatibility between CJS and ESM, for example. Each new issue like this that comes up seems to get frozen in place for longer and longer.

Never mind getting a newer language with new useful concepts; can we just have a tooling ecosystem that isn't broken?

hombre_fatal 4 hours ago | root | parent | next |

The JS ecosystem is getting better though. For example, Node now natively supports require()ing ESM modules from CJS. So this doesn't fit the idea that AI is making things stagnate.

markusw 10 hours ago | root | parent | prev |

The Javascript tooling ecosystem seems especially broken. XKCD 927 [0] applies there, to build systems, frameworks, and more.

I would suggest trying a different ecosystem entirely. I chose Go's, but there are many alternatives not nearly as broken, and in some cases working quite nicely.

[0]: https://xkcd.com/927/

Aeolun 11 hours ago | root | parent | prev |

To some extent, but the perfect language has not been invented yet, so we still need new ones.

I’ll admit Go and Typescript come close though.

mewpmewp2 13 hours ago | root | parent | prev | next |

I'm wondering if there's going to be an initiative for a new programming language specifically intended for AI usage. The AI has certain statistical patterns built in, such that it would naturally guess what the method names or API package names should be; we would then invent a programming language based on what is most natural for the AI for any given use case. So even when the AI "hallucinates", it would likely guess right, since all the API and syntax were picked by the AI in the first place.

andai 12 hours ago | root | parent | next |

I experienced this in my automated GPT-4 programmer project. This was 1.5 years ago, back when prompt size was very limited. I couldn't put a large program's entire source code in the prompt, so I auto-converted it to stubs.

I noticed that letting GPT-4 write more of the code had the advantage that it understood the code better, so less context was necessary.

Now when I design systems for GPT-4 itself to use, I just let GPT-4 design them: ask it to use an imaginary system, and whatever it comes up with (consistently) should be the API.

jobigoud 12 hours ago | root | parent | prev | next |

A related concept is LLM-driven API design. First write a comment describing what the API function should do, then let the LLM write the function call; it will pick the most expected naming and argument order. Then you write the API using that function prototype.

And maybe, if the LLM sometimes tries to use different naming, you could add an overload of the same API endpoint, as long as it doesn't clash with an existing one. So in the end, for anyone using your library with an LLM, it would appear to just work.
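A sketch of the idea above in Go, which has no overloading, so the "alternate spelling" becomes a thin alias function. All names here (`Resize`, `Scale`, `Image`) are hypothetical, standing in for whatever prototype the LLM consistently guesses:

```go
package main

import "fmt"

// Image is a toy type for the sketch.
type Image struct{ W, H int }

// Resize is the prototype we imagine the LLM consistently writes when asked
// to "resize an image to a width", so we implement exactly that signature.
// It keeps the aspect ratio, using integer math for simplicity.
func Resize(img Image, width int) Image {
	if img.W == 0 {
		return Image{W: width}
	}
	return Image{W: width, H: img.H * width / img.W}
}

// Scale is a thin alias: if the LLM sometimes guesses this name instead,
// that guess keeps working too, per the comment above.
func Scale(img Image, width int) Image { return Resize(img, width) }

func main() {
	img := Image{W: 200, H: 100}
	fmt.Println(Resize(img, 100)) // {100 50}
	fmt.Println(Scale(img, 100))  // {100 50}, same result via the alias
}
```

The cost is a slightly wider API surface; the benefit is that code generated against either spelling "just works".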

itsdavesanders 10 hours ago | root | parent | prev | next |

Once AI is fully writing software autonomously and humans don’t need to see that code anymore, then why not just have AI write iteratively more efficient assembly? (Meaning make the software work then turn it over to an optimization agent that can drive out all inefficiencies until perfection.)

Those efficiencies will eventually be driven into all parts of the system. AI will be able to run the most efficient code on the most efficiently designed processors, that can be designed with the knowledge that only AI is going to use them. And then we can remove ALL the weird abstractions and accommodations we had to make for human brains.

I’m not saying it will happen tomorrow but language choice is just a temporary concern.

croes 5 hours ago | root | parent |

Since when is AI code efficient?

It’s based on human code, most of that isn’t efficient.

Cthulhu_ 13 hours ago | root | parent | prev | next |

That sounds like the inverted world, though: AI is imperfect and makes mistakes, so let's try to work with and around those mistakes instead of making the AI better.

mewpmewp2 11 hours ago | root | parent |

I think it makes sense to attack this problem from multiple angles. One is making the AI better; the second is making the world easier for the AI. We learn from both approaches. You can try to make a really good AI that handles the current world well, but at the same time an AI that is much faster and more efficient will still be very valuable if there's a world made easier for it, because there will always be some trade-off between accuracy and speed.

It could even be that the larger, slower AI model defines and implements the APIs, and the smaller, faster one actually makes use of them.

Or some variation of those combined, where a single AI model has the capability to select between the two: for complicated APIs, it would crawl the docs for accuracy and make "AI-intuitive" wrappers around all of the APIs and their edge cases, making things easier for the fast model.

awongh 12 hours ago | root | parent | prev | next |

Doesn't the AI do that pretty well already? Not sure what other language features you would need to improve on the way the AI writes code.

mewpmewp2 11 hours ago | root | parent |

Some LLMs do it well, some do it worse, and are better at certain things, worse at others. I think there's a balance to be found, e.g. you could have a very small and fast model perform better if the coding paradigm was specifically dedicated to the AI.

E.g. right now I think AI is excellent with JavaScript/TypeScript, but worse with some other languages like e.g. Rust or Scala.

Arch-TK 11 hours ago | root | parent | prev | next |

> I worry overall that this could signal the complete end of new programming languages except in a few special cases.

I don't think anything has changed in this regard. Yes we have a small number of success stories like elixir or rust, and then we have a few corporate backed success stories like go, swift and kotlin. But that has always been a tiny fraction of the "promising new programming language" space.

Companies have always been conservative about choice of programming language and I don't think that will get any worse now.

awongh 9 hours ago | root | parent |

As I referenced elsewhere in this thread, I'm specifically thinking about the fact that if tools for AI coding become a standard part of the dev workflow, new languages might never get off the ground, because the AI hasn't seen enough of a new language on the internet to be helpful in AI-assisted coding: a new kind of negative feedback loop.

Arch-TK 5 hours ago | root | parent |

Yes, and my point is that the kinds of people who over-rely on AI and can't seem to get anything done without it are the kinds of people who previously over-relied on Stack Overflow and copy-paste.

The kinds of people who would have in the past tried a new and unproven language are still there and they're unlikely to stop trying new and unproven languages just because AI doesn't know them.

I for sure am not going to stop looking at weird new and/or obscure languages just because AI struggles with them.

jffhn 5 hours ago | root | parent | prev | next |

>I worry overall that this could signal the complete end of new programming languages

Programming often involves creating new languages or APIs (like domain-specific languages) on top of lower-level ones (like the base programming language you use).

Future AIs should be able to learn new languages, else they would remain pretty limited.

awongh 5 hours ago | root | parent |

Right now an AI understands a lot about the opinions programmers have on how to write good Python, and enough people wrote about it on the internet that you can ask ChatGPT questions about best practices and it can come up with a coherent answer. My worry is that a new language would not come with this knowledge.

If Apple were to come out with Swift today, even if they were to fine-tune an LLM or just put the docs / code samples / repos / PRs in context, I don't think the LLM would understand as much about the language as it does about other widely talked-about languages.

JanisErdmanis 4 hours ago | root | parent | prev | next |

This completely contradicts my experience: LLMs are indispensable in my workflow for generating Julia code, a young language with little adoption outside its HPC niche. Furthermore, I use Julia for cryptographic software development, which is a niche in itself, and hence there is no broad training set from which LLMs could autocomplete.

One of my most impressive uses is performance optimisation. Claude implemented efficient byte-to-bigint conversion after I pasted in Julia's bignum standard library file, `gmp.jl`, to show how to do so [1]. Then a Jacobi symbol calculation function was completely optimised with Claude, with me interactively providing feedback on the function's correctness and profiler information on which lines most of the time is spent [2].

Additionally, the OpenSSL wrapper for elliptic curve operations would have been beyond my expertise if I hadn't asked Claude about interfacing with the OpenSSL_jll library for curve addition and multiplication. This led to the generation of 200 lines of Julia code, which I could execute in the REPL to explore the entire problem space [3]. These pieces were essential to get competitive performance for the ShuffleProofs package [4], whose benchmark figures I also asked Claude to make with `Luxor`, a lovely low-level library in itself.

I would be unable to accomplish those improvements alone, and would be highly reluctant to learn what a bignum limb is. It would have been nearly impossible to implement those optimisations in such a short time while retaining the big picture of the ecosystem I maintain. I also used to have good experiences with ChatGPT at the beginning, but it silently became more dogmatic and lazy over time, and somehow lost its ability to apply analogies between the `C`, `Java` or `Python` ecosystems.

To use LLMs effectively, one needs to interact with them appropriately and be aware of the biases in one's own questions. Knowing when to make a gentle nudge and when to contradict is a vital skill for getting good answers.

[1]: https://github.com/PeaceFounder/CryptoGroups.jl/blob/0f6b4e2...

[2]: https://github.com/PeaceFounder/CryptoGroups.jl/blob/0f6b4e2...

[3]: https://github.com/PeaceFounder/OpenSSLGroups.jl

[4]: https://github.com/PeaceFounder/ShuffleProofs.jl

zcw100 9 hours ago | root | parent | prev | next |

What about a possible future with a programming language designed from the beginning with AI in mind? It's fun to think about what that might even look like.

ozgrakkurt 12 hours ago | root | parent | prev |

Yeah, sure: LLMs that can't write quicksort half the time will end the creation of new languages. They might also end home cooking, because they'll cook all the food, right?

awongh 11 hours ago | root | parent | next |

To be more specific: new languages will be created, but if people get used to using AI to help them code, a new language might not have enough internet discussion around it to enable the AI to write good-quality code in that language. It's a self-reinforcing negative feedback loop, one that already exists, but that could accelerate as AI-assisted coding becomes more and more commonplace.

I do think that an LLM today could understand the basics of a language it's never seen before (as long as it's at least somewhat related to a current language). But the AI's code writing abilities will be limited. For example I can ask ChatGPT to tell me if a code example is Pythonic.

A slightly different extension of your example would be to ask if recipe writing will be ended forever. The AI basically understands the rules / patterns of how food ingredients are combined together with which techniques. Will anyone bother to write down new recipes anymore in the future if the AI can do just as well? If someone publishes a new recipe it needs to spread prolifically enough on the internet before the AI understands it as a fundamental concept. What do we lose if this is the case?

jobigoud 12 hours ago | root | parent | prev |

It's not the LLMs doing it; it's the people using LLMs preferring languages that work well with LLMs over new languages. To your point: it can write quicksort in Python easily, but maybe not in some esoteric language.

shreddit 13 hours ago | prev | next |

So choose boring technology (as in old and tested) and... new technology? He likes to use SQLite, but also S3? So all technologies. Thanks, I guess?

biorach 13 hours ago | root | parent | next |

S3 has been around since 2006 and is the default choice for object storage for many people. It's as boring as it gets.

simonw 12 hours ago | root | parent |

Yeah SQLite and S3 aren't that far apart in age - SQLite first came out in 2000, S3 in 2006.

First release of Go was 2009.

Cthulhu_ 13 hours ago | root | parent | prev | next |

My personal takeaway, in a broader context, is that Go code is pretty boring, predictable and repetitive, as opposed to clever and customised, so in theory LLMs can more easily predict and generate said code. It's a tradeoff: Go can be more verbose because it's not as expressive, but with the right tooling, that labor/toil is reduced.

pcwelder 11 hours ago | root | parent | next |

The simplicity of Golang definitely helps.

Between Golang and Rust, ChatGPT has a much easier time with Golang.

The standard toolchain (as opposed to Python packaging) also makes it easier for an LLM to debug and run commands. So it's easier to develop a Golang-based app than a Python one using Poetry or uv.

pyrale 12 hours ago | root | parent | prev | next |

Use a language with lots of boilerplate code so the LLM can generate more code for you?

loa_in_ 12 hours ago | root | parent |

Being explicit without concerns about being terse is something that wasn't practical before assisted coding, and it would be a realisation of some of the most important tenets of computer science.

pyrale 10 hours ago | root | parent | next |

> something that wasn't practical before assisted coding

It still isn't practical if you read more code than you write, unless you also use a LLM to summarize the code for you.

kaba0 11 hours ago | root | parent | prev |

Terseness is not only about practicality. Even if AI writes it for me, it requires a huge deal more reading comprehension on my part to properly check it. E.g. an if err block every couple lines will actively hinder reading that code, and is an easy source for errors to hide (no, just because you check the value of err doesn't make it proper error handling).
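The complaint above is easy to see in a short sketch: three logical steps, each trailed by its own `if err` block. (`loadConfig`, `connect` and `query` are hypothetical stand-ins; the `fmt.Errorf("...: %w", err)` wrapping is what makes each check "proper" handling rather than a bare pass-through.)

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical steps of a typical request pipeline.
func loadConfig() (string, error) { return "cfg", nil }

func connect(cfg string) (string, error) {
	if cfg == "" {
		return "", errors.New("empty config")
	}
	return "conn:" + cfg, nil
}

func query(conn string) (string, error) { return "rows from " + conn, nil }

// run shows the pattern: every step is immediately followed by an
// if err block that wraps the error with context before returning.
func run() (string, error) {
	cfg, err := loadConfig()
	if err != nil {
		return "", fmt.Errorf("load config: %w", err)
	}
	conn, err := connect(cfg)
	if err != nil {
		return "", fmt.Errorf("connect: %w", err)
	}
	rows, err := query(conn)
	if err != nil {
		return "", fmt.Errorf("query: %w", err)
	}
	return rows, nil
}

func main() {
	out, err := run()
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Roughly half of `run` is error plumbing, which is exactly the reading overhead being described.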

moffkalast 13 hours ago | root | parent | prev |

The older the tech, the more training data should be available for it (assuming it's still being used today), ergo LLMs will be a lot better at it. They ought to be way better at C++ or JS than Golang in that sense.

WJW 10 hours ago | root | parent | next |

I don't think this is true: the older the tech, the less hype it has, and thus the fewer blog posts/example repos/etc. will be available. In particular, a boring language without a strong open source culture (COBOL, say, or Oracle SQL) will probably not have a lot of training data available despite being very much in use.

markusw 12 hours ago | root | parent | prev |

As others have said at this point, S3 hardly counts as new tech anymore.

But even if you disagree, S3 is conceptually super simple: put binary blobs, get binary blobs, delete binary blobs. I trust the blobs to be there when I need them. I don't have to think about storage size at all, maybe not even backups (depending on use case).
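That tiny conceptual surface fits in a few lines. Below is a sketch of it as a Go interface-shaped type with an in-memory map standing in for S3; the type and method names are illustrative, not any real SDK's API:

```go
package main

import (
	"errors"
	"fmt"
)

// BlobStore captures the whole conceptual model described above:
// put blobs, get blobs, delete blobs, addressed by key.
// A real S3 client exposes the same three verbs over the network.
type BlobStore struct {
	blobs map[string][]byte
}

func NewBlobStore() *BlobStore {
	return &BlobStore{blobs: make(map[string][]byte)}
}

func (s *BlobStore) Put(key string, data []byte) {
	s.blobs[key] = data
}

func (s *BlobStore) Get(key string) ([]byte, error) {
	data, ok := s.blobs[key]
	if !ok {
		return nil, errors.New("no such key: " + key)
	}
	return data, nil
}

func (s *BlobStore) Delete(key string) {
	delete(s.blobs, key)
}

func main() {
	s := NewBlobStore()
	s.Put("report.pdf", []byte("binary blob"))
	data, _ := s.Get("report.pdf")
	fmt.Println(string(data)) // binary blob
	s.Delete("report.pdf")
	_, err := s.Get("report.pdf")
	fmt.Println(err != nil) // true
}
```

Everything beyond these three verbs (durability, capacity, replication) is the part S3 takes off your plate, which is what makes it boring in the good sense.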

I still have to worry about network, but that isn't really that much different from disk access that can fail.

So yeah, I think S3 counts as boring. :)

never_inline 13 hours ago | prev | next |

The larger point is that by keeping the non-strategic parts of your tech stack boring, you can focus on your core innovative technology. In this guy's case, it's LLMs.

Which is the same point that the original "Choose Boring Technology" post makes. I don't see the contradiction.

qbonnard 11 hours ago | prev | next |

Agreed, especially for "serious" projects, where LLMs are a good approximation of an average dev and could easily increase your bus factor.

On the other hand, this could be the same slippery slope that starts at "choose a boring project because it has mature tooling" but ends at "IDE's are a language smell". If the language is so boring that you can't focus on it without an LLM doing the menial work, that could be because there is too much menial work to start with.

markusw 12 hours ago | prev | next |

Hey everyone! Author here. Just saw a lot of traffic from HN. I'm happy to answer questions. :-)

tudorizer 11 hours ago | root | parent | next |

Indirectly: choose boring tech because it's well documented, widely supported, and training data for LLMs abounds.

Do you run LLMs locally? I'd be keen to have an overview of your stack.

markusw 10 hours ago | root | parent |

I do!

In the article, I mostly mean working with LLMs inside the applications I'm building, as opposed to as a tool as part of development. But I do both.

Right now, I'm trying out Zed, which supports multiple LLMs natively. Just today, I tried Zed + Ollama + Qwen 2.5 Coder 32B running locally, and it worked! Blows my mind that I can have GPT-4o-level assistance running on my laptop. :D

bsenftner 10 hours ago | root | parent | prev |

Nobody gets your points. HN is like this far too often: people latch onto minor points and that's all the discussion is about.

I'm doing what you're talking about, 100%. I've found that established and popular FOSS web software with normal, non-technical users and multiple support forums for those users (as any popular software has) is fully known and understood by LLMs. It's very simple to put a support AI agent inside the interface of this kind of "established and non-sexy, probably used for office work" software, and because the LLM has both the user support forums and the source code itself in its training data, that AI agent can do things against the software's API and data structures on request: it knows them!

Yeah, "boring software" is no longer boring when it can be integrated with characters that know that software as well as the lead developer who wrote it, and have the knowledge of a dozen PhDs too.

markusw 10 hours ago | root | parent |

It's okay, sometimes the discussions turn interesting and I learn something. And a lot of people get to read my article, and I hope their lives are enriched in a small but meaningful way.

I'm glad the approach works for you as well! :D It's fascinating to watch a statistical document completion model be able to do so much.

thih9 12 hours ago | prev | next |

How does it work with mainstream technologies that are being actively developed?

Did anyone try working with LLMs and Swift for example? Is the AI suggesting deprecated libraries from earlier iOS versions / generally having trouble? Or is it working fine?

n_ary 11 hours ago | root | parent |

LLMs are always lagging. Even in terms of Go: I could tell someone used a copy-paste from ChatGPT this week in code review, because ChatGPT did not have the most recent Go updates. The PR created a very convoluted mass of functions, when the entire thing should have been a single call to a method that has been available since Go 1.22.

I suspect that the productivity gain we see now from LLMs will generate a corresponding amount of tech debt in time, if people with less experience (or patience) continue to build things with auto-generated code without their own research and learning.

(Edit: punctuation and disambiguation)

Gooblebrai 11 hours ago | prev | next |

I guess Go is quite fine for his use cases, but I suppose if he suddenly had to delve into more frontend-focused apps or AI development, he would be forced to use JavaScript and Python.

markusw 4 hours ago | root | parent |

I build my frontends in HTML using gomponents. The jury's still out on how far I can get with just Go while incorporating LLMs into my apps, but I'm optimistic!

Havoc 13 hours ago | prev | next |

I’d say it’s more about being deliberate in how much new tech you incorporate into new projects. One small and ideally isolated new piece in each project is a good idea, or else you’ll stagnate.

Cthulhu_ 13 hours ago | root | parent |

Meanwhile, I'm in a brand new project where it's layers (nodejs) upon layers (pnpm) upon layers (nx), and that's just the project setup / runtime.

Havoc 11 hours ago | root | parent |

Yeah, can’t say I’m a fan of the Node ecosystem in general, but it’s obviously unavoidable sometimes.

eichi 13 hours ago | prev | next |

And there are the boring areas of the enterprise or professional world, with boring classical software engineering techniques, where LLMs can be applied. I see a lot of potential.

threeseed 13 hours ago | prev | next |

Here we go again.

Choose boring technology aka pick the technologies I think are the best.

devjab 12 hours ago | root | parent | next |

I think "boring" is mostly bad terminology for "stable" in these sorts of arguments, and stable means you get to actually focus on doing work. This author is apparently a fan of Go as an example of what they probably consider a "boring" technology, and from the article it seems they came from Python. I think that speaks volumes about how they work with technologies that don't change very often, because those are designed to be simple to use. Well, maybe not Python if you allow it to go crazy, but if you actually know the difference between looping over a list and using a generator, you probably write some very boring Python.

I think it’s similar to why a lot of places use Ruby on Rails, Django and the likes. They work, and they work well. It’s why people use Debian. It’s to lead relatively uneventful lives as far as the underlying tech goes, which is “boring”.

In my experience you'll find far less attachment to specific technologies in this crowd of people. They like their stable tools, but they won't preach to you about a specific tool; it's more that choosing something simple to work with, because it just works, is nice. We do it in practice: we use C, and we didn't get on board with Rust. We use a little Zig, but only where it's completely interoperable with C. In 5-10 years, when Rust or Zig are more mature (and boring), we may pick them up again. Well, I probably won't be here by then, but I'm sure you know what I mean. Until they get boring, however, we're just going to use C. I don't think C is better than Rust, and I don't think you should necessarily copy us, but for us C is "boring" and Rust isn't.

markusw 12 hours ago | root | parent | prev |

Party pooper. ;)

neolefty 12 hours ago | root | parent |

Your secret is out. Hopefully it's everybody's secret, and we just have different preferences. But then "use technologies that you enjoy, and add LLMs" is a less grabby headline.

markusw 12 hours ago | root | parent |

My point isn’t about these particular technologies. Pick any you like: just maybe not the flashy new ones, but the ones that make you the most productive and don’t change a lot. And then add the fancy new LLMs.

sna1l 13 hours ago | prev | next |

is this satire?

markusw 12 hours ago | root | parent | prev | next |

In what way are you in doubt whether or not it's satire?

neolefty 12 hours ago | root | parent | prev |

Thank you for writing it! LLMs have an "old" feeling to me because they're based on human language. I totally felt your blog post.

markusw 12 hours ago | root | parent |

I think they feel like pure science fiction magic. I have multiple alien brains installed locally on my laptop.