Working on a Programming Language in the Age of LLMs (ryelang.org)
43 points by todsacerdoti | 21 comments




If everyone is using LLMs to write new code, and LLMs are trained on existing code from the internet, that creates an enormous barrier to the adoption of new programming languages: no new code will be written in them, so LLMs will never learn to write them. It is a self-reinforcing cycle.

I've experienced this to some degree already in using LLMs to write Zig code (ironically, for my own pet programming language). Because Zig is still evolving so quickly, often the code the LLM produces is wrong because it's based on examples targeting incompatible prior versions of the language. Alternatively, if you ask an LLM to try to write code for a more esoteric language (e.g., Faust), the results are generally pretty terrible.


Fine-tuning existing base models on your programming language is pretty practical. [1] You might need a very good and large dataset, but that's hardly a problem for a programming language you're creating, because you'd better have the ability to generate programs for fuzzing your compiler anyway.

[1] There are a lot of models that achieve this. E.g. Goedel-Prover-V2-32B [2] is a model based on Qwen3-32B and fine-tuned on Lean proofs. It works extremely well. I personally tried further fine-tuning this model on Agda, and although my dataset was pretty sloppy and small, it was fairly successful. If you actually sit down and generate a large dataset with variety, it's quite feasible to fine-tune for any similar programming language.

[2] https://huggingface.co/Goedel-LM/Goedel-Prover-V2-32B
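
To make the dataset idea concrete: if you already have a random-program generator for fuzzing your compiler, the same generator can emit (prompt, program) pairs for supervised fine-tuning. A minimal sketch in Python, where generate_random_program and describe are hypothetical stand-ins for your own tooling:

    import json
    import random

    def generate_random_program(seed: int) -> str:
        """Hypothetical stand-in for a fuzzer's program generator.
        In practice this would emit a random well-formed program
        in your language."""
        random.seed(seed)
        n = random.randint(1, 9)
        return f"print {n} + {n}"  # trivial placeholder programs

    def describe(program: str) -> str:
        """Hypothetical stand-in for deriving a natural-language prompt
        from a program, e.g. from a template or from its output."""
        return f"Write a program that does the following: {program!r}"

    # Emit JSONL in the common {"prompt": ..., "completion": ...} shape
    # that most supervised fine-tuning toolchains can ingest.
    with open("finetune_dataset.jsonl", "w") as f:
        for seed in range(10_000):
            prog = generate_random_program(seed)
            f.write(json.dumps({"prompt": describe(prog),
                                "completion": prog}) + "\n")

The hard part, as the parent notes, is variety: a generator that only emits one shape of program teaches the model only one idiom.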


> enormous barrier to the adoption of new programming languages: no new code will be written in them, so LLMs will never learn to write them

Let’s see.

I’ve vibe-coded some apps with TypeScript and React, not knowing React at all, because I figured it was the most heavily exemplified framework online.

But I came to a point where my app was too buggy and had diverged too far, and, being unable to debug it, I ported it to Vue, since I personally know it better.

My point is that more training data doesn’t necessarily mean excellent quality; I ended up with a mixture of conflicting idioms that seasoned React developers would have frowned upon.

Picking a less exemplified language and supplementing with more of your own knowledge of it might yield better results. E.g. while the AI can’t write Rust as well on its own, I don’t mind contributing Rust code myself more often.


> But I came to a point where my app was too buggy and had diverged too far, and, being unable to debug it, I ported it to Vue, since I personally know it better.

One of the many pitfalls of using an LLM to write code: it's very easy to find yourself with a codebase you know nothing about and can't progress any further, because it keeps breaking.


It was an interesting experiment, working with very little understanding of the generated code.

I could learn about React and understand the large-scale incongruities / mismatched choices the LLM made for me.

But I already have one reactive framework in my wetware that I can have an educated opinion on.


What if we required an LLM to write everything in Brainf**? If the language design is small enough to insert into our message every time, maybe it could work well.
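
For scale: Brainf**'s entire semantics really does fit in a screenful, so the full "language design" would cost almost nothing in context. A minimal interpreter sketch in Python (illustration only) to make the point:

    def brainfuck(src: str, stdin: str = "") -> str:
        """Minimal Brainf** interpreter: 8 commands, one tape, one pointer."""
        # Precompute matching bracket positions for [ and ].
        jumps, stack = {}, []
        for i, c in enumerate(src):
            if c == "[":
                stack.append(i)
            elif c == "]":
                j = stack.pop()
                jumps[i], jumps[j] = j, i

        tape, ptr, pc, inp, out = [0] * 30000, 0, 0, iter(stdin), []
        while pc < len(src):
            c = src[pc]
            if c == ">": ptr += 1
            elif c == "<": ptr -= 1
            elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".": out.append(chr(tape[ptr]))
            elif c == ",": tape[ptr] = ord(next(inp, "\0"))
            elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
            elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
            pc += 1
        return "".join(out)

    # Eight eights plus one is 65, i.e. "A".
    print(brainfuck("++++++++[>++++++++<-]>+."))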

Let's not underestimate LLMs' ability to do in-context learning. Perhaps they can just read the new language's docs and apply what they already know from other languages.

But didn't LLMs read all the math books, and they still can't really do arithmetic (they need special modes / hacks / Python to do it, I think)?

So why would they be able to "read" the docs and use that knowledge beyond the pattern-matching level? That's why I also assume that tons of examples with results would do better than language docs, but I haven't tested it yet.
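
That assumption is cheap to test: build two prompts for the same task, one carrying the docs and one carrying worked examples with their results, and see which yields programs that actually run. A rough sketch in Python; ask_llm is a hypothetical stub, and the placeholder strings are where your docs and examples would go:

    def ask_llm(prompt: str) -> str:
        """Hypothetical stub standing in for whatever LLM client you use."""
        return "<model output for: " + prompt[:40] + "...>"

    TASK = "Write a program in the new language that sums the squares of a list."

    # Variant A: condition on the documentation.
    docs_prompt = (
        "Here is the language documentation:\n<paste the docs here>\n\n"
        f"Task: {TASK}"
    )

    # Variant B: condition on worked examples plus their results,
    # teaching by pattern rather than by description.
    EXAMPLES = [
        ("<program 1 in the new language>", "<its output>"),
        ("<program 2 in the new language>", "<its output>"),
    ]
    examples_prompt = (
        "Here are example programs and what they evaluate to:\n"
        + "\n".join(f"code: {src}\nresult: {res}" for src, res in EXAMPLES)
        + f"\n\nTask: {TASK}"
    )

    # Run both against the same model and compare which outputs run.
    for prompt in (docs_prompt, examples_prompt):
        print(ask_llm(prompt))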


The LLMs can easily be trained to teach people how to use new programming languages.

But will people be interested in learning a new language? :)

There are interesting ideas out there in the landscape of PLD and LLMs, mostly centred around query languages.

https://lmql.ai/

https://github.com/paralleldrive/sudolang-llm-support

https://ben.terhech.de/posts/2025-01-31-llms-vs-programming-... — take-away: output languages like Python and TypeScript fare better, as I’d expect.

Maybe the blog post implies: why make a language the LLMs have zero examples of, and thus can’t synthesize?

I’d still make a language for the heck of it, because programming as a recreational human activity is great.


It'd be interesting to look at some of the stuff Alan Kay talked about. With STEPS he was working on some interesting notions that might actually help here. The entire work they were doing there was effectively based around creating DSLs for whatever problem area they were working on at the time, such as the GUI; hell, IIRC they implemented TCP by writing a DSL that was able to read the ASCII diagrams and tables in the TCP RFC and use those to implement the packet handling.

I googled Alan Kay STEPS and got to what seems to be a very interesting PDF.

I will read it. Just to be certain about "might actually help here": what is "here"? Do you mean Rye language design generally, LLMs relating to new languages, or something else? :)


I was specifically thinking about integration with LLMs. I feel like if we're able to really get the small, problem-domain-specific DSL stuff right, we can divide the difficulty of a problem into multiple smaller issues. In my experience with LLMs so far, the major issue by far is keeping enough in a small context that the model reliably 'knows' what it needs to. If you can task it to first create a DSL for a problem domain and then express the solution in that DSL, it might really simplify the problem (rough sketch below).
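
A rough sketch of that two-step shape in Python, under heavy assumptions: ask_llm is a hypothetical stub, and the domain and task are made up for illustration:

    def ask_llm(prompt: str) -> str:
        """Hypothetical stub standing in for an LLM client."""
        return "<model output>"

    DOMAIN = "scheduling recurring jobs with dependencies"

    # Step 1: have the model design a small DSL (plus an interpreter)
    # for the problem domain. The spec stays small enough to fit in context.
    dsl_spec = ask_llm(
        f"Design a minimal DSL for {DOMAIN}. "
        "Output (1) its grammar and (2) a short interpreter for it in Python."
    )

    # Step 2: in a fresh conversation where the DSL spec is the ONLY
    # context, express the actual solution in that DSL.
    solution = ask_llm(
        f"Here is a DSL:\n{dsl_spec}\n\n"
        "Using only this DSL, write a program that runs a nightly backup "
        "after the database-export job has finished."
    )
    print(solution)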

In general I feel like there's some great applicability here in this specific language. The language docs imply a certain degree of homoiconicity, which I think would be really helpful for DSLs like this...


@AllegedAlec Rye is fully runtime homoiconic. Rebol placed great emphasis on DSLs (dialects), and Rye has them too (validation, math, ...), but it tries to be more conservative with them, because the main Rye DSL should be quite flexible.

Instead of DSLs, Rye focuses much more on constructing specialized, limited, and isolated contexts (scopes) that have just the functions you need, or just the custom functions you need, while the evaluation mechanism doesn't change (it's the one you, or the LLM, already know).

I haven't thought about contexts + LLMs yet. I will read the PDF you referenced with interest! Here is a little more info about contexts: https://ryelang.org/meet_rye/specifics/context/
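
As a rough analogy in Python (not actual Rye code, and not a security sandbox), the idea of a limited context looks like this: evaluate code against a namespace that exposes exactly the functions you want and nothing else:

    def make_context(**allowed):
        """Build an isolated evaluation namespace exposing only `allowed`."""
        # An empty __builtins__ strips Python's default globals, so the
        # evaluated code can reach nothing beyond what we hand it.
        return {"__builtins__": {}, **allowed}

    math_ctx = make_context(add=lambda a, b: a + b,
                            mul=lambda a, b: a * b)

    print(eval("add(mul(2, 3), 4)", math_ctx))   # -> 10
    # eval("open('x')", math_ctx) would raise NameError: open is hidden.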


I saw contexts mentioned briefly, but I'll look into them. Sounds interesting!

see also “Notational Intelligence” https://thesephist.com/posts/notation/

“The value of notation lies in how it enables us to work with new abstractions. With more powerful notation, we can work with ideas that would have been too complex or unwieldy without it. Equipped with better notation, we might think of solutions or hypotheses that would have been previously unthinkable. Without Arabic numerals, we don’t have long division. Without chess notation, the best strategies and openings may not have been played. Without a notation for juggling patterns called Siteswap, many new juggling patterns wouldn’t have been invented. I think notation should be judged by its ability to contribute to and represent previously unthinkable, un-expressible thoughts.”

This is pretty much the whole point of programming languages imo


Cool, this explains the idea of code as a tool for thinking really well. Haven't read the whole post yet.

The biggest implication of LLMs and programming is this:

LLMs are autoencoders for sequences. If an LLM can write the code, the entropy of that code is low. We know that already: most human communication is low entropy. But LLMs being good at it implies there is a more efficient structure we could be using. All the embeddings are artifacts of structure, but the ANN model as a whole obfuscates the structures it encodes.

Clearly there are better programming languages, ones that fit our actual intents more closely, than the existing ones. The LLM will never show them to us; we need to go make/find them ourselves.
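
A crude way to make the entropy claim tangible: compression ratio is a rough stand-in for entropy, so text with more exploitable structure costs fewer bits per character. A sketch in Python using zlib as the (admittedly weak) model:

    import random
    import string
    import zlib

    def bits_per_char(text: str) -> float:
        """Crude entropy estimate: compressed bits over original chars."""
        raw = text.encode()
        return 8 * len(zlib.compress(raw, 9)) / len(raw)

    # Idiomatic, predictable code: lots of shared structure to exploit.
    code = "\n".join(f"def get_{n}(self):\n    return self._{n}\n"
                     for n in ("name", "size", "color", "width", "height",
                               "depth", "owner", "label", "index", "value"))

    # Same length, but near-uniform random text: little structure.
    rng = random.Random(0)
    noise = "".join(rng.choice(string.printable) for _ in range(len(code)))

    print(f"code:  {bits_per_char(code):.2f} bits/char")
    print(f"noise: {bits_per_char(noise):.2f} bits/char")

An LLM is a far stronger compressor than zlib, which is the parent's point: if the model can predict the code, the code itself carries little information.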


Yes, in some sense JavaScript is the pinnacle of programming language design: it's so resilient to chaos that even stochastic parrots can write it with some success.

It's like the absolute minimal threshold of demands for sloppy code to work without immediately falling apart.


Thinking of something like APL or J?


