I suppose the counterargument is, how many experienced programmers today have seen a register or a JMP instruction being used?


Quite a lot of the good programmers I have worked with may never have needed to write assembly, but are also not at all confused or daunted by it. They are curious about their abstractions, and have a strong grasp of what is going on beneath the curtain even if they don't have to lift it all that often.

Most of the people I work with, however, just understand the framework they are writing and display very little understanding or even curiosity as to what is going on beneath the first layer of abstraction. Typically this leaves them high and dry when debugging errors.

Anecdotally, I see a lot more people with shallow expertise believing the AI hype.


The difference is that the abstraction provided by compilers is much more robust. Not perfect: sometimes programmers legitimately need to drop into assembly to do various things. But those instances have been rare for decades and to a first approximation do not exist for the vast majority of enterprise code.
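To be concrete, here's a sketch of the kind of rare drop-down that's left (hypothetical helper; GCC inline asm on x86-64, reading a hardware counter that plain C doesn't expose):

    /* Hypothetical: read the x86-64 time-stamp counter via GCC
       inline asm -- one of the few cases where the abstraction
       still has a documented escape hatch. */
    static inline unsigned long long rdtsc(void) {
        unsigned int lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((unsigned long long)hi << 32) | lo;
    }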

If AI gets to that level we will indeed have a sea change. But I think the current models, at least as far as I've seen, leave open to question whether they'll ever get there or not.


It's pretty common for CS programs to include at least one course with assembly programming. I did a whole class programming controllers in MIPS.


I would assume at least the ones that did a formal CS degree would know JMP exists.


Your compiler does not hallucinate registers or JMP instructions


Doesn't it? Many compilers offer all sorts of novel optimizations that end up producing the same result with entirely different runtime characteristics than the source code would imply. Going further, turn on GCC's fast-math mode and your code with no undefined behavior suddenly has undefined behavior.
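A sketch of the fast-math point (hypothetical file; exact behavior varies by GCC version and target):

    /* gcc -O2 fastmath.c             -> prints "NaN"
       gcc -O2 -ffast-math fastmath.c -> may print "finite":
       -ffast-math implies -ffinite-math-only, letting the compiler
       assume NaN never occurs and fold the isnan() check away. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        volatile double zero = 0.0;  /* volatile blocks constant folding */
        double x = zero / zero;      /* NaN under IEEE 754 */
        printf("%s\n", isnan(x) ? "NaN" : "finite");
        return 0;
    }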

I'm not much of a user of LLMs for generating code myself, but this particular analogy isn't a great fit. The one redeeming quality is that compiler output is deterministic or at least repeatable, whereas LLMs have some randomness thrown in intentionally.

With that said, both can give you unexpected behavior, just in different ways.


> With that said, both can give you unexpected behavior, just in different ways.

Unexpected as in "I didn't know" is different from unexpected as in "I can't predict". GCC optimizations are in the former camp: if you care to know, you just need to do a deep dive into your CPU architecture and the GCC docs and codebase. LLMs are a true shot in the dark, with a high chance of a miss and a slightly lower chance of friendly fire.


And every single developer is supposed to memorize GCC's optimizations so they never make a change that will optimize wrong?

Nah, treating undefined behavior as predictable is a fool's errand. It's also a shot in the dark.


What is that about memorization? I just need to know where the information is so I can refer to it later when I need it (and possibly archive it if it's that important).


If you're not trying to memorize the entirety of GCC's behavior (and keeping up with its updates), then you need to check if your UB is still doing what you expect every single time you change your function. Or other functions near it. Or anything that gets linked to it.
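Sanitizers make that re-checking cheaper, but only for UB your tests actually execute (a sketch; -fsanitize=undefined is a real GCC/Clang flag, the file name is made up):

    /* bounds.c
       Build: gcc -O1 -fsanitize=undefined bounds.c && ./a.out
       UBSan reports the out-of-bounds index at runtime -- but only
       on inputs that actually reach it, hence "spot check". */
    int main(int argc, char **argv) {
        int a[4] = {0, 1, 2, 3};
        (void)argv;
        return a[argc + 3];   /* out of bounds whenever argc >= 1 */
    }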

It's effectively impossible to rely on. Checking at the time of coding, or occasionally spot checking, still leaves you at massive risk of bugs or security flaws. It falls under "I can't predict".


In C, strings are just plain arrays of char, which decay to raw pointers. There are no safeguards there like we have in Java, so we need to write the guardrails ourselves, because failure to do so results in errors. If you didn't know about it, a buffer overflow may be unexpected, but you don't need to go and memorize the entire gcc codebase to know about it. Just knowing the semantics is enough.
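A minimal sketch of the kind of missing guardrail I mean:

    #include <string.h>

    int main(void) {
        char buf[8];
        /* No bounds check anywhere: writing 27 bytes (26 letters plus
           the NUL terminator) into an 8-byte buffer is undefined
           behavior; the language simply trusts us. */
        strcpy(buf, "abcdefghijklmnopqrstuvwxyz");
        return 0;
    }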

The same thing happens with optimization. The docs usually warn about the gotchas, and it's easy to check whether the errors will bother you or not. You don't have to make an exhaustive list of errors when the classes are neatly defined.


This comment seems to mostly be describing avoiding undefined behavior. You can learn the rules to do that, though it's very hard to completely avoid UB mistakes.

But I'm talking about code that has undefined behavior. If there is any at all, you can't reliably learn what optimizations will happen or not, what will break your code or not. And you can't check for incorrect optimization in any meaningful way, because the result can change at any point in the future for the tiniest of reasons. You can try to avoid this situation, but again it's very hard to write code that has exactly zero undefined behavior.
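The canonical example (a sketch; GCC at -O2 commonly does exactly this, though nothing guarantees it keeps doing so):

    #include <limits.h>
    #include <stdio.h>

    /* Intended as an overflow check, but signed overflow is UB, so
       the optimizer may assume x + 1 > x always holds and fold the
       whole function to "return 0;". */
    int will_overflow(int x) {
        return x + 1 < x;
    }

    int main(void) {
        printf("%d\n", will_overflow(INT_MAX));   /* often prints 0 */
        return 0;
    }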

When you talked about doing "a deep dive in your CPU architecture and the gcc docs and codebase", that is only necessary if you do have undefined behavior and you're trying to figure out what actually happens. But it's a waste of effort here. If you don't have UB, you don't need to do that. If you do have UB it's not enough, not nearly enough. It's useful for debugging but it won't predict whether your code is safe into the future.

To put it another way, if we're looking at optimizations listing gotchas, when there's UB it's like half the optimizations in the entire compiler are annotated with "this could break badly if there's UB". You can't predict it.


I suppose you are talking about UB? I don't think that is anything like hallucination. It's just tradeoffs being made (speed vs. the instructions as specified) with more ambiguity (UB) than one might want. Fast math is basically the same idea: you should probably never turn it on unless you are willing to trade accuracy for speed and accept a bunch of new UB that your libraries may never have been designed for. It's not like the compiler is making up new instructions the hardware doesn't support, or claiming the behavior of an instruction is different from how it's documented. If it ever did anything like that, it would be a bug, and it would be fixed.


> or claiming the behavior of an instruction is different than documented

When talking about undefined behavior, the only documentation is a shrug emoticon. If you want a working program, arbitrary undocumented behavior is just as bad as incorrectly documented behavior.


UB is not undocumented. It is documented to not be defined. In fact any given hardware reacts deterministically in the majority of UB cases, but compilers are allowed to assume UB was not possible for the purposes of optimization.
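For instance (a sketch with a hypothetical helper; GCC and Clang both commonly do this at -O2):

    /* Because *p executes first, the compiler may assume p != NULL
       from that point on and delete the "dead" check below. */
    int first_or_minus_one(int *p) {
        int v = *p;          /* UB if p == NULL */
        if (p == NULL)       /* provably unreachable under that assumption */
            return -1;
        return v;
    }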


The trigger for UB is documented, the result is not documented.

And despite the triggers being documented, they're very very hard to avoid completely.


I bet they did at one point in time; then they stopped doing that, but they're still not bug-free.


lol, are you serious? I bet compilers are less deterministic now than before, what with all the CPUs and their speculative execution and who knows what else. But all that stuff is still documented, not made up out of thin air randomly…


Agree. We'll get a new breed of programmer: not shitty ones, just different. And I am quite sure that at some point in their career they'll drop down to some lower level and try to do things manually... or step through the code and figure out a clever way to tighten it up...

Or if I'm wrong about the last bit, maybe it never was important.


Counter-counterargument: you don't need to understand metalworking to use a hammer or nails. That's a different trade, though an important one that someone else does need to understand in order for you to do your job.

If all of mankind lost all understanding of registers overnight, it'd still affect modern programming (eventually).


Anyone that's gotten a CS degree or looked at godbolt output.


Not really a counter-argument.

The abstraction over assembly language is solid; compilers very rarely (if ever) fail to translate high-level code into correct assembly.

LLMs are nowhere near the level where you can have almost 100% assurance that they do what you want and expect, even with a lot of hand-holding. They are not even a leaky abstraction; they are an "abstraction" with gaping holes.


Registers: All the time for embedded. JMP instruction? No idea what that is!


Probably more than you might think.

As a teen I used to play around with Core Wars, and my high school taught 8086 assembly. I think I got a decent grasp of it, enough to implement quicksort in 8086 while sitting through a very boring class, and test it in the simulator later.

I mean, probably few people ever need to use it for something serious, but that doesn't mean they don't understand it.



