> I suspect that the majority of the people who claim that these tools are making them more productive are simply skipping these tasks altogether

I think that's probably true, but I think there are multiple layers here.

There's what's commonly called vibe coding, where you don't even look at the code.

Then there's what I'd call augmented coding, where you generate a good chunk of the code, but still refactor and generally try to understand it.

And then there's understanding every line of it. For this in particular, I don't believe LLMs speed things up. You can get the LLM to _explain_ every line to you, but what I mean is looking at documentation and specs to build your understanding and testing out fine-grained changes to confirm it. This is something you naturally do while writing code yourself, and unless you type comically slowly, I'm not convinced it isn't faster this way around. There's a very tight feedback loop when you are writing and testing code atomically. In my experience, this prevents an unreasonable number of emergencies and makes debugging orders of magnitude faster.
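
To make "atomic" concrete, a hypothetical Python sketch (the function and test are invented for illustration): one small change, paired with the focused check that confirms it before moving on.

    # One fine-grained change: the boundary comparison was off by one.
    def is_adult(age: int) -> bool:
        return age >= 18  # was: age > 18

    # A single focused test, run immediately after the change.
    def test_is_adult_boundary():
        assert is_adult(18) is True
        assert is_adult(17) is False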

I'd say the bulk of my work is either in the second or the third bucket, depending on whether it's production code, the risks involved, etc.

These categories existed before LLMs. Maybe the first two are cheaper now, but I've seen a lot of code bases that fall into them - copy-pasting from examples and Stack Overflow. That is, ultimately, what LLMs speed up. And I think it's OK for some software to fall into these categories. Maybe we'll see too much fall into them for a while. I think eventually, the incredibly long feedback cycles of business decisions will bite and correct this. If our industry really flies off the handle, we tend to have a nice software crisis and sort it out.

I'm optimistic that, whatever we land on eventually, generative AI will have reasonable applications in software development. I personally already see some.


There is also the situation in which the developer knows the tools by heart and has ownership of the codebase, hence intuitively knows exactly what has to be changed and only needs to take action.

These devs don't get any value whatsoever from LLMs, because explaining it to the LLM takes longer than doing it themselves.

Personally, I feel like everything besides actual vibe coding, plus maybe sanity-checking via a quick glance, is a bad LLM application at this point in time.

You're just inviting tech debt if you actually expect this code to be manually adjusted at a later phase. Normally, code tells a story. You should be able to understand the thought process of the developer while reading it - and if you can't, there is an issue. This pattern doesn't hold up for generated code, even if it works. If an issue pops up later, you'll just be scratching your head over what this was meant to do.

And just to be clear: I don't think vibe coding is ready for current enterprise environments either - though I strongly suspect it's going to decimate our industry once tooling and development practices for this have been pioneered. The current models are already insanely good at coding if provided the correct context and prompt.

E.g. extensive docs on each method defining its use cases, forcing the LLM to backtrack through the code paths before a change to automatically catch regressions, etc. Current vibe coding is basically like the original definition of a hacker: a person creating furniture with an axe. It basically works, kinda.
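
To give a rough idea of what I mean by docs defining use cases, a hypothetical Python sketch (the function, promo code, and rules are all made up for illustration):

    def apply_discount(order_total: float, code: str) -> float:
        """Apply a promo code to an order total.

        Use cases downstream code relies on:
        - "WELCOME10" on a 100.00 order returns 90.00.
        - An unknown code returns the total unchanged and never raises,
          because checkout must not fail on a bad code.
        - The result never goes below 0.00.
        """
        discounts = {"WELCOME10": 0.10}
        rate = discounts.get(code, 0.0)
        return max(order_total * (1.0 - rate), 0.0)

With that kind of context on every method, the model has something concrete to check a change against.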


> These devs don't get any value whatsoever from LLMs, because explaining it to the LLM takes longer than doing it themselves.

I feel like people are maybe underestimating the value of LLMs for some tasks. There's a lot of stuff where I know how to do it, but I can't remember the parameter order or the exact method name, and the LLM absolutely knows. And I really get nothing out of trying to remember/look up the exact way to do something. Even when I do know, it often doesn't hurt to be like "can you give me a loop to replace all the occurrences of foo with bar in this array of strings", and I don't need to remember if it's string.replace(foo,bar), whether I need to use double or single quotes, if it's actually sub or gsub or whatever.
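
For concreteness, the kind of throwaway snippet I mean, here in Python (the list is obviously made up):

    strings = ["foobar", "a foo b", "no match"]
    # Replace every occurrence of "foo" with "bar" in each string.
    replaced = [s.replace("foo", "bar") for s in strings]
    print(replaced)  # ['barbar', 'a bar b', 'no match']

Whether it's replace, sub, or gsub in the language at hand is exactly the detail I'm happy to offload.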

There are lots of tiny sub-problems that are totally inconsequential and that an LLM can do for me, and I don't think I lose anything here. In fact, maybe I take a little longer: I chat with the LLM about idioms a bit, and my code ends up more idiomatic and more maintainable.

It kind of calls to mind something Steve Jobs said about how hotkeys are actually worse than using a mouse: keyboard users aren't faster, they just think they are. Using LLMs for these sorts of things feels similar. Like keyboard shortcuts, maybe it takes longer, but I can rely on muscle memory, I don't have to break flow, and I can keep focusing on something else.

Asking the LLM for these sorts of trivial problems means I don't have to break flow, I can stay focused on the high-level problem.


> There's a lot of stuff where I know how to do it, but I can't remember the parameter order or the exact method name, and the LLM absolutely knows. And I really get nothing out of trying to remember/look up the exact way to do something. Even when I do know, it often doesn't hurt to be like "can you give me a loop to replace all the occurrences of foo with bar in this array of strings", and I don't need to remember if it's string.replace(foo,bar), whether I need to use double or single quotes, if it's actually sub or gsub or whatever.

I mean, I kinda get it in more complicated contexts, but the particular examples you describe (not remembering method names and/or parameter orderings) have been solved for ages by any decent IDE.


If you are writing code to solve a one-off task, the first category is OK.

What boggles my mind is that people are writing code that's the foundation of products like that.

Maybe it’s imposter syndrome though to think it wasn’t already being done before the rise of LLMs.


Developers have always loved the new and shiny. Heck, getting developers not to rewrite an application in their new favorite framework is a tough sell.

LLM “vibe coding” is another continuation of this “new hotness”, and while the more seasoned developers may have learned to avoid it, that’s not the majority view.

CEOs and C-suites have always been disconnected from the first-order effects of their cost-cutting edicts, and vibe coding is no different in that regard. They see the ten dollars an hour they spend on LLMs as a bargain if they can hire a $30-an-hour junior programmer instead of a $150-an-hour senior programmer.

They will continue to pursue cost-cutting, and the advent of vibe coding matches exactly what they care about: software produced for a fraction of the cost.

Our problem, or rather the problem of the professionals, is that we have not been successful in translating the inherent problems with the CEOs' approach into a change in how the C-suite operates. We have not successfully persuaded them that higher quality software = more sales, or lower liability, or lower maintenance costs, and that's partially because we as an industry have eschewed those for "move fast and break things". Vibe coding is "Move Fast and Break Things" writ large.


> Heck, getting developers not to rewrite an application in their new favorite framework is a tough sell.

This depends a lot on the "programming culture" the respective developers come from. For example, in the department where I work (in some conservative industry), it would rather be a tough sell to use a new, shiny framework, because the existing ("boring") technologies we use are a good fit for the work that needs to be done and for the knowledge that exists in the team.

I rather have the feeling that the culture around web development in particular (both the client- and server-side parts) is very prone to this phenomenon.


In my personal experience, web development teams don't really have much to do, so they create work for themselves.

I agree.

The Venn diagram of the programming cultures of the companies that embrace vibe coding and of the companies whose developers like to rewrite applications when a new framework comes out is almost a perfect circle, however.


In my experience, it was. And if we're getting real for a moment, the vast majority of programmers get paid by a company that is, first and foremost, interested in making more money. IMHO, all technical decisions are business decisions in disguise.

Can the business afford to ship something that fails for 5% of their users? Can they afford to find out before they ship it or only after? What risks do they want to take? All business decisions. In my CTO jobs and fractional CTO work, I always focused on exposing these to the CEO. Never a "no", always a "here's what I think our options and their risks and consequences are".

If sound business decisions lead to vibe coding, then there's nothing wrong with it. It's not wrong to lose a bet where you understood the odds.

And don't worry about businesses that make uninformed bets. They can get lucky, but by and large, they will not survive against those making better-informed bets. Law of averages. It just takes a while.


I agree with your sentiment, but not with the conclusion.

Sure, technical decisions ultimately depend on a cost-benefit analysis, but the companies that follow this mentality will cut corners at every opportunity, build poor-quality products, and defraud their customers. The unfortunate reality is that in startup culture, "move fast and break things" is the accepted motto. Companies can be quickly started on empty promises to attract investors, they can coast for months or years on hype and broken products, and when the company fails, they can rebrand or pivot and do it all over again.

So making uninformed bets can still be profitable. This law of averages you mention just doesn't matter. There will always be those looking to turn a quick buck, and those who are in it for the long haul, and actually care about their product and customers. LLMs are more appealing to the former group. It's up to each software developer to choose the companies they wish to support and be associated with.


Tech and product are just small components in what makes the business profitable. And often not as central as we in our industry might _like_ to believe. From my perspective, building software is the easy, the fun part. Many bets made have nothing to do with the software.

And yes, there is enshittification, and there are immoral actors. The market doesn't solve these problems; if anything, it causes them.

What can solve them? I have only two ideas:

1. Regulation. To a large degree this stops some of the worst behaviour of companies, but the reality in most countries I can think of is that it's too slow and too corrupt (not necessarily by accepting bribes, but also by wanting to be "an AI hub" or stuff like that) to be truly effective.

2. Professional ethics. This appears to work reasonably well in medicine and some other fields, but I have little hope that our field is going to make strides here any time soon. People who have professional ethics either learn to switch them off selectively or burn out. If you're a shady company, as long as you have money, you will find competent developers. If you're not a shady company, you're playing with a handicap.

It's not all so black and white for sure, so I agree with you that there's _some_ power in choosing who to work for. They'll always find talent if they pay enough, but no need to make it all too easy for them.


To play devil's advocate for a second, the law of averages says nobody should ever found a startup. Or any business, for that matter.

It’s rare that startups gain traction because they have the highest-quality product rather than because they have the best ability to package, position, and market it while scaling all the other things needed to run a company.

They might get acqui-hired for that reason, but rarely do they stand the test of time. And when they do, it's almost always because the founders stepped aside and let the suits run all or most of the show.


> Maybe it’s imposter syndrome though to think it wasn’t already being done before the rise of LLMs

It may well have been happening before the rise of LLMs, but the volume was a lot more manageable.

Now it's an unrestricted firehose of crap, and there just aren't enough good devs to wrangle it.


It would be interesting to look at the real-world impact of the rise of outsourcing coding to the cheapest, lowest-skilled overseas body shops en masse around the 2000s. Or the impact of trash versions of commodified products flooding Amazon.

The volume here is orders of magnitude greater, but that’s the closest example I can think of.


> It would be interesting to look at the real-world impact of the rise of outsourcing coding to the cheapest, lowest-skilled overseas body shops en masse around the 2000s.

Tech exec here. It is all about gamed metrics. If the board-observed metric is mean salary per tech employee, you'll get masses of people hired in India. In our case, we hire thousands in India. Only about 20% are productive, but % productive isn't the metric, so no one cares. You throw bodies at the problem and hope someone solves it. It's great for generations of overseas workers, many of whom may not have had a job otherwise. You probably have dozens of Soham Parekhs.

Western execs also like this because it inflates headcount, which is usually what exec comp is based on: "I run a team of 150...". Their lieutenants also like it because they can say "I run a team of 30", as do their sub-lieutenants: "I run a team of 6".


I'm a fractional RevOps consultant for a company, about 20 hours a week. They spend more on those 20 hours than they would if they filled the position full time, but they'd rather have it this way because it shows up on a different line item and fits their narrative of slashing headcount. I expect we'll see a lot more of this, particularly as everyone races to become the next "single-person unicorn startup."


