
Typically, debugging something like a tricky race condition in an unfamiliar code base would require adding logging, refactoring library calls, inspecting existing logs, and even rewriting parts of the program to make it more modular or understandable. This is part of the theory-building.
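
A minimal sketch (hypothetical, not from any particular code base) of the kind of bug being described: a lost-update race on a shared counter, with the sort of logging you end up adding while hunting it down.

    import logging
    import threading

    logging.basicConfig(level=logging.WARNING, format="%(threadName)s %(message)s")
    log = logging.getLogger(__name__)

    counter = 0

    def increment(n):
        global counter
        for _ in range(n):
            old = counter                      # read
            log.debug("read counter=%d", old)  # logging added while debugging; also a point where threads can interleave
            counter = old + 1                  # write: increments made by other threads in between are lost

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # expected 400000, typically far less; wrapping the read and write in a threading.Lock fixes it

Flipping the level to DEBUG shows the interleaving directly; working that out yourself is where the understanding comes from.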

When you have an AI that says "here is the race condition and here is the code change to make to fix it", that might be "faster" in the immediate sense, but it means you aren't understanding the program better or making it easier for anyone else to understand. There is also the question of whether this process is sustainable: does an AI-edited program eventually fall so far outside what is "normal" for a program that the AI becomes unable to model correct responses?

This is always my thought whenever I hear the "AI let me build a feature in a codebase I didn't know in a language I didn't know" story (which is often; there's at least one in these comments). Great, but what have you learned? This is fine for small contributions, I guess, but I don't hear a lot of stories of long-term maintenance. Unpopular opinion, though, I know.

I guess it's a question of how anyone learns. There's some value in typing code, I suppose, but with tab complete that's been gone for a long time. Letting AI write something and then reading it seems as good as copying and pasting from some other source.

I'm not super qualified to answer as I haven't gone deep into AI at all, but from my limited observations I'd say yes and no. You generally aren't copy/pasting entire features, just snippets that you yourself have to string together in a sensible way. Of course there are lots of people who still do this, and that's why I find most people in this industry infuriating to work with. It's all good when it's boilerplate, and that's actually my primary use of "AI": it's essentially been a snippets replacement (and is quite good at that).

For the same reasons you said, I don't understand its use in side projects. Maybe I'm alone in this, but I feel like the entire point of a side project is to learn some new tech, framework, etc. If you just let an LLM do the work for you, you don't actually learn anything about the underlying tech, so what was the point of the whole thing? I think LLMs hijack the dopamine hit we usually get when a side project finally works, and let us feel it with very little effort. Of course, that comes at the cost of not learning anything other than how to prompt an LLM.
