We're all CTO now (ideasasylum.com)
83 points by fside 1 day ago | 102 comments





If you're a solo developer building your next Salesforce killer, you will feel that dopamine rush every time AI helps you get closer to launch.

Don't worry, there are still more coding problems after AI solves that one, because the "last 10%" is another 90% of the work when it comes to polishing.


I'm a DevOps engineer. I'm training someone new to the field.

Often I'll ask the AI to do something and it goes sideways. Sometimes it really saves me time, but many times not. I'll break down and actually type out commands or even Google what to do instead of using the AI because it's still faster.

It's true that my trainee uses the AI more because there are fewer commands in his muscle memory. But it's still not great yet.

Further, the AI must have each one of its actions approved. I've tried fully automatic mode. It's bad.

AI is more like a lawn mower. It's self-propelled, but you still have to hold on to it, and sometimes you've got to turn it off and pull the stuff out of its way or it gets stuck.


Are you talking about CLI commands? I've been programming for over a decade and don't have the type of memory that lets me remember all the obscure, non-intuitive commands. AI has been a lifesaver; I couldn't care less about nerd points.

No one does! That's why everyone uses man, writes scripts, and bookmarks the wiki pages.

Warp is especially helpful with this.

Or turn it over and fix a broken belt.

Or scrape out all the dead grass that has piled up inside.

That goddamn flap on the back that gets stuck folded under the thing every time you back up.

The author assumes we're going to use AI more and more. I don't agree. I regularly outperform the AI pushers on my team, and I can talk about engineering in person too!

To me, the more interesting question is whether you without AI can outperform you using AI, not whether you can outperform someone else who is using AI.

I think AI has already gotten to a point where it can help skilled devs be more productive.


I have tested this. I have been coding for close to 20 years, in anything from web to embedded.

I got tired of hearing about vibe coding day in and day out, so I gave in, and I tried everything under the sun.

For the first few days, I started to see the hype. I was much faster at coding, I thought: I could just type a small prompt to the LLM and it would do what I wanted, much faster than I could have. It worked! I didn't bother looking at the code thoroughly; I was more vigilant the first day, and then less so. The code looked fine, so I just clicked "accept".

At first I was truly amazed by how fast I was going, but then I realized that I didn't know my own code. I didn't know what was happening, when, or why. I could churn out lots of code quickly, but after the first prompt, the code was worth less than toilet paper.

I became unable to understand what I was doing, and as I read through the code, very little of it made any sense to me. Sure, the individual lines were readable, and functions made some semblance of sense, but there was no logic.

Code was just splattered around without any sense of where anything should go, global state was used religiously, and slowly it became impossible for me to understand my own code. If there was a bug, I didn't know where to look; I had to approach it as I would when joining a new company, except that in all my years, even the worst human code I have ever seen was not as bad as the AI code.


> I became unable to understand what I was doing, and as I read through the code, very little of it made any sense to me. Sure, the individual lines were readable, and functions made some semblance of sense, but there was no logic

Yes, this happens, and it’s similar to when you first start working on a codebase you didn’t write

However, if instead of giving up, you keep going, eventually you do start understanding what the code does

You should also refactor the code regularly, and when you do, you get a better picture of where things are and how they interact with each other


No. It is different. When working with a codebase written by humans, there is always sanity. Even when looking at terrible codebases, there is some realm of reason that, once you understand it, can make navigating the code easy.

I believe you missed the part of my comment saying that I have been coding professionally for 20 years. I have seen horrible codebases, and I'm telling you I'd rather see the switch statement with 2000 cases (real story), many of which were hundreds of lines long, with C macros used religiously (same codebase). At a bare minimum, once you get over the humps with human-written code, you will always find some realm of reason: a human thought to do this; they had some logic. I couldn't find that with AI. I just found cargo-cult programming plus things shoved where they make no sense.


Have you tried requesting code written in the most human-readable fashion, with human-readable comments, which you can then choose to minify later?


I respectfully disagree. I have about the same years of experience as you, and now also 1-2 years of AI-assisted coding

If you stay on top of the code you are getting from the AI, you end up molding it and understanding it

The AI can only spit out so much nonsense until the code just doesn’t work. This varies by codebase complexity

Usually when starting from scratch, you can get pretty far with barely even looking at the code, but with bigger repos you'll have to be actively involved in the process, applying your own logic to what the AI is doing

If the code of what you are building doesn’t make sense, it’s essentially because you let it get there. And at the end of the day, it’s your responsibility as the developer to make it make sense. You are ultimately accountable for the delivery of that code. AI is not magic, it’s just a tool


It sounds like the parent commenter is giving it tasks and trying to make it come up with logic. It's bad at this. Use it as a translator: I knock out the interface or write some logic in pseudocode, then get it to translate that into code, review it, generate tests, and bam, half an hour or more of coding has been done in a few minutes. All the logic is mine, but I don't have to remember if that function takes &foo or foo, or the right magic io.Reader I need, or whatever…

Whenever I try to get it to do my work for me, it ends badly.

It can be my syntax gimp, though, sure.
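The translator workflow described above might look something like this, as a minimal sketch. Everything here is invented for illustration: the pseudocode, the `order_total` function, and the test are mine, not from any comment in the thread.

```python
# Pseudocode written by the human (the "all the logic is mine" part):
#   for each order in orders:
#     skip orders that are cancelled
#     sum quantity * unit price
#   return the total rounded to 2 decimals
#
# Code the LLM might be asked to translate it into:

def order_total(orders):
    """Sum quantity * unit_price over non-cancelled orders."""
    total = sum(
        o["quantity"] * o["unit_price"]
        for o in orders
        if o.get("status") != "cancelled"
    )
    return round(total, 2)

# A quick generated test to review alongside the code:
orders = [
    {"quantity": 2, "unit_price": 9.99, "status": "open"},
    {"quantity": 1, "unit_price": 5.00, "status": "cancelled"},
]
print(order_total(orders))  # 19.98
```

The human-written pseudocode pins down the behavior; the review step and the generated test are what catch the translation errors.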


This is a good approach that I admittedly had not thought of.

At that point, however, is it really saving you that much time over good snippets and quick macros in your editor?

For me, writing the code is the easiest part of my job. I can write fast, and I have my vim configured in such a way that it makes writing code even faster.


Clearly this affected you deeply enough to create a new HN crusade account. Genuinely curious questions: Did you share this experience with management and your peers? What was their response? What steps could realistically reverse the momentum of vibe coding?

> Did you share this experience with management and your peers? What was their response?

Among my peers, there seemed to be a correlation between programming experience and agreement with my personal experience. I showed a particular colleague a part of the code and he genuinely asked me which one of the new hires wrote this.

As for management, well, let's just say that it's going to be an uphill battle :)

> What steps could realistically reverse the momentum of vibe coding?

For me and my team it's simple, we just won't be using those tools, and will keep being very strict with new hires to do the same.

For the software industry as a whole, I don't know. I think it's at least partially a lost cause. Just as the discussions about performance, and in general caring about our craft are.


I wish you luck, but at this point I think it is going the same path as banning cell phones in the workplace/car/etc. That is, even with penalties, people will do it.

Unless sentiment on AI-generated code turns so bad that human-written code becomes strongly preferred, and marketed as such.

I love the implicit and totally baseless questioning of motives in this reply.

AI writes code according to the instructions given. You can instruct it to architect and organize the code any way you want. You got the fruit of your inarticulation.

This is what I heard from many people online.

I have tried being specific. I have even gone as far as to feed its prompt a full requirement document for a feature (1000+ words), and it did not seem to make any significant difference.


Need to use the Wittgenstein approach: "What can be shown cannot be said." LLMs need you to show them what you want; until you do, the model has seen nothing of it. Build up a library of examples. Humans also work better this way. Just good practice.

I think that's why Cursor works well for me. I can say: write this thing that does this, in a similar way to other places in the codebase, and give it files for context.

You are experiencing the Dunning-Kruger effect of using AI. You used it enough to think you understand it, but not enough to really know how to use it well. That's okay, since even if you try and ignore and avoid it for now, eventually you'll have enough experience to understand how to use it well. Like any tool, the better you understand it and the better you understand the problems you're trying to solve, the better job you will do. Give an AI to a product manager and their code will be shit. Give it to a good programmer, and they're likely to ask the right questions and verify the code a little bit more so they get better results.

> Give an AI to a product manager and their code will be shit. Give it to a good programmer, and they're likely to ask the right questions and verify the code a little bit more so they get better results.

I'm finding the inverse correlation: programmers who are bullish on AI are actually just bad programmers. AI use is revealing their own lack of skill and taste.


You can literally stub out exactly the structure you want, describe exactly the algorithms you want, the coding style you want, etc, and get exactly what you asked for with modern frontier models like o3/gemini/claude4 (at least for well represented languages/libraries/algorithms). The fact that you haven't observed this is an indicator of the shallowness of your experience.

> modern frontier models like o3/gemini/claude4 (at least for well represented languages/libraries/algorithms). The fact that you haven't observed this is an indicator of the shallowness of your experience.

I'm not chasing the AI train to be on the bleeding edge because I have better things to do with my time

Also I'm trying to build novel things, not replicate well represented libraries and algorithms

So... Maybe I'm just holding it wrong, or maybe it's not good at doing things that you can't copy and paste from github or stackoverflow


It always feels like the height of HN when some pseudo-genius attempts a snobbish reply but instead just confidently confirms their lack of literacy on the subject.

LLMs write code according to thousands of hidden system prompts, weights, training data, and a multitude of other factors.

Additionally, they’re devoid of understanding.

I truly hope you do better in the future.


I have had great success at guiding LLMs to produce my desired output. Your baseless snark is a sign of your incompetence.

Or that your use case is well-documented across the internet and simple.

That would be more in line with how the technology works, and less a sign of your inherent genius, though.


> AI writes code according to the instructions given

Nope. The factors are your prompt, the thousands of lines of system prompts, whatever bugs may exist inside the generator system, and the weights born from the examples in the training data, which can be truly atrocious.

The user is only partially in control (and a minimal part at that). With a standard programming workflow, the output is deterministic, so you can reason out the system's behavior and truly be in control.


Useless review without discussing the context and the carefully considered, well-scoped prompt you (presumably) gave it.

That makes sense. It’s also step 1 in your journey.

Maybe taking the AI code as input and refactoring it heavily will result in a better step 2 than your previous step 0 was.


It makes me feel like an absolute beginner, and not in a good way. It was such a terrible experience that I don't believe I will try this again for at least a year. If this is what programming will become in the future, then feel free to call me a luddite, because I will only write hand-crafted code.

Have you tried adding more guidelines? Similar to the documentation you would provide to new members of the team.

Copy-pasted from a different comment in this thread:

> I have tried being specific, I have even gone as far as to feed its prompt with a full requirement document for a feature (1000+ words), and it did not seem to make any significant difference.


> I think AI has already gotten to a point where it can help skilled devs be more productive.

Not really. The nice thing about knowing what to do is that you can just turn off your brain while typing code, or think about architecture in the meantime. Then you just run the linter for syntax mistakes and you're golden. Little to no mental load.

And if you've been in the same job for years, you have mental landmarks all over the codebase, the internal documentation, and the documentation of the dependencies. Your brain runs much faster than your fingers, so it's faster to figure out where the bug is and write the few lines of code that fix it (or the single-character replacement). The rest of the time is spent thinking about where similar issues may be and whether you've impacted something down the line (aka caring about quality).


I cannot talk about engineering in person because I do not know how to pronounce words, and I am on the spectrum. :/ I can write about it though!

Also, the fatigue of just "feeding the beast" comes on fairly quickly.

Maybe that’s because the AI pushers are compensating for already not being as good.

What happens when other yous start using AI? I suspect they will obviously outperform you in sheer typing speed.


I don’t agree. There’s a “muscle” you train every time you think about problems and solve them, and I say muscle because it also atrophies.

But is the muscle the part where we copy and paste from Stack Overflow, and now ChatGPT, or is the muscle used when there's a problem and things aren't working and you have to read the code and have a deep think? Mashing together random bits of code isn't a useful muscle; debugging problems is. If it's the LLM that mashes a pile of code together for me and I only jump in when there's a problem, isn't that an argument for LLM usage, not against?

> There is a popular argument that a software developer’s job is not write software but to solve a user’s problem. Bullshit

Wait, what?

> I was never particularly interested in the code itself

> Instead, I was always more interested in the product

Confusing contradictions aside, I had trouble engaging with this article.

The author seems to think every developer thinks like they do. Some people actually enjoy helping their business/users.

The author also has trouble imagining other perspectives as a people manager. From the linked article,

> I do not get any sort of high from managing people. I don’t think anyone gets that same high from this role

Hate to break it to the author again, but some people actually enjoy seeing those they mentor/manage succeed.

Being a people manager isn’t the right fit for everyone. Perhaps being a developer in the next 20, 5, or 1 year won’t be the right fit for the same people it is for today.


While I agree with most of what the author says, this article does exude "Suno CEO says people don't enjoy making music" energy.

It's like when image generators came out and people looked surprised that some people actually enjoy spending hours with a pencil to draw something and have not immediately come en masse to push the "generate" button.

Well, AI absolutely appeals to the type of bullshitter who hates coding and wants to get away from it.

> bullshitter

With all due respect, this perspective baffles me. Some see it your way, others see so much opportunity.


I'm sorry to hear that and I don't mind if they choke on such opportunity.

This was my reaction exactly. I personally get my endorphins as a manager from seeing products get traction. The author clearly thinks differently from me and it seems like they don't believe devs like me exist.

> Some people actually enjoy helping their business/users.

Beyond that: doing coding without solving problems or enabling anyone/anything is just doing art for art's sake. It may have a place, but it's more personal (a hobby, an expression) than anything tangible to be used in the real world, leaving business aside.


> Confusing contradictions aside...

Product is not the same as code. We code to build a product, sure, but I think the author means they are interested in designing the product to solve users' problems (a.k.a. UX)


CTOs at most of my companies were more like tech evangelists

It depends a lot on the specific business and company stage. E.g. CTOs at boutiques basically meet clients and put on a performance to instill trust. CTOs on product startups with little funding are more like "I'll hire a teammate that can put up with the mess I wrote". CTOs after the product startups have large investments are more like "I'll pretend I know what I'm doing and let the principal engineers actually run the tech and fix my legacy mess so we don't go out of business". CTOs at corpos are just C-suite politicians. Etc.

I find CTO as a title meaningless. At most it means they've got the signature powers.


I was CTO from 2 to 100, over 6 years, and led the core tech development the entire time. Tech Co-Founders certainly can and do code. I was a Principal Engineer before and after.

In a sense, you are right that the title is meaningless, and only means signature powers.

And you are right that other principals rewrote legacy stuff. But I also participated at the same level. I just knew I was only above due to some market luck lining up with my experience. Will never know exactly how much influence I had over that situation.

But I have a jazz background. It is easy for me to respect everyone on the team equally. Sports can do the same, but I think music is probably more effective, because it is inherently more spectator-oriented than competitive. Musicians tend to make incredible team players, if they've stuck with it long enough. Many leaders in band are also leaders in sports at school. And musicians have already demonstrated formal language abilities.

Well, that went in a strange direction.


Yeah sorry I should've clarified it was kinda hyperbolic/tongue-in-cheek (but also meant to convey some truth as you've pointed out).

I always assumed it was a political role.

Author iterated with AI to build Postgres. Why not learn to use build systems? That’ll pay off big time in the medium timeframe.

I'm so fucking tired of people who had no interest in software development telling me software development is dead.

The author repeatedly states they have little knowledge of the tech they're using. But they're CERTAIN in what the industry will be in future.

Hubris


Right?

“Oh you’ll never have to do this tech stuff ever again! How amazing! Ai all the things!”

Like, ok great. Good for you. Leave the rest of us out of whatever mission-to-replace-some-thing-you-don’t-like.

Or even better, if you don’t like, go away and do something else. I’m not big into jogging, but I don’t go around telling runners that their hobby is redundant and that “nobody will run now that we have segways”.


>And with those new skills, your old skills will start to atrophy.

Skills don't work like muscles; please stop with this mental model of the world. No one is going to fire you because you don't have the same speed of recall of language constructs and have to look more things up. Speed of coding is not the damn bottleneck.

Plus have a little faith in your brain that you could get back to that point if you wanted to.


This is absolutely not true. Think of all the studying that is required before applying to tech jobs these days. Sure, someone won't fire you because you can't balance a binary tree by hand, but it certainly might exclude you from getting that next job, regardless of whether it applies to their day-to-day workloads.

If people stop manually coding, their ability to do so WILL atrophy. Take away the coding agents and you'll soon have a generation of graduates wondering why their tab-complete isn't writing the entire feature for them.
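For what it's worth, the kind of thing "balance a binary tree by hand" usually refers to is a rotation, the primitive behind AVL and red-black trees. A minimal sketch; the Node class and function name here are my own, not from any comment in this thread:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(x):
    """Standard left rotation: x's right child becomes the new subtree root."""
    y = x.right
    x.right = y.left  # y's left subtree is re-parented under x
    y.left = x
    return y

# A right-leaning chain 1 -> 2 -> 3 becomes balanced after one rotation:
root = Node(1, right=Node(2, right=Node(3)))
root = rotate_left(root)
print(root.key, root.left.key, root.right.key)  # 2 1 3
```

The point of contention in the thread is whether being able to reproduce this from memory on a whiteboard says anything about day-to-day ability.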


> but it certainly might exclude you from getting that next job, regardless of if it applies to their day to day workloads

As others have mentioned, this is the problem. Not being able to produce the most efficient binary tree implementation on the spot should not be the criterion for identifying a good developer.


But then the question becomes: what are good criteria for doing that (and what is an easy way for HR to apply them)?

There's a difference between not learning something and not using something after learning it. For the latter, the relearning process is fast, and often it may only require a few hours of practice. There's such a thing as long-term memory.

If you properly learn the skills, then refreshing your skills takes way less time than learning them the first time. This means you didn't really lose the skill.

>Sure someone won't fire you because you can't balance a binary tree by hand, but it certainly might exclude you from getting that next job

People who do this need their behavior changed. Testing people on quickly findable implementations is an absolute circus. Obviously there's an exception if the job actually involves writing CS algorithms, but most of them do not.


> Think of all the studying that is required before applying to tech jobs these days.

Surely you realize this is the problem. I just landed a mid 6-figures job _without_ grinding leetcode. They’re out there. This game everyone plays is an abomination.


Surely you don't believe that there's a significant fraction of tech jobs that pay $500k?

No, not sure where you got that from.

Oh, I guess you were just sharing and this wasn't advice:

> I just landed a mid 6-figures job _without_ grinding leetcode. They’re out there. This game everyone plays is an abomination.


I’m way too humble and jaded to give advice to strangers on the internet, yes.

I offered an anecdote.


Surely you realize that "balancing binary tree" is just an example and the topic is not about recruitment process.

Right, but are you going to fix it by yourself? No. Is the point correct that most people won't remember half of the arcane rituals we're expected to perform on command, except in an interview or in very specific parts of the job? Yes. Can we relearn it? Sure. Will skills atrophy if unused? Definitely; balancing a binary tree is not like riding a bike.

Remembering how to balance a binary tree is a complete waste of brainpower. We have lots of papers, books, and reference material that can show us this.

The arcane rituals of jumping through leetcode hoops remind me of pledging a fraternity.


I learn new languages mostly for hirability or to explore new and fun concepts. After realizing that my job is solving problems, not using shiny tech, most of my reading has switched to what problems people are facing and possible solutions. That usually involves a mix of theory (to understand) and best practices (as shortcuts).

There's one optimization path that people seem to barely explore (other than editor nerds): navigation based on search, and marking stuff for further actions. You see people using VS Code like Notepad, and then they go on to complain that coding is tedious.


Sounds rewarding, any particular resource you would recommend that you've enjoyed?

Here is my current reading list (in CSV format).

https://pastebin.com/BXwTjY54

As for the second part: learn vim, emacs, or kakoune, or try to be fluent in your current editor. The reason I put Vim and Emacs at the top of the list is that they have powerful primitives for coding, and they're not Notepad patched with external tools.


> Speed of coding is not the damn bottleneck.

What you say would only be true on an anarchist island colony devoted to software craftsmanship where everyone is healthy and under 40 years old.


Skills 100% atrophy.

But they never atrophy to 0%. The body remembers. You're never restarting from scratch.

For sure. It comes back quicker the next time. “Muscle memory” is real.

This post is completely false

It's not the first time I've read this kind of narrative; it actually appears pretty often in my LinkedIn timeline. We are now managers of agents, now we are CTOs of agents… this is plain delusional. You can play with the toys all day long, but a mud cake won't sell very well. This CTO doesn't code much, I agree. If you are a contributor, you know that so-called agents are as useful as Google and Stack Overflow on steroids, nothing more than that, unless you have no idea what you are doing.

Agents are definitely more useful than Google and SO… have you tried pair programming with Claude 4 Opus and building something? It’s amazing.

I have, and it does a mediocre-to-poor job, even at trivial tasks using popular stacks and languages. IME, one must either start with a green field, or babysit every step with/as an expert, or both. Perhaps I've just been unlucky, or my projects have too much debt.

I agree with the other reply. This is a tool like any other and it takes skill and practice to use it effectively. I'm learning how best to create and persist architectural guidance.

For example giving it a meta process to follow before making an edit. I've also had the models keep a document of architecture and architecture decisions.


You need to start with a green field as the LLM makes lots of assumptions about your code base, based on what it learned before. Doing it green field means less resistance from the LLM.

I think, based on that, Rust will be the most popular and powerful language by next year. Of course, I might be completely wrong.


I’ve used it greenfield and with existing indebted codebases and it’s remarkably good at both. You have to give it a good amount of context but that goes for a human worker too. I think devs who get bad results from AI generally prefer to be the ones coding and not managing and maybe take their own context for granted.

Most of my time spent coding is understanding and growing my context. By the time I’ve done that, writing the code is an afterthought. It takes longer to communicate this via prompt than to write the code myself.

I’ve heavily benefited from AI when I’m inexperienced or willing to trust the AI. The devs I’ve seen happy with AI either fit that use case or are subpar devs who don’t really understand what they’re doing. I have multiple coworkers who are the latter - code works for their single happy path and they’re on to the next thing.


Exactly. Having built a few non-trivial projects with Claude Code, you can act as a CTO. At least, that is one semi-feasible way of using it.

I would say in a generation or two, you could operate in a way where you rarely have to dive into the code.


> Having built a few non-trivial projects with Claude Code

Can you clarify what "non-trivial" means here? To me "non-trivial" is writing FoundationDB or Kafka from scratch.


I’ve been building a project in my spare time with CC and gotten at least 5x more done than I could have alone. Probably closer to 10. And it makes it easier to work when I’m kind of tired and would normally choose not to work on the project at all, because I can relax or do house chores while it crunches on a problem. It’s amazing.

That's why I said it's like that but on steroids. It is what it is. I didn't try (insert the latest hyped version of whatever here); I tried the mainstream services, and it's definitely not like being "a manager" or CTO of agents.

Interesting that the author didn’t use AI to proofread his post. He’s not interested in coding very much but the English language doesn’t seem to be his forte either.

Author, if you’re reading this, “let’s” => “lets” (2 occurrences).


I'm doing something similar to the OP, but I'm not a CTO, just a "manager". Maybe the title should be "we're all managers now". The CTO role is bigger than that.

And we are not managing the LLMs; we are communicating with them. The better a developer gets at communication in general, the easier their life will be and the better job they will do for their company. The more they understand about what their customers need, the better job they will do building those tools. None of this is about managing. It's about learning how to communicate, how to ask the right questions, and how to listen.

In practice I think it will more resemble a flood of script kiddies than CTOs. The average person isn't as thoughtful as the author, they just want to close their tickets with the least effort possible. Not that that attitude towards work is specific to tech workers.

So do the newly minted CTOs now get the same authority, credit, compensation, and recognition as a real CTO, for managing the heartless agents?

I see there is quite a lot of controversy in the comments here, as most people are technical/ICs.

However, at my current job and role, my manager has left (or taken quite a long leave) for the 2nd time now. Although both my team and I are assigned to another manager in a different region (BigTech), it is not the same thing...

Why I mention this: I am going to avoid doing any and all managerial work. Because last time, I did a lot of managerial work without the benefits, both in terms of reporting, keeping the team morale (happiness) up, defending our interests amid a lot of inter-team fighting/prioritization, etc.

In turn, I got no appreciation or compensation out of it, even though I partially did the jobs of other people (collecting artifacts, reporting up the chain, etc.) so that nobody would get a _bad_ performance review (or worse, laid off...).

But I agree with the author: I got no dopamine out of these. Yes, I was solving some problems, but they were like package conflicts of NPM peer dependencies. They provided no value to me, no improvement in my own performance, and worse, no goal or direction at all!

PS: My team is entirely a DevOps team, in Big-Tech terms a support team. What we do is the grunt work of various other teams to keep them up to date, which is why overall job satisfaction is quite below average...

Now, I am refusing to do the same work again. My manager has been on parental leave since mid-June. He has _not_ been doing a good job in terms of job satisfaction and team morale since he joined. I slowly wore myself down doing the low-key managerial work, and he has not been taking it over. With the long leave in progress, I just stopped taking care of it.

Since I stopped doing the managerial grunt work, 2 people have already left from the team of, well, 8 engineers.

Since I am also taking over the work that was done by the other engineers, I noticed a couple of things:

1. The code quality is somewhat okay, but there are obvious "useless" AI-generated areas.

2. Commit messages yield little to no value, as the review process happened only among the people who worked on the project (i.e., several "fix bugs" commits back to back yield no value).

3. People who left or stayed have no recollection of the things I helped them with, the problems I solved (i.e., unblocking those stuck), and no appreciation for the "space" I was able to get them (even though I was quite explicit with each person).

4. I am one of those engineers you can put in any domain/language whatsoever and I will do a good/decent job at it (jack of all trades, Swiss Army knife, whatever you call it). I also solve issues as I go, with some bug fixes, features, whatnot.

5. The product/project manager actively sabotages these tech-debt fixes, or the "refactor" of the AI-generated code into simpler, more readable versions.

Which is why, unlike the CTO in question of the article, I started caring less and less about these. Now, I also produce code with AI-agents, as the leadership loves the AI-slop metrics.

At some point, these AI-generated code will fail to do something. We'll need to fix that, or replace that. This boils down to 2 different scenarios: 1. If this code is running an airplane, then it is a disaster. Maybe your engines will fail, you must crash-land somewhere at best. 2. If this code is running a rocket, then it already has a limited time anyway. Does not matter if has a memory leak at all. The lifetime is already so limited that the rocket will not even reach to the resource limits being hit.

I guess most of the leadership is currently betting on most problems being #2. Because software engineering going quite fast, rewrites are always at the next corner, what is the point of "maintaining" the codebase?

Meanwhile, I am not sure I will be there to solve more of an airplane problem when it occurs. I just wish best of luck with the AI-agents to the leaders who have just attached pair or rocket-boosters instead of actual jet-engines to an airliner!


I see there is quite a lot of controversy in the comments here, as most people are technical ICs.

However, at my current job, my manager has left (or taken quite a long leave) for the second time now. Although my team and I are assigned to another manager in a different region (Big Tech), it is not the same thing...

Why I mention this: I am going to avoid any and all managerial work this time, because last time I did a lot of it without any of the benefits: reporting up the chain, keeping team morale (happiness) up, shielding our interests from constant inter-team fighting and prioritization battles, etc.

In turn, I got no appreciation or compensation for it, even though I partially did other people's jobs (collecting artifacts, reporting up the chain, etc.) so that nobody would get a bad performance review (or worse, laid off).

But I agree with the author: I got no dopamine out of any of it. Yes, I was solving problems, but they were things like NPM peer-dependency conflicts, which provided no value to me, no improvement to my own skills, and worse, no goal or direction at all.

PS: Mine is purely a DevOps team (in Big-Tech terms, a support team). What we do is the grunt work of keeping various other teams up to date, which is why overall job satisfaction is well below average...

Now I am refusing to do the same work again. My manager has been on parental leave since mid-June. He had not been doing a good job on job satisfaction and team morale since he joined; I slowly drifted into doing the low-key managerial work, and he never took it back over. With his long leave underway, I simply stopped taking care of it.

Since I stopped doing the managerial grunt work, two people have already left the team of, well, eight engineers.

Since I am also taking over the work done by the other engineers, I have noticed a few things:

1. The code quality is somewhat okay, but there are obviously useless AI-generated areas.

2. Commit messages yield little to no value, since reviews happened only among the people working on the project. (Several "fix bugs" commits back to back tell you nothing.)

3. The people who left or stayed have no recollection of the things I helped them with or the problems I solved (e.g. unblocking those who were stuck), and no appreciation for the "space" I was able to get them, even though I was quite explicit about it with each person.

4. I am one of those engineers you can drop into any domain or language and I will do a decent job (jack of all trades, Swiss-army knife, whatever you call it). I also fix issues as I go: bug fixes, small features, whatnot.

5. The product/project manager actively sabotages these tech-debt fixes, as well as any refactoring of the AI-generated code into simpler, more readable versions.

Which is why, unlike the CTO in the article, I have started caring less and less about all of this. Now I also produce code with AI agents, since leadership loves the AI-slop metrics.

At some point, this AI-generated code will fail to do something, and we will need to fix or replace it. That boils down to two scenarios:

1. If the code is running an airplane, it is a disaster: maybe your engines fail, and at best you crash-land somewhere.

2. If the code is running a rocket, its lifetime is limited anyway. It does not matter if it has a memory leak; the flight is so short that the rocket never comes close to hitting its resource limits.
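The rocket scenario is easy to sketch: the code below deliberately leaks memory on every tick, but with a bounded lifetime the leak never matters (a toy illustration; the function and numbers are made up, not from any real system):

```python
# Illustrative sketch of the "rocket" argument: a deliberate leak that
# never matters because the process lifetime is bounded.
leaked = []  # grows forever; nothing ever removes entries

def telemetry_tick(reading: float) -> float:
    leaked.append(reading)            # the "leak": unbounded accumulation
    return sum(leaked) / len(leaked)  # running average of all readings so far

# A "rocket" runs for, say, 600 ticks and is then gone; the leak holds
# only 600 floats, nowhere near any real resource limit. An "airplane"
# running the same loop for years is a different story.
for t in range(600):
    avg = telemetry_tick(float(t))
print(len(leaked), avg)  # 600 entries, average of 0..599 = 299.5
```

The same defect is harmless or fatal depending purely on how long the process lives, which is exactly the bet described below.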

I guess most of the leadership is currently betting that most problems are #2: software engineering moves so fast, and a rewrite is always around the next corner, so what is the point of "maintaining" the codebase?

Meanwhile, I am not sure I will still be around to solve the airplane problem when it occurs. I just wish the best of luck with their AI agents to the leaders who have strapped a pair of rocket boosters, instead of actual jet engines, onto an airliner!



