
I also find these features annoying and useless and wish they would go away. But that's not because LLMs are useless, nor because the public isn't using them (as daishi55 pointed out here: https://news.ycombinator.com/item?id=44479578)

It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.

petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)






At work we started calling this trend clippification, for obvious reasons. In a way this aligns with your comment: the information provided by Clippy was not necessarily useless; nevertheless, people disliked it because (i) they didn't ask for help, and (ii) even when they did happen to be looking for help, the interaction/navigation was far from ideal.

Having all these popups announcing new integrations with AI chatbots showing up while you are just trying to do your work is pretty annoying. It feels like this time we are fighting an army of Clippies.


I am a huge AI supporter, and use it extensively for coding, writing and most of my decision making processes, and I agree with you. The AI features in non-AI-first apps tend to be awkward bolt-ons, poorly thought out and using low quality models to save money.

I don't want shitty bolt-ons; I want to be able to give chatgpt/claude/gemini frontier models the ability to access my application data and make API calls for me to remotely drive tools.
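
For what it's worth, the plumbing for that already exists as tool/function calling. A minimal sketch in Python against OpenAI's chat-completions tool-calling API (lookup_invoice and its schema are made-up stand-ins for "my application data", not a real endpoint):

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical local function standing in for real application data.
    def lookup_invoice(invoice_id: str) -> str:
        return json.dumps({"invoice_id": invoice_id, "status": "paid"})

    tools = [{
        "type": "function",
        "function": {
            "name": "lookup_invoice",
            "description": "Fetch an invoice from the user's app data",
            "parameters": {
                "type": "object",
                "properties": {"invoice_id": {"type": "string"}},
                "required": ["invoice_id"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Is invoice 42 paid?"}],
        tools=tools,
    )

    # The model decides whether to request the tool; we run it locally
    # and stay in control of what it can actually touch.
    message = resp.choices[0].message
    if message.tool_calls:
        call = message.tool_calls[0]
        if call.function.name == "lookup_invoice":
            args = json.loads(call.function.arguments)
            print(lookup_invoice(args["invoice_id"]))

The frontier model stays where it is; your app just declares which calls it's allowed to make. That's the opposite of a bolt-on chatbot in the vendor's UI.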


> The AI features in non-AI-first apps tend to be awkward bolt-ons, poorly thought out and using low quality models to save money.

The weirdest place I've found a genuinely useful LLM-based feature so far has been Edge, with its automatic tab grouping. It doesn't always pick the best groups and probably uses some really small model, but it's significantly faster and easier than anything I've had before.

I hope they do bookmarks next and that someone copies the feature and makes it use a local model (like Safari or Firefox, I don't even care).
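
The grouping part is genuinely small, too. A rough sketch of the local-model version in Python (the model choice and clustering method are my guesses at an approach, not what Edge actually does):

    # Group open-tab titles by semantic similarity with a small local model.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    titles = [
        "Rust borrow checker errors - Stack Overflow",
        "The Rust Programming Language - book",
        "Cheap flights to Lisbon",
        "Lisbon travel guide",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # ~80 MB, runs on CPU
    embeddings = model.encode(titles)
    labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)

    for label, title in sorted(zip(labels, titles)):
        print(label, title)

The same few lines would work for bookmarks; the hard parts are picking the number of groups and naming them, which is presumably where the small LLM earns its keep.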


Was automatic tab grouping missing from your life?

> I am a huge AI supporter, and use it extensively for coding, writing and most of my decision making processes

If you use it for writing, what is the point of writing in the first place? If you're writing to anyone you even slightly care about, they should wipe their arse with it and send it back to you. And if it's writing at work or for work, then you're just proving you're an employee they don't need.


I just wiped my arse with your reply, here it is, enjoy.

Did you have to brainstorm that response with chatgpt?

>and use it extensively for coding, writing and most of my decision making processes,

I'm curious, do you find it easier to climb stairs or inclines now that you've tossed your brain in the trash?


> and most of my decision making processes

Jesus F christ, please tell me you are trolling

https://time.com/7295195/ai-chatgpt-google-learning-school/


[flagged]


Can you please stop posting flamebait comments and crossing into personal attack? Your account has unfortunately been doing this repeatedly, and we're trying for something else here.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Meanwhile you aren't even using AI and you hallucinated the word "outsource" in their comment.

Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html


Xss3 is paraphrasing. As CuriouslyC wrote:

> "I am a huge AI supporter, and use it extensively for [...] most of my decision making processes"


It's not a paraphrase, it's a misreading.

How do you get outsourcing from this? Maybe they're using it to organize their thoughts or explore alternatives. Nowhere do they say they're not still making the decisions themselves.


Nowhere did I say that either. You are misreading.

I said decision making process, as did they.

Nobody said that they are letting the AI make the decisions.


"outsource"

[flagged]


Is it yours?

They use it for their decision making process.

When you use a pen for your writing processes, are you outsourcing the process of writing to the pen? Or are you using it?

When the first thing you say to a stranger is an insult, I wonder, is that a domestically-produced decision? Doesn't seem very high-quality.


If you previously wrote by dipping your fingers in ink, then yes, you've outsourced that part of the process to the pen.

I get it, you love AI and you're desperate to defend replacing human thought with it.


Couldn’t agree more. There are awesome use cases for AI, but Microsoft and Google needed to shove AI everywhere they possibly could, so they lost all sense of taste and quality. Google raised the price of Workspace to account for AI features no one wants. Then they give away access to Gemini CLI for free to personal accounts, but not Workspace accounts. You physically cannot even pay Google to access Veo from a Workspace account.

Raise subscription prices, don’t deliver more value, bundle everything together so you can’t say no. I canceled a small Workspace org I use for my consulting business after the price hike last year; I’m also migrating everything we had off GCP. Google would have to pay me to do business with them again.


There's nuance here: the better ways to add AI to these products make less money and wouldn't deliver the hype these companies are looking for.

But my real frustration is that, with some thought, the AI tools shoved into those apps could be useful; instead they’ve been rushed out and badly implemented.

Google is sending pop-ups (on SonyLIV, on opening emails) suggesting that they will use our data and help us with A.I., which should not be accepted at all. The pop-ups don't even disappear. This is real cheating and fraud.

A "solution" looking for a problem.

> It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.

It's just rent-seeking. Nobody wants to actually build products for market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit for actual profit. If however you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.

And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to the vast majority of startups trying to scale. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of whether it does... well, anything useful, while also being too frustrating or obtuse for users to actually want to use, is arguably an incredibly good needle to thread, if they can manage it.

And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great: if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry-level jobs, flooding the market, cratering the salaries of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway, but it'll be even more frustrating.

And either way, all the people responsible for making all your technology worse every day will continue to get richer.


This is not an AI problem, this is a problem caused by extremely large piles of money. In the past two decades we have been concentrating money in the hands of people who did little more than be in the right place at the right time with a good idea and a set of technical skills, and then told them that they were geniuses who could fix human problems with technological solutions. At the same time we made it impossible to invest money safely by making the interest rate almost zero, and then continued to pass more and more tax breaks. What did we expect was going to happen?

There are only so many problems that can be solved by technology that we actually need solving, or that create real value or bolster human society. We are spinning wheels just to spin them, and have given the reins to the people with not only the means and the intent to unravel society in all the worst ways, but who are also convinced that they are smarter than everyone else because they figured out how to arbitrage the temporal gap between the emergence of a capability and the realization of the damage it creates.

> This is not an AI problem, this is a problem caused by extremely large piles of money.

Those are two problems in this situation, and both are bad for different reasons. It's bad to have all the money concentrated in the hands of a tiny number of losers (and my god, are they losers), and AI as a technology is slated, in the hands of said losers, to cause mass unemployment, if they can get it working well enough to pass that very low bar.


Couldn’t agree more. The problem is that when the party is over, and another round of centralizing wealth and power is done, we’ll be no wiser and have learnt nothing. Look at the debate today: it’s (1) people who think AI is useful, (2) people who think it’s hype, and (3) people who think AI will go rogue. It’s like the bank robbers turned on a TV and everyone watches it while the heist is ongoing.

Only a few bystanders seem to notice the IP theft and laundering, the adversarial content barriers to protect from scraping, the centralization of capital within the owners of frontier models, the dial-up of the already insane race to collect personal data, the flooding of every communication channel with AI slop and spam, and the inevitable impending enshittification of massive proportions.

I’ve seen the sausage get made, enough to know the game. They’re establishing new dominance hierarchies, with each iteration being more cynical and predatory, each cycle refined to optimally speedrun the rent-seeking value extraction. Yes, there are still important discussions about the tech itself. But it’s the deployment that concerns everyone, not hypothetically, but right now.

Exhibit A: social media. In hindsight, what was more important: the core technologies or the business model and deployment?


> if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs

I think this is the key idea. Right now it doesn't work that well, but if it did work as advertised, that would also be bad.


That's the thing I hate most about the whole AI frenzy: If it doesn't work, it's horrible, and if it does work, it's also horrible but for different reasons. The whole thing is a giant shit sandwich, and the only upside is for the few already-rich people serving it to us.

And regardless of whether or not it works, it's pumping giant amounts of CO2 into the atmosphere, which isn't a strictly local problem.

Any time a new technology makes people uncomfortable, someone pulls the CO₂ card. We've seen this with cryptocurrencies, electric cars, even the internet itself.

But curiously, the same people rarely question the CO₂ footprint of things like gaming, streaming, international sports, live concerts, political campaigns, or even large-scale scientific research. Methane-fueled rockets and the LHC don't exactly run on solar-powered calculators, yet they're culturally or intellectually "approved" forms of emission.

Yes, AI consumes energy. So does everything else we choose to value. If we're serious about CO₂, then we need consistent standards — not just selective outrage. Either we cut fairly across the board, or we focus on making electricity cleaner and more sustainable, instead of trying to shame specific technologies into nonexistence (which, by the way, never happens).


Nice whataboutism, except if you had read any of my other comments in this topic you'd know that I think all of those activities need to be taken into account.

We should be evaluating every activity on benefit versus detriment when it comes to CO2, and AI hasn't passed the "more benefit than harm" threshold for most people paying attention.

Perhaps you can help me here, since we seem to be on the topic: how would you rate the long-term benefit versus the long-term climate damage of AI as it exists now?


Calling “whataboutism” is often just a way to derail those providing necessary context. It’s a rhetorical eject button — and more often than not, a sign someone isn’t arguing in good faith. But just on the off-chance you are one of "the good ones": Thank you for the clarification - and fair enough, I appreciate that you're applying the same standard broadly. That's more intellectually honest than most.

Now, do you also act on that in your private life? How beneficial, for instance, is your participation in online debate?

As for this phrase — "most people paying attention" — that’s weasel wording at its finest. It lets you both assert a consensus and discredit dissent in a single stroke. People who disagree? They’re just not paying attention, obviously. It’s a No True Scotsman — minus the kilts.

As for your question: evaluating AI's long-term benefit versus long-term climate cost is tricky because the landscape is evolving fast. But here’s a rough sketch of where I currently stand.

Short-term climate cost: Yes, significant - especially in training large models and the massive scaling of data centers. But this is neither unique to AI nor necessarily linear; newer models (like LoRA-based systems) and infrastructure optimizations already aim to cut energy use significantly.

Short-term benefit: Uneven. Entertainment chatbots? Low direct utility — though arguably high in quality-of-life value for many. Medical imaging, protein folding, logistics optimization, or disaster prediction? Substantial.

Long-term benefit: If AI continues to improve and democratize access to knowledge, diagnosis, decision-making, and resource allocation — its potential social, medical, and economic impact could be enormous. Not just "nice-to-have" but truly transformative for global efficiency and resilience.

Long-term harm: If AI remains centralized, opaque, and energy-inefficient, it could deepen inequalities, increase waste, and consolidate power dangerously.

But even if AI caused twice the CO₂ output it causes today, and were only used for ludicrous reasons, it would pale next to the CO₂ pollution caused by a single day of average American warfighting ... while still, unlike warfighting, having a net-positive outcome for AI users' lives.

So to answer directly:

Right now, AI is somewhere near the threshold. It’s not obviously "worth it" for every observer, and that’s fine. But it’s also not a luxury toy — not anymore. It’s a volatile but serious tool, and whether it tips toward benefit or harm depends entirely on how we build, govern, and use it.

Let me turn the question around: What would you need to see — in outcomes, not marketing — to say: "Yes. That was worth the carbon."?


> Any time a new technology makes people uncomfortable, someone pulls the CO₂ card. We've seen this with cryptocurrencies, electric cars, even the internet itself.

I actually don't recall people "pulling the CO2 card" for the Internet. I do recall people doing it for cryptocurrency, and they were correct to do so. Even proof of stake is still incredibly energy-inefficient at handling transactions: VISA handles thousands of transactions for the energy a proof-of-stake chain takes to handle a handful, and does it faster to boot.

Electric cars don't contribute much CO2, so I don't recall much of that either. They do, however, produce high amounts of particulate pollution, since they weigh considerably more (especially American-centric models like Teslas and the EV Hummer/F-150 Lightning), which isn't nothing to consider. And more to the point, electric cars do not solve the ancillary infrastructure issues, like traffic congestion, or cars effectively being a tax on everyone in a car-centric society who wants to be able to live. The fact that we all have to spend thousands every year on metal boxes we don't much care about, just to be able to get around, and then have those boxes sit idle the vast majority of the time, is ludicrously inefficient.

> But curiously, the same people rarely question the CO₂ footprint of things like gaming, streaming, international sports, live concerts, political campaigns, or even large-scale scientific research.

I have to vehemently disagree here. All scientific research, for starters, has to take environmental impact into account. Among other things, that's why nobody in Vegas is watching nuclear tests anymore.

For another, people have long criticized numerous pop celebrities for being incredibly cavalier with the logistics for their concerts, and political figures have received similar criticism.

International sports, meanwhile, have gotten TONS of bad press for how awful it is that we have to move the stupid Olympics around every few years, both in the environmental sense and the financial one, since hosting practically renders a non-Western country destitute overnight. That's not even going into Qatar's controversial labor practices in building theirs.

> If we're serious about CO₂, then we need consistent standards — not just selective outrage. Either we cut fairly across the board, or we focus on making electricity cleaner and more sustainable, instead of trying to shame specific technologies into nonexistence (which, by the way, never happens).

No, we don't. We can say, collectively, that the cost of powering gaming PCs, while notable, is something we're okay with, and conversely, that powering plagiarism machines is not. Or, as people are so fond of saying here, let the market decide: charge for AI services what they actually cost to provide, plus profit, and see if the market will bear it. A lot of the interest right now is based on the fact that most of it is completely free, or bundled with existing software, which is not a stable long-term solution.



