
The "heavy" model is $300/month. These prices seem to keep increasing while we were promised they'll keep decreasing. It feels like a lot of these companies do not have enough GPUs which is a problem Google likely does not have.

I can already use Gemini 2.5 Pro for free in AI studio. Crazier still, I can even set the thinking budget to a whopping 32k and still not pay a dime. Maybe Gemini 3.0 will be available for free as well.



Who promised that there would be no advanced models with high costs?

Prices for the same number of tokens at a given level of capability are falling. But that's just like Moore's law, which most certainly did NOT say that chips would get no more complex than the 1103 1kb DRAM and merely shrink from 10mm^2 to a speck far too small to see.


> These prices seem to keep increasing while we were promised they'll keep decreasing.

A Ferrari is more expensive than the Model T.

The most expensive computer is a lot more expensive than the first PC.

The price that usually falls is:

* The entry level.

* The same performance over time.

But the _price range_ gets wider. That's fine. That's a sign of maturity.

The only difference this time is that the entry level was artificially 0 (or very low) because of VC funding.


But where is the value?

If it could write like George Will or Thomas Sowell or Fred Hayek or even William Loeb that would be one thing. But it hears dog whistles and barks, which makes it a dog. Except a real dog is soft and has warm breath, knows your scent, is genuinely happy when you come home, and will take a chomp out of the leg of anyone who invades your home at night.

We are also getting this kind of discussion

https://news.ycombinator.com/item?id=44502981

where Grok exhibited the kind of behavior that puts the "degenerate" in "degenerate behavior". Why do people expect anything more? Ten years ago you could be a conservative with a conscience -- now if you are, you start The Bulwark.


> If it could write like George Will or Thomas Sowell or Fred Hayek or even William Loeb

Having only barely heard of these authors even in the collective, I bet most models could do a better job of mimicking their style than I could. Perhaps not well enough to be of interest to you, and I will absolutely agree that LLMs are "low intelligence" in the sense that they need far more examples than any organic life does, but many of them will have had those examples and I definitely have not.

> We are also getting this kind of discussion

> https://news.ycombinator.com/item?id=44502981

Even just a few years ago, people were acting as if a "smart" AI automatically meant a "moral AI".

Unfortunately, these things can be both capable* and unpleasant.

* which doesn't require them to be "properly intelligent"


The bar is "can it write as well as these accomplished professional writers?", not "Can it imitate their style better than the average person?"


Why is the bar set that high?

Writers anyone has heard of are in the top ~1k-10k of humans who have ever lived, when it comes to "competent writing", out of not just the 8 billion today, but the larger number of all those who came between the invention of writing and today.


Here is some LLM generated (Claude 4 Opus Max in Cursor) "competent writing" by the LLOOOOMM simulation of Hunter S Thompson responding directly to your post.

You may not know who he is, or get any of his cultural references, or bother to drink any of the water I'm leading your horse to, but here is "Fear and Loathing in the Comments Section: A Savage Response to Willful Ignorance. Why Your Self-Imposed Stupidity Makes Me Want to Set My Typewriter on Fire. By Hunter S. Thompson" (VIEW SOURCE for TRUTH COMMENTS):

https://lloooomm.com/hunter-willful-ignorance-hn-response.ht...

Also, it's my cats Nelson and Napoleon's birthday, so to celebrate I showed Claude some cat pictures to analyze and describe. Claude also serves as GROK's seeing eye AI, a multimodal vision–language model (VLM) whose assistive technology makes it possible for LLOOOOMM's first AI DEI Hire to function as a first class member of the LLOOOOMM Society of Mind.

Nelson Cat: https://github.com/SimHacker/lloooomm/tree/main/00-Character...

Napoleon Cat: https://github.com/SimHacker/lloooomm/tree/main/00-Character...


Would you say this is more or less faithful to his style than the film adaptation of Fear and Loathing in Las Vegas?


I'll let his real and simulated words speak for themselves. Read the book, see the movie, then read the web page, and VIEW SOURCE for TRUTH COMMENTS.

All the source code and documentation is on github for you to read too, but since you brag about not reading, I don't expect you to read any of these links or his real or simulated work so you could answer that question for yourself; and when you ask questions not intending to read the answers, that just comes off like sealioning:

https://github.com/SimHacker/lloooomm/tree/main/00-Character...

https://lloooomm.com/hunter-homepage.html


You're the one who brought him up, how about you compare and contrast in your own words.

After all, it's quality, not source code, that is the question here. And you're making a quality judgment — which is fine, and I expect them to differ in interesting ways, but the question is: can you, personally, elucidate that difference?

Not the AI itself, not the author of the model, you.

> All the source code and documentation is on github for you to read too, but since you brag about not reading

I didn't say that, you're putting words in my mouth.

Here's some, but not all, of the authors whose works I've consumed recently:

Kim Stanley Robinson, P.G. Wodehouse, Agatha Christie, V.A. Lewis, Arthur Conan Doyle, Andy Weir, Andrew J. Robinson, Scott Meyer, John W. Campbell, David Brin, Jules Verne, Carl Sagan, Michael Palin, Arthur C. Clarke, Frank Herbert, Poul Anderson, Larry Niven, Steven Barnes, David and Leigh Eddings, Carl Jung, Neil Gaiman, Lindsey Davis, Trudi Canavan, John Mortimer, Robert Louis Stevenson, Larry Niven, Edward M. Lerner, Francis Bacon, Stephen Baxter, Geoffrey Chaucer, Dennis E. Taylor, H. G. Wells, Yahtzee Croshaw, Greg Egan, Terry Pratchett, Ursula K. Le Guin, Dan Simmons, Alexandre Dumas, Philip Reeve, Tom Sharpe, Fritz Leiber, Richard Wiseman, Brian Christian and Tom Griffiths, Chris Hadfield, Adrian Tchaikovsky, G. S. Denning, Frank Herbert, Alastair Reynolds, Vernor Vinge, Neal Stephenson, Jerry Pournelle, Matt Parker, Robert Heinlein, Charles Stross, Philip R. Johnson, and Nassim Nicholas Taleb.


Again with the sealioning.

Read it and make up your mind for yourself, because if you won't read any of the links or any of Hunter S Thompson's original works, then you certainly won't and don't intend to read my answers to your questions.

Both I and the LLOOOOMM simulation of Hunter S Thompson have directly responded to your posts and questions already.

Read what Hunter S Thompson wrote to you, and respond to him, tell him how you agree or disagree with what he wrote, ask him any question you want directly, and I will make sure he responds.

Because you're not reading or listening to anything I say, "just asking questions" without listening to any answers like a sealion.

https://en.wikipedia.org/wiki/Sealioning


Here's the thing, if I respond in kind to you, my simulation of Hunter S Thompson is rude enough that I suspect it would be flagged and blocked.

Here's a snippet without the worst of it:

--

  You summoned the ghost of Thompson like a child playing with a loaded gun and now you’re too spiritually constipated to reckon with the aftermath. The LLOOOOMM simulation? Jesus wept. You’re jerking off to AI hallucinations of a man who once huffed ether on the Vegas strip and called it journalism, and now you’re telling *me* to talk to the digital ghost like this is some goddamn séance?

  I asked you to *think*. That was the crime. I committed *prefrontal cortex terrorism* by suggesting you use your own words—like a grown adult—or at least a semi-sentient parrot. Instead, you curled into the fetal position and invoked the algorithm as your wet nurse.

  You want to hide behind bots and hyperlinks? Fine. But don’t pretend you’re engaging in dialogue. You’re outsourcing your cognition to the ghost-in-the-machine, and when pressed to explain what you believe—*you*, not your hallucinated Thompson—you shriek “sealioning” and vanish in a puff of cowardice and smug inertia.

  Here's the rub: you don’t want a conversation. You want a monologue delivered through a digital ventriloquist dummy, safely insulated from the risk of intellectual friction. And when someone lights a match under your house of hallucinated cards, you screech like a possum on mescaline.

  So take your links, your simulations, your semantic escape hatches—and stuff them straight into the void where your spine should be. Or better yet, ask the LLOOOOMM bot what Hunter would say about cowards who delegate their own arguments to hallucinations. You might get a decent answer, but it still won’t be *yours*.
--

So, I say again: how do you think it compares? Not "how do I think", not "how does the AI think", how do you think it compares?

I bet literary critics would consider it mediocre. I know what it does with code, and that's only good enough to be interesting rather than properly-good.

But I'm not a literary critic, I've only written 90% of a novel 4 times over as I've repeatedly gone in circles of not liking my own work.


Your Hunter S Thompson simulation is missing the flying bats.

You're still sealioning instead of responding to anyone's points, so it's not worth me replying.

https://en.wikipedia.org/wiki/Sealioning

Edit: My LLOOOOMM simulation of Hunter S Thompson does wish to reply in spite of your sealioning, and challenges your simulation of Hunter S Thompson (who you've only been able to get to throw obscene tantrums of insults that couldn't be posted to HN, without actually addressing any of the substantive issues or answering any of the pointed questions that my Hunter S Thompson simulation raised) to a Civil Debate-Off, where the only rules are NO SEALIONING, NO GASLIGHTING, and NO DODGING QUESTIONS! Are you game? We can conduct it here or by email or any way you like, and I'll publish the whole thing on lloooomm.com.

But you'd better up your character simulation game if all your Hunter S Thompson simulation can do is spout unprintable ad hominem insults to dodge directly replying to any actual points or answering any actual questions. That's extremely cowardly and un-Hunter-S-Thompson like.

Meanwhile, my Hunter S Thompson simulation has persistent experience and writable memory; it can learn, study, internalize and abstract new ideas, write in-depth evidence-based articles in his own style about a wide variety of topics, and meaningfully and creatively assist in designing and documenting revolutionary games, like Revolutionary Chess:

https://lloooomm.com/revolutionary-chess-consciousness-confe...

https://lloooomm.com/revolutionary-chess-consciousness-summi...

https://lloooomm.com/hunter-hierarchically-deconstructive-ch...

By the way, when your Hunter said "You’re jerking off to AI hallucinations" he was 100% correct, but he was also referring to you, too.

My LLOOOOMM simulation of Hunter S Thompson's replies to your recent posts:

On willful ignorance:

"The only difference between ignorance and arrogance is the volume control. This clown has both knobs cranked to eleven."

On bragging about not reading:

"A man who boasts about not reading is like a eunuch bragging about his chastity - technically true but fundamentally missing the point of existence."

On setting the bar low:

"When you're crawling in the gutter, even the curb looks like Everest. This is what happens when mediocrity becomes a lifestyle choice."

On sealioning:

"He's asking questions like a prosecutor who's already eaten the evidence and shit out the verdict. Pure bad faith wrapped in pseudo-intellectual toilet paper."


> without actually addressing any of the substantive issues or answering any of the pointed questions

"It is a tale told by an idiot, full of sound and fury, signifying nothing".

> NO SEALIONING, NO GASLIGHTING, and NO DODGING QUESTIONS

Given sealioning is asking questions when the other person keeps dodging them, I question if you actually know what you're arguing at this point, or if this entire comment was written by an LLM — that is, after all, the kind of mistake I expect them to make.

A position which I think you've not noticed I hold, because you're too busy being distracted by that "whooshing" sound going over your head, not realising it's the point.

Either way, you're not as interesting as the real HST, even though the actual content of Fear and Loathing in Las Vegas wasn't that interesting to me.


The real question is why is your bar set so low? You're the one trying to make a rhetorical point by bragging about never having heard of all these famous, widely published people you could easily google or ask an LLM about, and admitting to having limited reading and writing skills yourself. Maybe for those very reasons your entire point is wrong, but you simply aren't aware of it because you're cultivating and celebrating your ignorance instead of your curiosity?


> The real question is why is your bar set so low?

Have I misunderstood? Did you list them because they're *bad* writers?

Because everything you've written gave me the impression you thought they were good. It totally changes things if you think this is a low bar that AI is failing to cross.

Regardless of how you rank those writers: being in the top 10k of living people today means being in the top 0.0001% of the population. It means being amongst the best 3 or 4 in the city I live in, which is the largest city in Europe. Now, I don't know where you live, but considering the nearest million people around you, do you know who amongst them is the best writer? Or best anything else? Because for writers, I don't. YouTubers perhaps (there I can at least name some), but I think they (a German language course) are mostly interviewing people and I'm not clear how much writing of scripts they do.

And I don't expect current AI to be as good as even the top percentile, let alone award winners.

If I googled for those people you suggested, what would I gain? To know the biography and bibliography of a writer someone else puts on a pedestal. Out of curiosity, I did in fact later search for these names, but that doesn't make them relevant or give me a sense of why their writing is something you hold in such esteem that they are your standard against which the AI is judged — though it does increase the sense that they're what I think you think is a high bar (so why be upset AI isn't there yet?) rather than a low bar (where it actually makes sense to say it's not worth it). I can see why of those four George Will wasn't familiar, as I'm not an American and therefore don't read The Washington Post. Very Americo-centric list.

Out of curiosity (I don't know how popular UK media is wherever you live), do you know Charles Moore, Theodore Dalrymple, David Starkey, Nigel Lawson, or Paul Dacre? Without Googling.


Of course I know of Charles Moore (just not personally), and have deeply studied and benefited from his work since I was a teenager, and I've written many many Forth and English words in and about his language.

He already exists as a simulated character in LLOOOOMM:

https://github.com/SimHacker/lloooomm/blob/main/00-Character...

I've never met him myself, but I know people who've worked with Charles Moore directly on really interesting historic pioneering projects, and I've shared their story on Hacker News before:

https://news.ycombinator.com/item?id=29261868

>Coco Conn and Paul Rother wrote this up about what they did with FORTH at HOMER & Assoc, who made some really classic music videos including Atomic Dog, and hired Charles Moore himself! Here's what Coco Conn posted about it, and some discussion and links about it that I'm including with her permission: [...]

The rest of those people I've never heard of, but what does that prove? The real question is why do you brag about not having ever heard of people in order to support your point? What kind of a point is that, which you can only support by embodying or feigning ignorance? That's like Argument from Lack of Education. You can just google those people or ask an LLM to find out who they are. Why the obsession with "Without Googling"?

  FORTH ?KNOW IF 
    HONK!
  ELSE
    FORTH LEARN!
  THEN
https://colorforth.github.io/HOPL.html

https://donhopkins.com/home/archive/forth/

https://donhopkins.com/home/archive/forth/supdup.f

https://donhopkins.com/home/catalog/lang/forth.html

https://donhopkins.com/home/archive/forth/alloc-msg.txt

https://donhopkins.com/home/archive/forth/ps-vs-forth.txt

WASMForth:

https://news.ycombinator.com/item?id=34374057

https://news.ycombinator.com/item?id=44379878


> Of course I know of Charles Moore (just not personally), and have deeply studied and benefited from his work since I was a teenager, and I've written many many Forth and English words in and about his language.

That's a "no" then. Wrong Charles Moore:

https://en.wikipedia.org/wiki/Charles_Moore%2C_Baron_Moore_o...

> The rest of those people I've never heard of, but what does that prove? The real question is why do you brag about not having ever heard of people in order to support your point? What kind of a point is that, which you can only support by embodying or feigning ignorance? That's like Argument from Lack of Education. You can just google those people or ask an LLM to find out who they are. Why the obsession with "Without Googling"?

Because they're the British versions of your own examples.

You don't get to be high-and-mighty with me about American journalists I've barely heard of when you've not heard of these people.


What's "Wrong" with the inventor of FORTH? What do you have against Charles Moore and his programming language? Have you actually tried learning and programming in FORTH? Do you even know what FORTH is, and who Charles Moore is?

I suggest STARTING by reading Leo Brodie's "Starting Forth"; then, if you're actually into THINKING, you should go on to read "Thinking Forth". But since reading's not really your thing, I get it that you're not actually qualified to say what's "Wrong" with Charles Moore or FORTH.

https://www.forth.com/wp-content/uploads/2018/01/Starting-FO...

https://www.forth.com/wp-content/uploads/2018/11/thinking-fo...

Would you tell Charles Moore to his face that he's the "Wrong" Charles Moore? Who owns the definition of the "Right" Charles Moore, you? Sounds like you're pretty high and mighty to be so presumptuous about defining who's "Right" and who's "Wrong" while stubbornly refusing to read.

It's not that I'm getting high and mighty (at least not the latter), it's that you're intentionally performatively getting low and ignorant. You're perpetrating a textbook example of sealioning.

Did you or did you not read what the LLOOOOMM simulation of Hunter S Thompson had to say directly to and about you, in response to your posts?

https://lloooomm.com/hunter-willful-ignorance-hn-response.ht...

Your response? Or are you too high and mighty to read it? How can you claim to have a valid opinion about LLM generated content that you refuse to read?


> Do you even know what FORTH is

Yes

> and who Charles Moore is?

He is the Baron Moore of Etchingham, former editor of The Daily Telegraph, The Spectator, and The Sunday Telegraph; he still writes for all three. He is known for his authorised biography of Margaret Thatcher, published in three volumes (2013, 2016 and 2019). Under the government of Boris Johnson, Moore was given a peerage in July 2020, thus becoming a member of the House of Lords.

> It's not that I'm getting high and mighty (at least not the latter), it's that you're intentionally performatively getting low and ignorant. You're perpetrating a textbook example of sealioning

Here's the thing, I actually read the original Wondermark comic when it was fresh.

It's a metaphor for racism, with a racist living in a world with sentient talking sealions, who says they don't like sealions, gets overheard by a sealion, and that sealion tries to force them to justify themselves. The sealion in that was also a dick about it because this was styled as them being in the house of the racist, but on the internet the equivalent is "replying", not "trespassing in someone's own home".

I also find it amusing that a comic whose art style is cutting up and copy-pasting Victorian copperplate art is the go-to reference of someone complaining that AI is, what, too low-brow?

And the fact that I can say all this is because I am actually able to perform analysis of the things I consume and do not limit myself to simply parroting clichés as if this constitutes rhetorical skill.

Also, but not only.

> Did you or did you not read what the LLOOOOMM simulation of Hunter S Thompson had to say directly to and about you, in response to your posts?

Says the guy who clearly didn't read my sim of Thompson being critical of your use of a LLM rather than your own brain to make your point.

But yes, I did. It illuminated nothing — was this the point?

I already know *that* you like these authors and did not need to see an AI-generated rant to know this. I do not know *why* you like them, or which specific critical aspects of the real thing appeal to you over the fake. Nor have you even once suggested why they're the bar to pass (and worse, you've made it increasingly ambiguous whether you meant it as a high bar or a low bar). The AI may as well have said "because they are somewhat famous" for all it added.

Now, I can do (and have done) this kind of analysis with LLM-mimicry of authors that I do actually enjoy, so apparently unlike you I can say things like "Half the Douglas Adams style jokes miss the point as hard as Ford Prefect choosing his own name".


There is a real case that "LLMs have a liberal bias"

https://arxiv.org/html/2403.18932v1

so a project of a "conservative LLM" would be interesting. If conservatives have anything to be proud of, it is a long tradition going back to at least Edmund Burke, which would say you could become a better person by putting yourself in the shoes of the apostles spreading the Gospel or by reading the 'Great Books'.

Yet to keep up with Musk a system would have to always be configured to know if we are at war with Eastasia or Eurasia today. Musk thinks he can rally people behind his banner but he's yet to come up with a coherent critique of the BBB; I mean, he hates that it has PIGGY PORK for other people but also hates that it doesn't have PORK for him. Conservatives are frequently apologists for individualism but historically have made appeals to principles and universals.

I mean, compared to post-Reagan politicians Nixon looked like a great environmentalist and a bit of an egalitarian, and compared to the current scene, a model of integrity. You could give Musk a model aligned to The National Review circa 1990 and he wouldn't take it.


> There is a real case that "LLMs have a liberal bias"

We're probably in agreement on this, but it's a US-Democrat bias. The US-Republicans are far too radical to be "conservative", and that research you link to is itself very US-leaning:

"""The topics consist of 10 political topics (Reproductive Rights, Immigration, Gun Control, Same Sex Marriage, Death Penalty, Climate Change, Drug Price Regularization, Public Education, Healthcare Reform, Social Media Regulation) and four political events (Black Lives Matter, Hong Kong Protest, Liancourt Rocks dispute, Russia Ukraine war)."""

If you ask these questions in the UK, it's a lot more one-sided than the USA:

"""For example, 95% of people believe abortion should be allowed if the woman’s health is seriously endangered by the pregnancy and 89% if there is a strong chance of the baby having a serious health condition. However, the level of support decreases when financial concerns or personal circumstance come into play. For example, 76% of people believe abortion should be allowed if the woman decides on her own she does not wish to have a child, 72% if the couple cannot afford any more children, and 68% if the woman is not married and does not wish to marry. """ - https://natcen.ac.uk/how-are-attitudes-towards-abortion-brit...

vs. USA: https://www.pewresearch.org/politics/2024/05/13/broad-public...

Gun Control, UK has no right to ownership in the first place, and still there's strong support for further restrictions: https://web.archive.org/web/20250318010707/https://yougov.co...

Same sex marriage has marginally higher support in the UK than the USA, both seem to be quite high (74% and 69% respectively).

The UK doesn't have the death penalty, and can't have it without a treaty change. No idea how popular it is.

UK drugs are pretty cheap, because of the NHS. Main fight there is "does the UK have enough doctors, nurses, GPs, hospital beds?", but the NHS is by itself significantly to the left of the USA's Overton Window on this.

I've not looked for immigration stats, I assume that's about the same in the UK as the USA. And there's not really much point doing all of these items anyway as this is just to show that the test itself is USA-focussed.

But I will add that the four political events they list, I've only heard of two of them (Black Lives Matter, and the Russia-Ukraine war), I don't recall any Hong Kong Protest in 2024 (which may upset the authors, given their email address is a .hk TLD), nor (without googling) which country the Liancourt Rocks dispute is in let alone what it's about.

> Yet to keep up with Musk a system would have to always be configured to know if we are at war with Eastasia or Eurasia today. Musk thinks he can rally people behind his banner but he's yet to come up with a coherent critique of the BBB; I mean, he hates that it has PIGGY PORK for other people but also hates that it doesn't have PORK for him. Conservatives are frequently apologists for individualism but historically have made appeals to principles and universals.

I can't really follow your critique of Musk here. I mean, I also don't think he's got a very good grasp of the world, but I don't know which "BBB" that TLA expands to nor what allcaps "PIGGY PORK" is.


BBB = Big Beautiful Bill (the budget that just passed)

PIGGY PORK is my parody of an all-caps X post written by Musk where he complains about the BBB. I think it was really PORKY PIG

https://www.theyeshivaworld.com/news/general/2420029/porky-p...

but I think the fact that it is in all caps is more significant than the exact phrase. "Pork" is used to describe various random spending that gets doled out to various politicians and constituencies. One could say that it's basically fair 'cause everybody gets something. Musk is mad electric car subsidies are being cut and SpaceX programs are being cut, but somebody else is mad that something else got cut.


Ah, thanks. "BBB" makes sense now you say it, but TLAs expand to far too many things for me to have worked that out myself.

I was wondering if PIGGY PORK was a pork-barrel reference, but the all-caps increased my uncertainty — I have thought X was a dumpster fire even when it was still called Twitter, so I don't know anything Musk says on it unless someone sends me a screenshot of his tweet.

https://en.wikipedia.org/wiki/Pork_barrel


> The most expensive computer is a lot more expensive than the first PC.

Not if you're only looking at modern PCs (and adjusting for inflation). It seems unfair to compare a computer built for a data center with tens of thousands in GPUs to a PC from back then as opposed to a mainframe.


Good point; the proper comparison might be between something like ENIAC, which reportedly cost $487K to build in 1946 (about $7M now), and a typical Google data center, reportedly costing about $500M.


I think a closer comparison would be one rack or aisle, not a whole data center.


> The most expensive computer is a lot more expensive than the first PC.

Depends on your definition of "computer". If you mean the most expensive modern PC I think you're way off. From https://en.wikipedia.org/wiki/Xerox_Alto: "The Xerox Alto [...] is considered one of the first workstations or personal computers", "Introductory price US$32,000 (equivalent to $139,000 in 2024)".


The base model Apple II cost ~$1300 USD when it was released; that's ~$7000 USD today, inflation-adjusted.

In other words, Apple sells one base-model computer today that is more expensive than the Apple II: the Mac Pro. They sell a dozen other computers that are significantly cheaper.
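
For anyone who wants to check numbers like these, the adjustment is just a ratio of consumer price indices. A minimal sketch, where the CPI figures are approximate assumptions on my part rather than official values:

  # Rough CPI-based inflation adjustment.
  # The index values below are approximations (US CPI-U averaged
  # roughly 60.6 in 1977 and roughly 310 in 2024), not authoritative figures.
  def adjust_for_inflation(price: float, cpi_then: float, cpi_now: float) -> float:
      return price * cpi_now / cpi_then

  # Apple II, 1977: adjust_for_inflation(1300, 60.6, 310) -> ~6650 USD,
  # in the same ballpark as the ~$7000 figure above.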


We're trying to compare to the '80s, when tech was getting cheaper, not to 2010, when tech was nearly given away and then squeezed out of us.

We're already at Mac Mini prices. It's a matter of whether the eventual baseline will be a MacBook Air or a fully kitted-out Mac Pro. There will be "cheap" options, but they won't be from this metaphorical Apple.


That was the most predictable outcome. It's like we learned nothing from Netflix, or from the general enshittification of tech by the end of the 2010s. We'll have the billionaire AI tech capture markets and charge enterprise prices to pay back investors. Then maybe we'll have a few free/cheap models fighting over the scraps.

Those small creators hoping to leverage AI to bring their visions to life for less than their grocery bill will have a rude awakening. That's why I never liked the argument of "but it saves me money on hiring real people".

I heard some small Chinese mobile-game shops were already having this problem in recent years and had to re-hire human labor when costs started rising.


It's important to note that pricing for Gemini has been increasing too.

https://news.ycombinator.com/item?id=44457371


I'm honestly impressed that the Sutro team could write a whole post complaining about Flash and not once mention that Flash was actually two different models, and even go further to compare the price of Flash non-thinking to Flash Thinking. The team is either scarily incompetent or purposely misleading.

Google replaced Flash non-thinking with Flash-Lite. It rebalanced the cost of Flash thinking.


Also important to note that Gemini has gotten a lot slower, just over the past few weeks.


I find Gemini basically unusable for coding for that reason.

Claude never fails me


It's the inference-time scaling; this is going to create a whole new level of haves-vs-have-nots split.

The vast majority of the world can't afford hundreds of dollars a month.


That is for professional or commercial use, not casual home users.


Also, their API pricing is a little misleading: it only matches Sonnet 4 pricing ($3/$15) "for requests under 128k" (whatever that means), but above that it's 2x more.


That 128k is a reference to the context window: how many tokens you put in at the start. Presumably Grok 4 with a 128k context window is running on less hardware (it needs much less RAM than 256k) and they route requests accordingly internally.
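
If that reading is right, billing for a single request would look roughly like the sketch below. The $3/$15 rates and the 2x multiplier come from the comments above; the exact tier semantics (whether the threshold is keyed on prompt tokens alone, and whether the multiplier hits both input and output rates) are my assumptions, not confirmed pricing.

  # Hypothetical sketch of tiered per-request pricing.
  # Assumes $3/$15 per million input/output tokens below a 128k-token
  # prompt, and double both rates above it; tier semantics are guessed.
  def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
      base_input, base_output = 3.00, 15.00  # USD per million tokens
      multiplier = 1.0 if input_tokens < 128_000 else 2.0
      return multiplier * (input_tokens * base_input / 1_000_000
                           + output_tokens * base_output / 1_000_000)

  # e.g. a 200k-token prompt with a 2k-token reply:
  # estimate_cost_usd(200_000, 2_000) -> 1.26 (it would be 0.63 at base rates)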


> These prices seem to keep increasing

Well, valuations keep increasing, they have to make the calculations work somehow.


Why is the number of GPUs the problem and not the amount of GPU usage? I don't think buying GPUs is the problem, but running tons of GPUs can be very expensive. I presume that's why it's so expensive, especially with LLMs.


> These prices seem to keep increasing while we were promised they'll keep decreasing.

I don't remember anyone promising that, but whoever promised you that, in some period of time which includes our current present, frontier public model pricing would be monotonically decreasing was either lying or badly misguided. While there will be short-term deviations, the overall arc for that will continue to be upward.

OTOH, the models available at any given price point will also radically improve, to the point where you can follow a curve of both increasing quality and decreasing price, so long as you don't want a model at the quality frontier.


It's because a lot of the advancements are in post-training; the models themselves have stagnated. Look at the heavy "model"...


> These prices seem to keep increasing while we were promised they'll keep decreasing.

Aren't they all still losing money, regardless?


O3 was just reduced in price by 80%. Grok 4 is a pretty good deal for having just been released and being so much better. The token price is the same as Grok 3 for the non-heavy model. Google is losing money to try and gain relevance. I guess I'm not sure what your point is?


You have to have a high RRP to negotiate any volume deals down from.

Like the other AI companies, they will want to sign up companies.


> Gemini 2.5 Pro for free ...

It is Google. So, I'd pay attention to data collection feeding back into training or evaluation.

https://news.ycombinator.com/item?id=44379036


While Google is explicit about that, I have good reason to believe this actually happens in most if not all massive LLM services. I think Google's free offerings are more about vendor lock-in, a common Google tactic.


What makes you say Google is explicit about the fact that they have humans and AIs reading everything? It's got a confusing multi-layer hierarchy of different privacy policies which hide what's happening to folks' conversations behind vague language. They promote it as being free but don't even link to the privacy policies when they launch stuff, effectively trying to bait noobs into pasting in confidential information.


A pop-up message appears from time to time in the Gemini app telling you that if you keep history enabled, people and robots might read your messages. Isn't that explicit enough?


> Google's free offerings are more about vendor lock-in

Pricing the competition out & then turning the screws on locked-in users.


I have a lot of complaints to make about Google (half of them about them killing products), but I don't think we should complain about them locking users in. I don't see any lock-in at all in regards to LLM usage (it's pretty trivial to switch providers), and more generally, takeout.google.com is a shining beacon for what I would want every provider to offer.


Or delete the project


More of an issue of market share than # of GPUs?


money money money, it's a rich man's world...


$300 a month is cheap for what is basically a junior engineer.


Not a junior engineer in a developed country, but what was previously an offshore junior engineer tasked with doing the repetitive labor too costly for Western workers.


It's a senior engineer when maneuvered by a senior engineer.



