
> An innovative product with no means of significant revenue generation.

OpenAI has annualized revenue of $20bn. That's not Google, but it's not insignificant.





It is insignificant when they're spending more than $115bn to offer their service. And yes, I say "more than," not because I have any inside knowledge but because I'm pretty sure $115bn is a "kind" estimate and the expenditure is probably higher. But either way, they're running at a loss. And for a company like them, that loss is huge. Google could take the loss as could Microsoft or Amazon because they have lots of other revenue sources. OAI does not.

Google is embedding Gemini into Chrome DevTools. You can ask for an analysis of individual network calls in your browser by clicking a checkbox. That's just one example of the power of a platform. They seem to be better at integration than Microsoft.

OpenAI has this amazing technology and a great app, but the company feels like some sort of financial engineering nightmare.


To be fair, the CEO of OpenAI is also a crypto bro. Financial engineering is right in their wheelhouse.

We live in crazy times, but given what they've spent and committed to, that's a drop in the bucket relative to what they need to be pulling in. They're history if they can't pump up the revenue much, much faster.

Given that we’re likely at peak AI hype at the moment they’re not well positioned at all to survive the coming “trough of disillusionment” that happens like clockwork on every hype cycle. Google, by comparison, is very well positioned to weather a coming storm.


Google survives because I still Google things, and the phone I'm typing this on is a Google product.

Whereas I haven't opened the ChatGPT bookmark in months and will probably delete it now that I think about it.


RIP privacy.

Hello Stasi Google and its full personalised file on XorNot.

Google knows when you're about to sneeze.


And a $115bn burn rate. They're toast if they can't figure out how to stay on top.

You could say that about any AI company that isn't at the top as well.

You can say it about the AI companies, but Google or Microsoft are far from AI companies.

That's a good point. Google was sleeping on AI and couldn't come up with a product before OpenAI; they only scrambled to release something once OpenAI became all the rage. Big companies are hard to budge and slow to move in a new direction.

Google and Microsoft have existing major money-printing businesses to keep their AI business afloat and burn money for a while. That's how Microsoft broke into gaming (and then squandered it years later through unrelated incompetence).

OpenAI doesn't have that.


Every F500 CEO told their team "have an AI strategy ASAP".

In a year, when the economy might be in worse shape, they'll ask their team if the AI thing is working out.

What do you think happens to all the enterprise OpenAI contracts at that point? (Especially if the same tech layperson CEOs keep reading Forbes and hearing Scott Galloway dump on OpenAI and call the AI thing a "bubble"?)


> What do you think happens to all the enterprise OpenAI contracts at that point?

They will go to Google if it wins the AI race.


I will change a few lines of code and use another AI model?

Are all of their sales their code gen model? And isn't there a lot of competition in the code gen space from Google and Anthropic?

I'd imagine they sold these to enterprise:

https://openai.com/business/

"ChatGPT for Business", sold per seat

"API Platform"

I could see the former getting canned if AI isn't adding value.

Developers can change the models they use frequently, especially with third-party infrastructure like OpenRouter or FAL.
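In practice that switch is often just a config change, because many gateways (OpenRouter among them) expose an OpenAI-compatible chat completions endpoint. A minimal sketch of the idea; the provider entries and model names here are illustrative assumptions, not a real integration:

```python
import json

# Hypothetical config: each provider is just a base URL and a model string.
# With OpenAI-compatible endpoints, that's the whole "migration".
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1",
               "model": "gpt-4o"},
    "openrouter": {"base_url": "https://openrouter.ai/api/v1",
                   "model": "anthropic/claude-3.5-sonnet"},
}

def build_chat_request(provider: str, prompt: str) -> tuple[str, str]:
    """Return (url, json_body) for a chat completion against the chosen provider."""
    cfg = PROVIDERS[provider]
    body = json.dumps({
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    })
    return f"{cfg['base_url']}/chat/completions", body
```

Swapping vendors then means editing the `PROVIDERS` dict (plus the API key), not rewriting call sites.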


Yeah, given that all the top AI models are increasingly generalist, there is less and less reason over time to use one over another.

It's really even easier than that. I already do all my work on AWS and use Bedrock, which hosts every popular model plus Amazon's own, except for OpenAI's closed-source models.

I have a reusable library that lets me switch between any of the models I support, or any new model in the same family that uses the same request format.

Every project I’ve done, it’s a simple matter of changing a config setting and choosing a different model.

If the model provider goes out of business, it’s not like the model is going to disappear from AWS the next day.
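A minimal sketch of that "change one config setting" pattern, assuming Bedrock's Converse-style request shape (the model keys and helper names here are hypothetical):

```python
# Hypothetical model registry: swapping models is a modelId change, because
# the Converse request shape is the same across Bedrock model families.
MODELS = {
    "claude": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "llama": "meta.llama3-1-70b-instruct-v1:0",
}

def converse_kwargs(model_key: str, prompt: str, max_tokens: int = 512) -> dict:
    """Build kwargs for the bedrock-runtime Converse API for the configured model."""
    return {
        "modelId": MODELS[model_key],
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# With AWS credentials configured, the actual call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.converse(**converse_kwargs("claude", "Hello"))
#   print(resp["output"]["message"]["content"][0]["text"])
```

The request builder is the only place a model name appears, which is what makes "change a config setting and choose a different model" work.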


> Bedrock

This sounds so enterprise. I've been wanting to talk to people who actually use it.

Why use Bedrock instead of OpenRouter, Fal, etc.? Doesn't that tie you down to Amazon forever?

Isn't the API worse? Aren't the p95 latencies worse?

The costs higher?


Given a choice between being "locked in" to a major cloud provider and trusting your business to a randomish little company, you are never going to get a compliance department to go for the latter. "No one ever got fired for choosing AWS."

This is the API - it’s basically the same for all supported languages

https://docs.aws.amazon.com/code-library/latest/ug/python_3_...

Real companies aren't as concerned about cost as they are about working with other real companies, compliance, etc., and they're comparing the cost and opportunity of doing a thing versus not doing it.

One of my specialties is call centers. Every call deflected by using AI instead of a human agent can save $5 to $15.

Even letting your cheaper human agents handle a problem with AI assisting them in the background saves money. $15 saved can buy a lot of inference.
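A back-of-the-envelope check on "a lot of inference" (the token prices and per-call token counts below are illustrative assumptions, not any vendor's actual rates):

```python
# Assumed prices: $3 per million input tokens, $15 per million output tokens.
IN_PRICE = 3 / 1_000_000   # $ per input token (assumption)
OUT_PRICE = 15 / 1_000_000  # $ per output token (assumption)

def inference_cost(in_tokens: int, out_tokens: int) -> float:
    """Dollar cost of one AI interaction at the assumed per-token prices."""
    return in_tokens * IN_PRICE + out_tokens * OUT_PRICE

# Suppose a deflected call burns ~4,000 input and ~1,000 output tokens.
per_call = inference_cost(4_000, 1_000)  # ~$0.027 per call
calls_covered = 15 / per_call            # one $15 saving funds ~555 calls
```

Even if the real numbers are several times worse, the deflection savings dwarf the inference bill.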

And the lock-in boogeyman is something only geeks care about. Migrations from one provider to another cost so much money at even medium scale that they are hardly ever worth it, between the costs, the distraction from value-added work, and the risks of regressions and downtime.


> And the lock-in boogeyman is something only geeks care about. Migrations from one provider to another cost so much money...

You just gave the definition of lock in.


You are “locked in” to your infrastructure if you have a bunch of VMs at your colo and you need to move.

Do you also suggest that people never use a colo?

I've seen it take a year to move a bunch of VMs out of a colo.


99% of people who use it do so because of A. existing agreements regarding compliance and billing (including credits, spend agreements, etc.) and B. IAM/org permissioning structures that they already have set up.

> Isn't the API worse

No; for general inference the norm is to use provider-agnostic libraries that paper over individual differences. And if you're doing non-standard stuff? Throw the APIs at Opus or something.

> Aren't the p95 latencies worse?

> The costs higher?

The costs for Anthropic models are the same, and the p95 latencies are not higher; if anything they're more stable. The open-weights models do look a bit more expensive, but as noted, many businesses don't pay sticker price for AWS spend, or they find it worth it anyway.


Bedrock is a lot more than just a standard endpoint. Also, the security guarantees.

One less vendor to vet, one less contract to negotiate, one less third-party system to administer. You're already locked into AWS anyway. It integrates with other AWS services, and access control is already figured out.

Bedrock hosts Gemini models? Incredibly popular, currently SOTA, biggest competitor to OpenAI, those models? I don't think it does.

I forgot to mention that. But funnily enough, AWS and GCP made a joint announcement that they are introducing a service to let users easily connect the two providers' private networks without going over the public internet.

https://cloud.google.com/blog/products/networking/aws-and-go...

This isn't some type of VPN solution; think more like Direct Connect, but between AWS and GCP instead of AWS and your colo.

It's posited that AWS agreed to this so sales could tell customers that they don't have to move their workloads off AWS to take advantage of Google's AI infrastructure without experiencing extreme latency.



