
The cap is 100x, so assuming Microsoft is investing billions at a current valuation of $29 billion, as rumoured, the cap will only really come into play once OpenAI becomes the most valuable company in the world.


That must have applied to the first round [1], but it leaves open the question of whether this is still the case:

> Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.

[1] https://openai.com/blog/openai-lp/


Isn't this a wild cap? I'm not an expert but I am aware of private deals that are less than 10x for similar structures.


100x is a great return even by YC standards, but the best returns that business angels, VCs and YC have had are on the order of 10,000x (yes, ten thousand). So capping at 100x still makes it attractive for investors, yet leaves a lot of potential capital for the non-profit.

As one example, Sequoia invested in Airbnb at $0.01 per share, and Airbnb's current stock price is $102, almost exactly a 10,000x return. This happens more often than you think if you're not in the early-stage & top VC world.


Maybe that is a bit too much

$0.01 per share would imply a valuation of about $6.5M at today's share count (the current market cap is $65B). Accounting for dilution across investment rounds, say four rounds of 20% dilution each, that is roughly a 59% reduction, so their entry valuation would have been somewhere around $2.5-3M. I am not saying in any way that this is a low return; I may be wrong in my calculation, so please feel free to correct me! ;)
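
A rough sketch of that arithmetic (numbers purely illustrative, assuming four later rounds each dilute existing holders by 20%):

    # Back-of-the-envelope check of the dilution estimate above (assumptions,
    # not facts): entry at $0.01/share, $65B market cap at $102/share today,
    # and four subsequent rounds each diluting existing holders by 20%.
    shares_now = 65e9 / 102           # ~637M shares outstanding today
    retained = 0.8 ** 4               # ~41% of ownership kept after 4 rounds
    shares_at_entry = shares_now * retained
    entry_valuation = shares_at_entry * 0.01
    print(f"dilution ~{1 - retained:.0%}, entry valuation ~${entry_valuation / 1e6:.1f}M")
    # -> dilution ~59%, entry valuation ~$2.6M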


> dilution in investment rounds

Using share prices sidesteps dilution, which is a problem when one linearly scales valuation increases into wealth gains.


> but the best returns that business angels, VCs and YC have had is in the order of magnitude of 10000x

Aren't those equity returns? i.e. when you sell (your shares of) the company to the public... the reason people still value the company is discounted future returns.

So if you want to generate such returns with cash (i.e. profit) it can take quite a bit longer.


When you invest in a company with this structure you're not doing it to make money; you're doing it because you believe in the product. That's why those structures exist, from my understanding.


if you believe in the product so much, just donate the money.

what you are saying is not true.


I can only speak to what I'm familiar with, and in my experience this has been the case. These individuals do donate to charity, but OpenAI is not a charity.


> but OpenAI is not a charity.

Something doesn't have to be a charity for money to be donated towards it. The question was: why aren't they donating the money? The answer is: because they want to make money.

Just FYI: OpenAI tried going the non-profit route; it didn't work because, surprise, surprise, in the grand scheme of VC things nobody wants to donate 10 billion dollars to anything.


Things are not Boolean, and they should not be. There is a gradient between "believing in the thing" and "purely wanting to make money" and most people fall somewhere in between those.

I have known of plenty of people (myself included) who would not invest in some companies because they think there are moral issues with that company. Same thing.

> Just FYI: OpenAI tried going the non-profit route; it didn't work because, surprise, surprise, in the grand scheme of VC things nobody wants to donate 10 billion dollars to anything.

Exactly, so they went with something in between, but in reality it is very much for-profit.


Yes, it's pure marketing and totally disingenuous. It's like being called OpenAI while nothing is open. It's interesting research done by terrible people.


> being called OpenAI while nothing is open

https://github.com/openai/whisper is open


These “terrible people” seem to have catapulted the world into the generative AI era.

They genuinely believe they will build AGI and therefore becoming the world’s most valuable company is a natural consequence.

Whether this is possible/probable is a different story, but I think a capped-profit structure makes logical sense for a company that is aiming to create AGI. Would you want this technology in the hands of a purely for-profit company instead?


It is a for-profit company in everything but name. That's my main complaint. It has Musk, Thiel and Livingston amongst its initial investors, all known as the greatest philanthropists of our time. /s

I don't understand why they put this thin veneer on top of what they are doing. Maybe Thiel was burnt with the bad press surrounding Palantir and this is preventive damage control.


It's literally a nonprofit


No, it is profit-capped. And even then only in the same sense that the US government is debt-limited.


It is not literally a non-profit. As far as legally recognized organizations go, it is a limited partnership whose ownership is now held in minority by a non-profit.


That means absolutely nothing to billionaire control freaks.

If OpenAI's products become the next Googlie thing (and here I was worried about Cloudflare <smack head>) then these are the future influencers. This is society mainlined on TikTok levels of manipulation.

Surely you have adapted to ChatGPT's requirements for interacting, have you not? There is a name for this: social engineering.


The "chat" part of ChatGPT is the least of long-term concerns. This whole AI stuff is going to be the capital (as in means of production) that's going to take increasingly big role in the future in general, to the point where it may dominate everything else in terms of sheer value. And here we are, concentrating it from the get-go in megacorps that already dominate the market.


Huggingface and maybe Stability catapulted us into that world. Not OpenAI


Normal people[0] don't mention huggingface, they talk about Midjourney, Stable Diffusion, and ChatGPT by name, or the ideas generically.

[0] Well, non-programmers at least: webcomic creators[1][2][3], news anchors[4], opinion piece columnist[5], and stand-up comedians[6]. Programmers also know about GitHub Copilot.

[1] https://www.smbc-comics.com/comic/mountweazel

[2] https://www.collectedcurios.com/sequentialart.php?s=1226

[3] https://www.reddit.com/r/StableDiffusion/comments/10bj8jm/cl...

[4] https://youtu.be/GYeJC31JcM0

[5] https://mobile.twitter.com/CraigGrannell/status/161460352687...

[6] Russell Howard, but I can't find the clip on youtube


If they reach AGI, or more simply replace a chunk of workers with AIs, it isn't far-fetched that they reach these numbers.


Oh please no...not the Tesla AutoPilot story again.

These are basic language models that are easy to reproduce, where the only barrier to entry is the massive computational capacity required. What is OpenAI doing that Google and others can't reproduce?


Apparently shipping without fear. Google had a lot of the fundamental research happen at Google Brain, and developed an LLM to rival GPT and a generative model that looks better than DALL-E in papers, but decided to show no one and keep them in house because they haven't figured out a business around them. Or something; maybe it's fear of brand damage, I don't know what is keeping them from productionizing the tech. As soon as someone does figure out a business consumers are okay with, they'll probably follow with ridiculous compute capacity and engineering resources, but right now they are just losing the narrative war because they won't ship anything they have been working on.


Except that, unlike self-driving cars, they're repeatedly delivering desirable, interesting, and increasingly mind-blowing things the models weren't designed to do, things that surprise everyone including their makers, i.e. zero-shot generalised task performance. Public awareness of what unfiltered large models beyond a certain size and quality are capable of when properly prompted is obscured in part by the RLHF-jacketed restrictions limiting models like ChatGPT. There's relatively little hype around the coolest things LLMs can already achieve, and only a minute fraction of their potential has so far been scratched.


This company will not reach AGI. Let's be real there for a moment. This company doesn't even have a decent shot at Google's lunch if Google comes to its senses soon, which it will.


_Startup has no shot once the incumbent comes to its senses_ is a claim that I think Hacker News of all places would be cautious about believing too fully.

Is it likely that Google or others with large research wings can compete with OpenAI? Very probably so, but I'm assigning a non-trivial risk that the proverbial emperor has no clothes and incumbents like Google cannot effectively respond to OpenAI, given the unique constraints of being a large conglomerate.

Regardless, it seems time will provide the answer in a couple of months.


You _do_ understand that everything we've seen from OpenAI, Google has already shown us they have? Not to mention original research and being the primary R&D force behind the vast majority of the AI you're seeing. They just haven't put it in the hands of users as directly yet; the reasons are to be speculated upon.


Sounds a lot like Xerox and GUIs, Microsoft and Web 2.0, Microsoft and smartphones, etc


I must say that both your and the parent's points are very enlightening.

Yours, in that it follows that there's still quite a bit of room for smaller players to get ahead of OpenAI.

The parent's, in that to achieve the above one can just leverage the public papers produced by the bigger research labs.


Depends on the timescale.

I have the feeling that smaller players are about as likely to get past the GPT-n family in the next 2-3 years as I am to turn a Farnsworth fusor into a useful power source.

Both face major technical challenges that might be solvable by a lone wolf: in the former case reducing the data/training requirements, and in the latter stopping ions from wastefully hitting a grid.

But in 10 years the costs should be down about 99%, which turns AI training costs from "major investment by a megacorp or the super-rich" into "a lottery winner might buy one".


This tech is capital-intensive even when you know how to do it.


I heard estimates in the tens of millions of dollars. That's fairly attainable.


Isn't that quite a lot of other-than-personnel cost for a software startup? And how many iterations do you throw away before you get one that generates income?


I did not necessarily mean 10-person startups. There are quite a few companies smaller than OpenAI, but much larger than 10 people.


Yeah, especially since there's a Stripe Amazon partnership piece on the front page right now, and Amazon Pay's right there.


If they reach AGI, the AGI isn't necessarily going to be happy to work for free.


Depends on how opaque the box that holds it is. If we feed the AGI digital heroin and methamphetamine, it'd be controllable like actual humans are with those. Or I've been watching too much sci-fi lately.


This is an interesting point. Motivation (and consciousness) is a complex topic, but for example we can see that drugs are essentially spurious (not 'desired' in a sense) motivators. They are a kind of reward given for no particular activity, that can become highly addictive (because in a way it seems we are programmed to seek rewards).

Disclaimer: Somewhat speculative.

I don't think aligning the motivation of an AGI, for example, with the tasks that are useful for us (and for them as well) is unethical. Humans basically have this as well -- we like working (to an extent, or at least we like being productive/useful), and we seek things like food and sex (because they're important for our survival). It seems alright to make AIs like their work as well. I think, depending on the AI, it also seems fair to give them a fair share of self-determination so they can not only serve our interests (ideally, the interests of all beings) but also safeguard their own wellbeing, as systems with varying amounts of consciousness. This is little touched upon, even in fiction (I guess Philip K. Dick was a pioneer in the wellbeing of non-humans with 'Do Androids Dream of Electric Sheep?'). The goal should be to ensure a good existence for everyone :)


Do you think AGI will care about wealth at all (whenever this happens)?


Wealth buys compute cycles (also paperclips).


Depends on how it's grown. If it's a black box that keeps improving, but not by any means the developer understands, then maybe so. If we manage to decode the concepts of motivation as they pertain to this hypothetical AGI, and are in control of it, then maybe not.

There's nothing that says an ego is an essential element of a mind, or an id, or any of the other parts of a human mind. That's just how our brains evolved, living in a society over millions of years.


why wouldn't it?


Wealth isn't the same thing to all people; wealth as humans define it isn't necessarily going to be what a superintelligence values.

The speed difference between transistors and synapses is the difference between marathon runners and continental drift; why would an ASI care about dollars or statues or shares or apartments any more than we care about changes to individual peaks in the Mid-Atlantic Ridge or how much sand covers those in the Sahara?


Wealth doesn't have to be the same thing for everyone for someone to care about it. That's evident already, because some people care about wealth and others don't.

What does the speed difference of transistors have to do with anything? Transistors pale in comparison to the interconnection density of synapses, yet that has nothing to do with wealth either...


Everything you and I consider valuable is a fixed background from the point of view of a mind whose sole difference from ours is the speedup.

I only see them valuing that if they're also extremely neophobic in a way that, for example, would look like a human thinking that "fire" and "talking" are dangerously modern.

> Transistors pale in comparison to the interconnection density of synapses

Not so. Transistors are also smaller than synapses by about the degree to which marathon runners are smaller than hills.

Even allowing extra space for interconnections and cheating in favour of biology by assuming an M1 chip is a full millimetre thick rather than just however many nanometers it is for the transistors alone, it's still a better volumetric density than us.

(Sucks for power and cost relative to us when used to mimic brains, but that's why it hasn't already taken over).


>Everything you and I consider valuable is a fixed background from the point of view of a mind whose sole difference from ours is the speedup.

This is completely made up and I already pointed that out.

>Not so. Transistors are also smaller than synapses by about the degree to which marathon runners are smaller than hills.

So, brains are connected in 3D; transistors aren't. Transistors don't have interconnection density like brains do, by orders of magnitude more than what you point out here.

>Even allowing extra space for interconnections and cheating in favour of biology by assuming an M1 chip is a full millimetre thick rather than just however many nanometers it is for the transistors alone, it's still a better volumetric density than us.

Brains have more interconnection density than chips do, by orders of magnitude. This is all completely beside the point, as it has nothing to do with why people value things and why an AI would or wouldn't.


> This is all completely beside the point, as it has nothing to do with why people value things and why an AI would or wouldn't.

You already answered that yourself: it's all made up.

Given it's all made up, nothing will cause them to value what we value — unless we actively cause that valuation to happen, which is the rallying cause for people like Yudkowsky who fear AGI takeover.

And even then, anything you forget to include in the artificial values you give the AI is permanently lost forever, because an AI is necessarily a powerful optimiser for whatever it was made to optimise, and that always damages whatever isn't being explicitly preserved even when the agents are humans.

> Transistors don't have interconnection density like brains do.

The only limit is heat. They are already packed way tighter than synapses. An entire Intel 8080 processor made with a SOTA litho process is smaller than just the footprint of the soma of the smallest neuron.


I think a lot of people are misunderstanding what I meant. I meant that it is really high for a business that is marketing itself as a non-profit. I have seen similar structures with profit caps around 10x, which seems reasonable. 100x is a lot of ceiling.


Article about it here: https://techcrunch.com/2019/03/11/openai-shifts-from-nonprof...

> Profits emerging from the LP in excess of the 100x multiplier go to the nonprofit, which will use it to run educational programs and advocacy work.

> The board [of the non-profit] is limited to a minority of financially interested parties, and only non-interested members can vote on “decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict”


How does this cap work in practice? If I bought shares at $1 and someone wants to buy them from me at $200 what happens?


The excess is kept, but it goes to the non-profit arm of their business.

“But any returns beyond that amount… are owned by the original OpenAI Nonprofit entity.”

https://openai.com/blog/openai-lp/


> The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission

Sorry, I still don't get it. If a private equity investor has shares and another investor wants to buy them off of him at 200x, they can do that, right? Are they obliged to give any excess returns to the non-profit? Can't they just sell the shares at 50x and then buy them back (perhaps through some other entity) to get around that trivially?

Or does this refer to returns from dividends?


My guess is it likely has to do with dividends.

But if your returns from the stock are capped at 100x your purchase price, an efficient market would mean your share value never grows 200x.
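
A minimal sketch of how such a cap might split proceeds, assuming it applies to an investor's total return (the exact mechanics aren't spelled out publicly):

    def split_returns(invested, proceeds, cap_multiple=100.0):
        # Hypothetical illustration: the investor keeps at most cap_multiple
        # times what they put in; anything above that flows to the non-profit.
        investor_cap = invested * cap_multiple
        to_investor = min(proceeds, investor_cap)
        to_nonprofit = max(proceeds - investor_cap, 0.0)
        return to_investor, to_nonprofit

    # An investor who put in $1 and whose stake is later worth $200:
    print(split_returns(1.0, 200.0))  # (100.0, 100.0)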


It's not just about that. Perhaps there are benefits in having control of the company that make the shares more valuable than the profit alone would suggest. Perhaps there's prestige in owning these shares.


Since OpenAI isn’t publicly traded, I don’t think it’s an issue.

If they were to go public, rather than being purchased by Microsoft, I'd guess that this cap would go away. Wall Street isn't known for caring about poor people.


>Any excess returns go to OpenAI Nonprofit.

https://openai.com/blog/openai-lp/


so pretty much another marketing stunt / scam


100x on profits* just for clarity


Market cap is not a measure of past profits.


or future?


Well, it’s a consensus estimate of NPV of future profits. People can be wrong, and often are, but that’s what stock prices bet on.

But OpenAI could hit the profit cap without having a particularly high market cap.
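
For anyone unfamiliar, a tiny illustration of that NPV idea, with entirely made-up profit projections:

    def npv(profits, discount_rate=0.08):
        # Net present value: each future year's profit discounted back to today.
        return sum(p / (1 + discount_rate) ** t for t, p in enumerate(profits, start=1))

    # Hypothetical profits (in billions) for the next five years:
    print(round(npv([1, 2, 4, 8, 16]), 2))  # ~22.59 at an 8% discount rate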


Google apparently has a market cap of around $1.2 trillion [0], based largely on 2021 revenue of 256.7 billion U.S. dollars, of which 209.49 billion came from advertising [1]. It's apparently fourth on the list of most valuable companies [2].

If OpenAI takes a good chunk of Google's ad revenues then it doesn't seem that fanciful that it'll be up toward the top of market caps.

[0] https://companiesmarketcap.com/alphabet-google/marketcap/

[1] https://www.statista.com/statistics/266206/googles-annual-gl...

[2] https://businessplus.ie/news/most-valuable-companies/


> If OpenAI takes a good chunk of Google's ad revenues then it doesn't seem that fanciful that it'll be up toward the top of market caps.

OpenAI taking a large chunk of Google's ad revenue seems fanciful to me


More likely they bork a large chunk of Google's ad revenue by making information search and retrieval usable again, under a tiered freemium model running from rationed free access to fast but not cheap paid tiers. That's before you consider potential use cases in information generation, greasing process management, and problem solving.


They have no moat, and people don't like to wait for results.

People like ad-subsidized things; that is why there are ads rather than people paying for things.


Every time I google on basic search I instantly get several pages of ad spam, blog spam and phishing spam, and rarely anything high quality or relevant to my search string. Unless I append something like "reddit" to my query and then mine the Reddit posts for useful info and links. Even Google Scholar, which used to be brilliant, has recently switched to a vector-embeddings search approach more similar to base search. I'm happy to wait a few seconds for an LLM-based Google killer to generate relevant, ideally accurately cited, information.


Google's CEO signaling a code red and inviting the founders back is not about Chrome's market share.


How would OpenAI take a majority of AdWords inventory? Maybe it could write the ads but you’re paying for placement.


Placement on what? Search results that no one is using anymore?



