> In pursuit of our mission to ensure advanced AI benefits all of humanity, OpenAI remains a capped-profit company and is governed by the OpenAI non-profit. This structure allows us to raise the capital we need to fulfill our mission without sacrificing our core beliefs about broadly sharing benefits and the need to prioritize safety.
The cap is 100x, so assuming Microsoft is investing billions at the rumoured current valuation of $29 billion, the cap will only really come into play once OpenAI becomes the most valuable company in the world.
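A rough, illustrative sketch of why that's plausible. The $10 billion investment figure and $29 billion valuation are press rumours, not confirmed terms, and per OpenAI's own statement the cap for later rounds may be lower than 100x:

```python
# Back-of-envelope: at what company value would a 100x cap bind?
# All inputs are rumoured figures, not confirmed deal terms.
investment = 10e9            # rumoured Microsoft investment, USD
valuation = 29e9             # rumoured valuation, USD
cap_multiple = 100

stake = investment / valuation                # naive ownership fraction, ~34%
capped_return = cap_multiple * investment     # $1 trillion

# Company value at which that stake is worth the capped return:
value_at_cap = capped_return / stake          # ~$2.9 trillion
print(f"stake ~{stake:.0%}, cap binds near ${value_at_cap/1e12:.1f}T")
```

Under those assumptions the cap only binds once OpenAI is worth roughly $2.9 trillion, i.e. around or above the largest market caps in the world today.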
It must have been in the first round [1], but that leaves open the question of whether this is still the case:
> Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.
100x is a great return even by YC standards, but the best returns that business angels, VCs and YC have had are on the order of 10000x (yes, ten thousand). So capping at 100x still makes it attractive for investors, yet leaves a lot of potential capital for the non-profit.
As one example, Sequoia invested in Airbnb at $0.01 per share, and Airbnb's current stock price is $102, almost exactly a 10000x return. This happens more often than you think if you're not in the early-stage & top-VC world.
$0.01 per share would mean a ~6.5M USD valuation (current mkt cap is 65Bn). Accounting for dilution in investment rounds, let's say 4 x 20% dilution, that compounds to around a 59% penalty in valuation. Roughly, their entry price would correspond to a 2.5-3M USD valuation. I am not saying in any way that this is a low return either. I may be wrong in my calculation, so please feel free to correct me! ;)
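A quick sketch of that dilution arithmetic. The four successive rounds at 20% dilution each are the comment's assumption, not Sequoia's actual cap-table history; compounded, they leave about 41% of the original stake:

```python
# Dilution arithmetic for the Airbnb example above.
# Four rounds of 20% dilution each is an assumption, not actual deal history.
entry_price = 0.01             # USD per share at investment
current_price = 102.0          # USD per share today
raw_multiple = current_price / entry_price      # ~10,200x on paper

retained = 0.8 ** 4            # stake kept after 4 successive 20% dilutions
penalty = 1 - retained         # compounded dilution penalty, ~59%
effective_multiple = raw_multiple * retained    # ~4,200x after dilution
print(f"penalty ~{penalty:.0%}, effective return ~{effective_multiple:,.0f}x")
```

Note the penalty compounds multiplicatively (1 - 0.8^4 ≈ 59%) rather than adding up to 52%, but either way the effective return is still in the thousands of x.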
> but the best returns that business angels, VCs and YC have had is in the order of magnitude of 10000x
Aren't those equity returns? i.e. when you sell (your shares of) the company to the public... the reason people still value the company is discounted future returns.
So if you want to generate such returns with cash (i.e. profit) it can take quite a bit longer.
When you invest in a company with this structure you're not doing it to make money, you're doing it b/c you believe in the product, that's why those structures exist, from my understanding.
I can only speak to what I'm familiar with, and in my experience this has been the case. These individuals do donate to charity, but OpenAI is not a charity.
Something doesn't have to be a charity to be donated money towards. The question was: Why aren't they donating the money? The answer is: Because they want to make money.
Just FYI: OpenAI tried going the non-profit route, and it didn't work because, surprise, surprise, in the grand scheme of VC things nobody wants to donate 10 billion dollars to anything.
Things are not Boolean, and they should not be. There is a gradient between "believing in the thing" and "purely wanting to make money" and most people fall somewhere in between those.
I have known of plenty of people (myself included) who would not invest in some companies because they think there are moral issues with that company. Same thing.
> Just FYI: OpenAI tried going the non-profit route, and it didn't work because, surprise, surprise, in the grand scheme of VC things nobody wants to donate 10 billion dollars to anything.
Exactly, so they went for something in between, but in reality it is very much for-profit.
Yes, it's pure marketing and totally disingenuous. It's like being called OpenAI while nothing is open. It's interesting research done by terrible people.
These “terrible people” seem to have catapulted us into a generative-AI world.
They genuinely believe they will build AGI and therefore becoming the world’s most valuable company is a natural consequence.
Whether this is possible/probable is a different story, but I think a capped profit structure makes logical sense for the company that is aiming to create AGI. Would you want this technology instead in the hands of a for profit company?
It is a for-profit company in everything but name. That's my main complaint.
It has Musk, Thiel and Livingston amongst its initial investors, all known as the greatest philanthropists of our time. /s
I don't understand why they put this thin veneer on top of what they are doing. Maybe Thiel was burnt with the bad press surrounding Palantir and this is preventive damage control.
It is not literally a non-profit. As far as legally recognized organizations it is a limited partnership with a now minority ownership held by a non-profit.
That means absolutely nothing to billionaire control freaks.
If OpenAI's products become the next big Google-like thing (and here I was worried about Cloudflare <smack head>) then these are the future influencers. This is society mainlined on TikTok levels of manipulation.
Surely you have adapted to ChatGPT's requirements for interacting, have you not? There is a name for this: social engineering.
The "chat" part of ChatGPT is the least of long-term concerns. This whole AI stuff is going to be the capital (as in means of production) that's going to take increasingly big role in the future in general, to the point where it may dominate everything else in terms of sheer value. And here we are, concentrating it from the get-go in megacorps that already dominate the market.
Oh please no...not the Tesla AutoPilot story again.
These are basic language models easy to reproduce where the only barrier to entry is the massive computational capacity required. What is OpenAI doing that Google and others can't reproduce?
Apparently, shipping without fear. Google had a lot of the fundamental research happen at Google Brain and developed an LLM to rival GPT and a generative model that looks better than DALL-E in papers, but decided to show no one and keep them in house because they haven't figured out a business around them. Or maybe it's fear of brand damage; I don't know what is keeping them from productionizing the tech. As soon as someone does figure out a business consumers are okay with, they'll probably follow with ridiculous compute capacity and engineering resources, but right now they are just losing the narrative war because they won't ship anything they have been working on.
Except, unlike self-driving cars, they're repeatedly delivering desirable, interesting, and increasingly mind-blowing things these systems weren't designed to do, things that surprise everyone including their makers, i.e. zero-shot generalised task performance. Public awareness of what unfiltered large models beyond a certain size and quality are capable of when properly prompted is obscured in part by the RLHF-jacketed restrictions limiting models like ChatGPT. There's relatively little hype around the coolest things LLMs can already achieve, and only a minute fraction of the surface potential has so far been scratched.
This company will not reach AGI. Let's be real there for a moment. This company doesn't even have a decent shot at Google's lunch if Google comes to its senses soon, which it will.
_startup has no shot once incumbent comes to their senses_ is a claim that I think HackerNews of all places would be cautious in believing too fully.
Is it likely Google or others with large research wings can compete with OpenAI? Very probably so, but I'm assigning a non-trivial probability that the proverbial emperor has no clothes and incumbents like Google cannot effectively respond to OpenAI, given the unique constraints of being a large conglomerate.
Regardless, it seems time will provide the answer in a couple of months.
You _do_ understand that everything we've seen from OpenAI, Google has already shown us they have? Not to mention OG research and being the primary R&D force behind the vast majority of the AI you're seeing. They haven't put it in the hands of users as directly yet, though, for reasons to be speculated upon.
I have the feeling that smaller players are about as likely to get past GPT-n family in the next 2-3 years as I am to turn a Farnsworth Fusor into a useful power source.
Both face major technical challenges that might be solvable by a lone wolf: in the former case, reducing the data/training requirements; in the latter, stopping ions from wastefully hitting a grid.
But in 10 years the costs should be down about 99%, which turns the AI training costs from "major investment by mega corp or super-rich" into "lottery winner might buy one".
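As a sanity check on that 99% figure, it matches what you get if costs halve roughly every 18 months (a Moore's-law-flavoured assumption, not a forecast):

```python
# Sanity check: a ~99% cost drop over 10 years corresponds to costs
# halving roughly every 18 months. The halving period is an assumption.
halving_period_years = 1.5
years = 10

cost_fraction = 0.5 ** (years / halving_period_years)   # ~1% remaining
print(f"remaining cost after {years} years: {cost_fraction:.1%}")
```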
Isn't that quite a lot of other-than-personnel cost for a software startup? And how many iterations do you throw away before you get one that generates income?
Depends on how opaque the box that holds it is. If we feed the AGI digital heroin and methamphetamine, it'd be controllable the way actual humans are with those. Or I've been watching too much sci-fi lately.
This is an interesting point. Motivation (and consciousness) is a complex topic, but for example we can see that drugs are essentially spurious (not 'desired' in a sense) motivators. They are a kind of reward given for no particular activity, that can become highly addictive (because in a way it seems we are programmed to seek rewards).
Disclaimer: Somewhat speculative.
I don't think aligning the motivation of an AGI with the tasks that are useful for us (and for them as well) is unethical. Humans basically have this as well: we like working (to an extent, or at least we like being productive/useful), and we seek things like food and sex (because they're important for our survival). It seems alright to make AIs like their work as well. Depending on the AI, it also seems fair to give them a fair share of self-determination so they can not only serve our interests (ideally, the interests of all beings) but also safeguard their own wellbeing, as systems with varying amounts of consciousness. This is little touched upon, even in fiction (I guess Philip K. Dick was a pioneer in the wellbeing of non-humans with 'Do Androids Dream of Electric Sheep?'). The goal should be to ensure a good existence for everyone :)
Depends on how it's grown. If it's a black box that keeps improving, but not by any means the developer understands, then maybe so. If we manage to decode the concepts of motivation as they pertain to this hypothetical AGI, and are in control of it, then maybe no.
There's nothing that says a mind needs an ego as an essential element, or an id, or any of the other parts of a human mind. That's just how our brains evolved, living in societies over millions of years.
Wealth isn't the same thing to all people, wealth as humans define it isn't necessarily going to be what a superintelligence values.
The speed difference between transistors and synapses is the difference between marathon runners and continental drift; why would an ASI care more about dollars or statues or shares or apartments any more than we care about changes to individual peaks in the mid-Atlantic ridge or how much sand covers those in the Sahara?
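For context, the rough numbers behind that analogy (all order-of-magnitude public estimates; the geological ratio actually comes out a couple of orders of magnitude larger than the hardware one, but the gap is vast either way):

```python
# Order-of-magnitude numbers behind the runners-vs-continental-drift analogy.
# All figures are rough public estimates, not precise measurements.
transistor_hz = 1e9                        # ~GHz switching rates
synapse_hz = 100.0                         # neurons fire at ~10-100 Hz
hardware_ratio = transistor_hz / synapse_hz        # ~1e7

runner_m_per_s = 3.0                       # marathon pace, ~3 m/s
drift_m_per_s = 0.05 / (365 * 24 * 3600)   # ~5 cm/year continental drift
analogy_ratio = runner_m_per_s / drift_m_per_s     # ~2e9

print(f"hardware ratio ~{hardware_ratio:.0e}, analogy ratio ~{analogy_ratio:.0e}")
```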
Wealth doesn't have to be the same thing for everyone for someone to care about it. That's evident already, because some people care about wealth and others don't.
What does the speed difference of transistors have to do with anything? Transistors pale in comparison to the interconnection density of synapses, yet it has nothing to do with wealth either...
Everything you and I consider value is a fixed background from the point of view of a mind whose sole difference from ours is the speedup.
I only see them valuing that if they're also extremely neophobic in a way that, for example, would look like a human thinking that "fire" and "talking" are dangerously modern.
> Transistors pale in comparison to the interconnection density of synapses
Not so. Transistors are also smaller than synapses by about the degree to which marathon runners are smaller than hills.
Even allowing extra space for interconnections, and cheating in favour of biology by assuming an M1 chip is a full millimetre thick rather than the however-many nanometres it is for the transistors alone, it still has a better volumetric density than we do.
(Sucks for power and cost relative to us when used to mimic brains, but that's why it hasn't already taken over).
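A back-of-envelope check on that claim. The figures are rough public estimates (~16 billion transistors on a ~120 mm² M1 die; ~10^14 synapses in a ~1.2 litre brain, where synapse-count estimates vary by an order of magnitude), and the millimetre thickness is the same deliberately generous handicap as in the comment:

```python
# Back-of-envelope check of the volumetric density claim above.
# All figures are rough public estimates, not precise measurements.
m1_transistors = 16e9        # Apple M1: ~16 billion transistors
m1_area_mm2 = 120            # die area ~120 mm^2
m1_thickness_mm = 1.0        # generous: actual active layer is nm-scale
m1_density = m1_transistors / (m1_area_mm2 * m1_thickness_mm)   # per mm^3

brain_synapses = 1e14        # ~100 trillion synapses (estimates vary)
brain_volume_mm3 = 1.2e6     # ~1.2 litres
brain_density = brain_synapses / brain_volume_mm3               # per mm^3

print(f"M1: {m1_density:.1e}/mm^3, brain: {brain_density:.1e}/mm^3")
```

Even with the full-millimetre handicap the chip comes out slightly ahead per unit volume, which is consistent with the claim (though a higher synapse estimate would flip it).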
>Everything you and I consider value is a fixed background from the point of view of a mind whose sole difference from ours is the speedup.
This is completely made up and I already pointed that out.
>Not so. Transistors are also smaller than synapses by about the degree to which marathon runners are smaller than hills.
So: brains are connected in 3D; transistors aren't. Transistors don't have interconnection density like brains do, by margins orders of magnitude greater than what you point out here.
>Even allowing extra space for interconnections and cheating in favour of biology by assuming an M1 chip is a full millimetre thick rather than just however many nanometers it is for the transistors alone, it's still a better volumetric density than us.
Brains have more interconnection density than chips do, by orders of magnitude. This is all completely beside the point, as it has nothing to do with why people value things and why an AI would or wouldn't.
> This is all completely beside the point, as it has nothing to do with why people value things and why an AI would or wouldn't.
You already answered that yourself: it's all made up.
Given it's all made up, nothing will cause them to value what we value — unless we actively cause that valuation to happen, which is the rallying cause for people like Yudkowsky who fear AGI takeover.
And even then, anything you forget to include in the artificial values you give the AI is lost forever, because an AI is necessarily a powerful optimiser for whatever it was made to optimise, and that always damages whatever isn't being explicitly preserved, even when the agents are humans.
> Transistors don't have interconnection density like brains do.
Only limit is the heat. They are already packed way tighter than synapses. An entire Intel 8080 processor made with SOTA litho process is smaller than just the footprint of the soma of the smallest neuron.
I think a lot of people are misunderstanding what I meant. I meant that it is really high for a business that is marketing themselves as non-profit. I have seen similar structures that are like 10x profit caps, which seems reasonable. 100x is a lot of ceiling.
> Profits emerging from the LP in excess of the 100x multiplier go to the nonprofit, which will use it to run educational programs and advocacy work.
> The board [of the non-profit] is limited to a minority of financially interested parties, and only non-interested members can vote on “decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict”
> The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission
Sorry I still don’t get it. If a private equity investor has shares and another investor wants to buy them off of him at 200x they can do that right? Are they obliged to give any excess returns to the non profit? Can’t they just sell the shares at 50x and then buy them back (perhaps through some other entity) to get around that trivially?
It's not just about that. Perhaps there are benefits in having control of the company that make the shares more valuable than just the profit would make it out to be. Perhaps there's prestige in owning these shares.
Since OpenAI isn’t publicly traded, I don’t think it’s an issue.
If they were to go public, rather than being purchased by Microsoft, I'd guess that this cap would go away. Wall Street isn't known for caring about poor people.
Google apparently has a market cap of around $1.2 trillion [0], based largely on 2021 revenue of $256.7 billion, of which $209.49 billion came from advertising [1]. It's apparently fourth on the list of most valuable companies [4].
If OpenAI takes a good chunk of Google's ad revenues then it doesn't seem that fanciful that it'll be up toward the top of market caps.
More likely they bork a large chunk of Google's ad revenue by making information search and retrieval usable again, under a tiered freemium model ranging from a rationed "UBI" free tier to fast-but-not-cheap paid tiers. That's before you consider information generation, process-management greasing and problem-solving potential use cases.
Every time I google on basic search, I instantly get several pages of adspam, blogspam and phishing spam, and rarely anything high quality or relevant to my search string. Unless I append something like "reddit" to my query and then mine the Reddit posts for useful info and links. Even Google Scholar, which used to be brilliant, has recently switched to a vector-search embeddings approach more similar to base search. I'm happy to wait a few seconds for an LLM-based Google killer to generate ideally accurately cited, relevant information.
I get that these things cost a huge amount of money and there's a "lot of opportunity" (aka make money and influence), and I don't have many problems with that, except when the scumbaggery-to-signal ratio becomes too much.
But what I really hate about this whole OpenAI thing is their chosen path to have their cake and eat it too. Sam Altman seems to be something like the love child of Musk and Zuckerberg and one of the main traits is their lack of honesty.
Satya Nadella is.. Satya Nadella; there's a reason he was chosen to be the CEO of Microsoft. And while I enjoy seeing the Google demi-gods squirm, this whole OpenAI non/capped/profit thing stinks, and I really don't see anyone involved as capable of, or having the character for, being something better than the current tech oligarchy.
I take this in the same vein as "Patagonia Founder Donates Company to Charity" and view it as a clever shell game. Mostly because I'm cynical and have watched the SV/VC game way too long to be healthy.
Oh, I think it's way more of a marketing scam than the Patagonia thing, which I think was kind of legit. This one doesn't even sound legit, even if they do exactly what they say... which is almost nothing. They aren't even really saying they'll do anything different with regard to profit. With Patagonia, I think the founder and his heirs really have given up lots of profit they could have had, immediately, to dedicate it to other causes. (I think?) Nobody's given up anything here.
1. If instead of donating the company he had left it to his kids he would have paid a lot in taxes (2:58)
2. He donated the voting shares of the company to a 501c4 that will remain controlled by his family and is allowed to lobby the government (4:10)
3. Normally when you make a donation you're giving up influence over what happens after that (4:25).
4. Other billionaires do other things (rest of video)
But #3 isn't actually true: any of us can donate to a donor-advised fund, which will let us later choose what charity we want the money to go to. This is a good idea if you want to donate but haven't decided where to donate yet, or want to fund opportunities that aren't available yet. They did it through new organizations instead of opening an account at Fidelity, but it's the same thing other than the scale. I wouldn't call your donations "not legit" for using a DAF.
Similarly, Adam sort of implies that #2 was tax-deductible, but donations to a 501c4 aren't. They had to pay tax on those shares based on their fair market value.
Overall, I don't see how this makes the donation no longer "legit" or "authentic"? By making the donation he has given up almost all of the benefit of having that money: he can't spend it for the benefit of himself or his descendants anymore. It can't buy them yachts, fancy houses, etc. Instead, they have to use the money to benefit others, which is why we give a tax break for it.
Yeah, ok. I mean, I agree with his overall point that we shouldn't be like "oh hooray for the kind-hearted billionaires"; there is something in it for them in how they have chosen to donate it, of course. And they are still pretty darn wealthy already; his kids probably still won't need to work, thanks to existing, already-extracted profits.
But that does seem a lot more real than the OpenAI shenanigans, they have actually done something, and they have given up being even more fabulously wealthy than they are already, even if they still have direction over how the money is used, including lobbying -- both for climate change, but ok, let's say also for things that benefit them.
They've still done something, unlike the OpenAI thing, which seems like giving up some hypothetical future profits that probably wouldn't happen anyway, and making no difference at all for the foreseeable future; no difference but PR advantage.
> Overall, I don't see how this makes the donation no longer "legit"?
It's not necessarily not legit, but he managed to keep a 3B business under family control and bypass paying around 700M in taxes in doing so. So the altruistic messaging that it was donated to save the world is mildly two-faced. There is nothing wrong with it, but the news stories did leave out a few of the details. I only bring this up because the message about OpenAI being structured in such a way doesn't pass the smell test, knowing SV/VC and the key players involved. Again, I admit I'm cynical and can very well be wrong; it also doesn't affect my life, so why do I care? But I bring it up for conversation on HN because I feel it's fair to discuss it and the possibilities. [1]
All I can say is that once you get to a certain level of wealth, money really is just a means to an end. The fact that they have less doesn't matter if they get their ends, one of which in this case is substantial influence over whatever organization they set up with this money.
If you want to develop a healthy counterpoint to that cynicism, you should consider reading more about Patagonia. Speaking as somebody who's generally cynical about these moves as well.
I'm less read-up on OpenAI. It does feel to me like they've diluted the original non-profit/openness mission to the point of it being an interesting historical quirk, rather than an ongoing, guiding focus.
> you should consider reading more about Patagonia
I'd be open to reading more. I think they are largely a good company but feel like this move was more of a tax dodge and to ensure generational wealth than for altruism. But I am open to being wrong about it.
But everyone involved in OpenAI doesn't give me warm and fuzzy feelings at all. I admit it's largely cynicism until I know more. But there isn't enough time in the day to do real research on every subject and topic that comes up on HN and every other discussion board I participate in, so it's difficult to be knowledgeable about everything and still maintain a life. And even if I were read up on OpenAI, I have zero ability to do anything about it, and it likely won't affect my life in a meaningful way either (and this is true about most everything I read about, so I'm not singling out OpenAI as not being worth my time). So it is a little pointless, or more of a time waste, I admit.
A company with decades of going way, way out of its way to operate as an ethical organization at virtually every level does an additional virtuous thing, and the response is: oh ya, that's a tax dodge.
What do you think that proves? Yes, it does lower your tax bill to give away gains before they're realized. That's because you do not realize your gains. That's called simply "choosing to earn less money."
Tax dodge would imply the purpose is to reduce your tax bill and still see the upside. There's literally no evidence of that. All the upside he/his family sees will continue to be taxed at the normal rate.
If you're Yvon Chouinard, whose goal is to keep Patagonia going in perpetuity as a funding vehicle for environmental activism, what else could you do?
For the record: he divided up the shares into voting and non-voting. He "donated" the dividend-earning shares into a 501c4 foundation whose mission is to invest in grass-roots environmental activism, and "donated" the voting-power shares into a separate trust, whose objectives are to ensure that Patagonia continues on the path he gave as an example for the previous decades and to hold the 501c4 accountable.
The boards of these organizations are composed of the people whom he most trusts to fulfill his vision, a group of people that includes his children.
Knowing the full context of their lives, it's hard to see it as anything other than one of the more simple solutions to a complicated problem.
All this time, I was entirely unaware of this.