
Is everyone now believing that AGI is within reach? This scrambling to have a non-profit-based structure is odd to me. They clearly want to be a for-profit company; is this the threat of Elon talking?


Every letter of "AGI" means different things to different people, and the thing as a whole sometimes means things not found in any of the letters.

We had what I, personally, would count as a "general-purpose AI" already with the original release of ChatGPT… but that made me realise that "generality" is a continuum not a boolean, as it definitely became more general-purpose with multiple modalities, sound and vision not just text, being added. And it's still not "done" yet: while it's more general across academic fields than any human, there's still plenty that most humans can do easily that these models can't — and not just counting letters, until recently they also couldn't (control a hand to) tie shoelaces*.

There's also the question of "what even is intelligence?", where for some questions it just matters what the capabilities are, and for other questions it matters how well it can learn from limited examples: where you have lots of examples, ChatGPT-type models can be economically transformative**; where you don't, the same models *really suck*.

(I've also seen loads of arguments about how much the "artificial" counts, but this is more about whether the origin of the training data makes them fundamentally unethical for copyright reasons).

* 2024, September 12, uses both transformer and diffusion models: https://deepmind.google/discover/blog/advances-in-robot-dext...

** the original OpenAI definition of AGI: "by which we mean highly autonomous systems that outperform humans at most economically valuable work" — found on https://openai.com/charter/ at time of writing


It’s a really idiosyncratic and very subtle, intelligent, calculated imperative called “want yacht.”


Ha, actually, as Elon showed his peers recently, it's "want countries" rather than "want yacht".


Want yacht before world realizes they've run out of ideas


OpenAI has built tools internally that scale not quite infinitely but close enough, and they seem to have reached above-human performance on all tasks, at the cost of being more expensive than hiring a few thousand humans to do it.

I did work around this last year, and there was no limit to how smart you could get a swarm of agents using different base models at the bottom end; at the time this was a completely open question. It's still the case that no one has built an interactive system that _really_ scales. Even the startups, and the off-the-record conversations I've had with people inside these companies, say that they are still using Python across a single data center.

AGI is now no longer a dream but a question of whether we want to:

1). Start building nuclear power plants like it's 1950 and keep going like it's Fallout.

2). Wait and hope that Moore's law keeps applying to GPUs until the cost of something like o3 drops to something affordable, in both dollar terms and watts.
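For a sense of scale on option 2, here is some illustrative arithmetic. It assumes, purely as a rough Moore's-law-style guess and not a forecast, that cost per unit of compute halves on a fixed period; the dollar figures are made up:

```python
import math

def years_until_affordable(current_cost, target_cost, halving_period_years=2.0):
    """Years until a compute cost falls to a target, assuming cost per unit
    of compute halves every `halving_period_years` (a rough assumption)."""
    doublings = math.log2(current_cost / target_cost)
    return doublings * halving_period_years

# Hypothetical: a $3000 o3-style task run dropping to $3 is a 1000x
# reduction, i.e. ~10 doublings, i.e. ~20 years at a 2-year halving period.
print(years_until_affordable(3000.0, 3.0))
```

The takeaway of the sketch is just that waiting on hardware alone implies decade-scale timelines unless the halving period is much shorter than two years.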


> Start building nuclear power plants like it's 1950 and keep going like it's Fallout

Nuclear has a (much) higher levelized cost of energy than solar and wind (even if you include a few hours of battery storage) in many or most parts of the world.

Nuclear has been stagnant for ~two decades. The world has about the same installed nuclear capacity in 2024 as it had in 2004. Not in percent (i.e. “market share”) but in absolute numbers.

If you want energy generation cheap and fast, invest in renewables.
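For readers unfamiliar with the term, LCOE is just discounted lifetime cost divided by discounted lifetime energy output. A minimal sketch, with all inputs illustrative rather than real plant data:

```python
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost of energy in $/MWh:
    (discounted lifetime costs) / (discounted lifetime energy output)."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy
```

Note that because the up-front capex is not discounted but future output is, a higher discount rate raises LCOE, which is one reason capital-heavy plants like nuclear compare poorly against cheap-to-build renewables.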


And yet when data centers need power all day, every day, nuclear is the only solution. Even Bill Gates stopped selling solar once it wasn't for the poors, who probably don't need hot water every day anyway.


As long as you can buy energy, you should choose the cheapest source (Levelized Cost of Energy). Which is renewables in most places.

I don’t think blackouts are very common for grid connected data centers :)?


What is the price of sunlight at midnight?


Most people do not buy from specific sources of production. They buy "from the grid", the constituents of which are a dynamic mix of production sources (solar, wind, hydro, nuclear and fossil where I live).

Wind is strong at night when solar produces nothing. Same in the winter months.

As I said: if the power consumer is grid connected, this does not matter. Example: I have power in the socket even at night time :)

As long as you have uninterrupted power (i.e. as long as connected to the grid), the important metric is mean cost of energy, not capacity factor of the production plant.

For a nuclear sub or a space ship, which is not grid connected, capacity factor is very important. But data centers are usually grid connected.

> when data centers need power all day every day nuclear is the only solution

Do you think data centers running at night are running exclusively on nuclear-generated power :)?

We already have lots of data centers that need power all day every day. Most are just grid connected. It works.
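The "mean cost of energy" point above amounts to a share-weighted average over the grid mix. A toy sketch, with made-up generation shares and $/MWh costs:

```python
def mean_grid_cost(mix):
    """Mean $/MWh paid by a grid-connected consumer: the share-weighted
    average of source costs. `mix` maps source -> (generation_share, $/MWh)."""
    total_share = sum(share for share, _ in mix.values())
    assert abs(total_share - 1.0) < 1e-9, "generation shares must sum to 1"
    return sum(share * cost for share, cost in mix.values())

# Illustrative, made-up numbers only:
grid = {"solar":   (0.30, 40.0),
        "wind":    (0.30, 50.0),
        "hydro":   (0.15, 60.0),
        "nuclear": (0.15, 90.0),
        "fossil":  (0.10, 110.0)}
```

The point of the sketch: an individual source's capacity factor never appears; what the consumer pays depends only on the blended mix the grid delivers hour by hour.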


Varies by latitude, season, and the span and capacity of the HVDC interconnects within the grid.


We don't have AGI until there's code you can plug into a robot and then trust it to watch your kids for you. (This isn't an arbitrary bar; childcare is a huge percentage of labor hours.)


Not that I necessarily disagree on the conclusion, but why should percentage of labor hours constitute a measure for general intelligence?


AGI isn't AGI until it meets some arbitrary criteria you made up. When it does, it's on to the next arbitrary criteria you just made up.


It is quite impressive to test a model on all human tasks in order to know this.

Especially if it takes so much compute to do any one task.

Sounds legit.


If AGI were in reach, why would something so human as money matter to these people? The choice to transition to a more pocket-lining structure is surely a vote of no-confidence in reaching AGI anytime soon.


I think it's entirely a legal dodge to pretend that they aren't gutting the non-profit mission.


I believe e.g. Ilya Sutskever believed AGI is in reach at the founding, and was in earnest about the reasons for the nonprofit. AFAICT the founders who still think that way all left.

It's not that the remainder want nonprofit ownership; it's that they can't legally just jettison it, so they need a story for how altering the deal is actually good.


The more I look, the more I think it's ever more out of reach, and if there's a chance at it, OpenAI doesn't seem to be the one that will deliver it.

To extrapolate: the more I see LLMs and GenAI used, and how they're used, the more severe their upper limits appear, even though the introduction of those tools has been phenomenal.

On business side, OpenAI lost key personnel and seemingly the plot as well.

I think we've all been drinking a bit too much of the hype. It'll all settle down into a wonderful set of (new) tools, but not into AGI. A few more (AI) winters down the road, maybe...


The only proof is in benchmarks and carefully selected demos. What we have is enough AI to do some interesting things, and that's good enough for now. AGI is a fuzzy goal that keeps the AI companies working at an incredible pace.



