
Because of the vesting milestones, the stock price of AMD would go up by such an extent that creating more shares would not dilute the share price.
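The arithmetic behind that claim can be sketched with entirely made-up numbers (the market cap, share count, and milestone grant below are all hypothetical):

```python
# Hypothetical numbers: a vesting milestone issues new shares, but the
# market cap rises enough that per-share value does not fall.
shares_before = 1.0e9   # 1B shares outstanding (hypothetical)
cap_before = 100.0e9    # $100B market cap (hypothetical)
price_before = cap_before / shares_before            # $100 per share

new_shares = 0.1e9      # 100M new shares granted at the milestone
cap_after = 120.0e9     # cap rises 20% on the news (hypothetical)
price_after = cap_after / (shares_before + new_shares)

print(price_before, round(price_after, 2))  # 100.0 109.09
```

If the cap rises faster than the share count, the grant is non-dilutive in per-share terms; if the cap stays flat, the same grant dilutes.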

Obviously, for the stock price to go up, money needs to come from somewhere. It makes sense that this deal would lower the NVidia stock price, so technically it will be NVidia investors who waited too long to respond to this news who end up paying for it. A tax on the mistaken belief that NVidia has a monopoly on putting transistors in a particular configuration, which it obviously doesn't. The rest is just momentum, and this would kill that.

The real winners will be TSMC and ASML



> Obviously, for the stock price to go up money needs to come from somewhere.

Not convinced that's true anymore in the current climate. Big-number announcements and AI pixie dust apparently work too, lol


I mean the potential value comes from the future either way.

If you just print money and nothing else, it inflates and becomes worthless, affecting everyone involved.

If the money turns into technical progress or products then the entire economy grows.


> potential value comes from the future

In a strictly commercial sense, yes, but stock markets decoupled from that long ago. Whether it's wallstreetbets up to shenanigans or a market crash, it has little to do with the actual future and more with sentiment. You'd hope it would revert to fundamentals eventually, but markets sure seem happy not to.


Sentiments about what?

What is the "actual future"? Obviously we can only have feelings about it, not knowledge, right?


The money actually has to be spent on real goods whose supply is inelastic for this to happen. If it's instead saved or used to pay taxes, it won't cause any inflation.

I suppose the increased savings mean there's more potential for the private sector to cause inflation if everyone decides to dissave at once, but that's sort of a last resort.


You can keep inflating imaginary piles of money until someone tries to grab too much of it... Add in loaning against the valuations and you can keep doing it even longer...


> A tax on the mistaken belief that NVidia has a monopoly on putting transistors in a particular configuration, which it obviously doesn't

NVIDIA doesn't place transistors in particular configurations. Foundries do that for them. And it is currently common sense that the software is the moat, not the hardware design.

Good luck changing the ecosystem to use AMD.


> that the software is the moat, not the hardware design.

For inference that’s hardly relevant, though?

For training it's not exactly insurmountable either.


On huge GPU clusters running inference, the utilization of the GPUs is key.

Imagine you have 1 million GPUs running inference at 99% utilization of the system's theoretical performance. That would still mean the equivalent of 10k GPUs sitting idle and drawing power. You could try to identify which ones are idle, but you won't find them, because utilization is a dynamic process: all GPUs are under load, but not all are running at 100% performance, because the interconnects and networking can't deliver data fast enough, so the network becomes the bottleneck.
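The idle-capacity arithmetic above is just the complement of utilization:

```python
total_gpus = 1_000_000
utilization = 0.99  # fraction of theoretical performance actually achieved

# Wasted capacity, expressed as whole idle GPUs. In reality it is spread
# thinly across the fleet by networking stalls, not concentrated in
# 10k identifiable machines, which is the point made above.
idle_gpu_equivalent = total_gpus * (1 - utilization)
print(round(idle_gpu_equivalent))  # 10000
```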

So what you need is a very smart process for routing computation across the whole cluster. This is purely a SW issue, not a HW issue. This is the SW Nvidia has been working on for years and where AMD is years behind.

This is also why Jensen is absolutely right to say that competitors could offer their chips for free: Nvidia's key to TCO performance is the idea of one giant GPU, i.e. SW and networking that allow the highest utilization of a data center. You can't build a single GPU the size of 1 million GPUs, so you have to solve the utilization problem for a network of GPUs.

In the real world, utilization rates are way below 100%, so every percentage point of improved utilization is worth far more than the price of a single GPU. The idea here is that a company providing 2-3x higher utilization can easily charge something like 5x higher pricing per chip and still deliver better TCO.
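A rough sketch of that TCO argument, with entirely made-up prices and a fixed per-GPU lifetime operating cost (power, cooling, rack space) that you pay whether the GPU is doing useful work or not:

```python
# All numbers hypothetical. TCO per unit of useful compute is roughly
# (chip price + lifetime opex) / utilization, since low utilization
# means buying and powering more chips for the same useful output.
opex = 40_000.0                    # lifetime power/cooling/facilities per GPU

price_a, util_a = 50_000.0, 0.75   # 5x chip price, 2.5x utilization
price_b, util_b = 10_000.0, 0.30   # cheap chip, poor cluster utilization

tco_a = (price_a + opex) / util_a  # 120000.0 per unit of useful compute
tco_b = (price_b + opex) / util_b  # ~166667 per unit of useful compute

print(tco_a < tco_b)  # True: the 5x-priced chip still wins on TCO
```

Note the chip price alone would favor vendor B by 5x; it is the opex multiplied by the extra chips that low utilization forces you to buy that flips the comparison.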


GPUs are also used to speed up inference (the math is virtually the same). You think your ChatGPT queries are running on x86 servers?


But do you think that, with NVidia's profit margins, others won't be offering competing chips? Google already has its own, for example.

From that perspective, the notion that NVidia will own this AI future while others such as AMD and Intel stand by would be silly.

I'm already surprised it took this long. The NVidia moat might be software, but not anything that warrants these kinds of margins at this scale. There will likely be strong price competition on hardware for inference.


> You think your ChatGPT queries are running on x86 servers

What makes you think that? Or are all non-NVidia GPUs x86?



