
For the uninitiated, Strix Halo is the same chip as the AMD Ryzen AI Max+ 395, which will be in the Framework Desktop and is starting to show up in some mini PCs as well.

The memory bandwidth on that thing is 200GB/s. That's great compared to most other consumer-level x86 platforms, but quite far off an Nvidia GPU (a 5090 has 1792GB/s; dunno about the pro-level cards) or even Apple's best (the M3 Ultra has 800GB/s).

It certainly seems like a great value. But for memory bandwidth intensive applications like LLMs, it is just barely entering the realm of "good enough".

You're comparing theoretical maximum memory bandwidth. Looking at memory bandwidth alone isn't enough, because with that much bandwidth available you're a lot more likely to end up compute limited. For example, the M1 had so much bandwidth that it couldn't make use of all of it even when fully loaded.

Memory bandwidth puts an upper limit on LLM tokens per second.

At 200GB/s, that upper limit is not very high at all. So it doesn't really matter if the compute is there or not.
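
A rough back-of-the-envelope, assuming the whole set of weights is streamed once per generated token (dense model, batch size 1) and ignoring KV-cache traffic; the model sizes are illustrative:

    # Upper bound on decode tokens/sec from memory bandwidth alone.
    # Assumes every weight is read once per token (dense model, batch
    # size 1) and ignores KV-cache and activation traffic.
    def max_tokens_per_sec(bandwidth_gb_s, model_size_gb):
        return bandwidth_gb_s / model_size_gb

    for name, size_gb in [("7B @ 8-bit", 7), ("70B @ 4-bit", 40), ("104B @ 8-bit", 104)]:
        print(name, round(max_tokens_per_sec(200, size_gb), 1), "tok/s at 200 GB/s")
    # -> roughly 28.6, 5.0, and 1.9 tok/s respectively

The bigger models people actually want to run locally end up in single-digit territory, no matter how much compute sits next to the memory.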


The M1 Max's GPU can only make use of about 90GB/s out of the 400GB/s they advertise/support. If the AMD chip can make better use of its 200GB/s then, as you say, it will manage to have better LLM tokens per second. You can't just look at what has the wider/faster memory bus.

https://www.anandtech.com/show/17024/apple-m1-max-performanc...


This mainly shows that you need to watch out when it comes to unified architectures. The sticker bandwidth might not be what you can get for GPU-only workloads. Fair point. Duly noted.

But my overarching point still stands: LLM inference needs memory bandwidth, and 200GB/s is not very much (especially for the higher-RAM variants).

If the M1 Max actually gets 90GB/s, that just means it's a poor choice for LLM inference.


GPUs have both the bandwidth and the compute. During token generation, no compute is needed. But both Apple silicon and Strix Halo fall on their face during prompt ingestion, due to lack of compute.
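
A rough sketch of why prefill leans on compute: every prompt token goes through the weights in one batch, so arithmetic intensity is high. The 70B size, 4k-token prompt, and 50 TFLOPS sustained figure below are assumptions for illustration, not measurements:

    # Why prompt ingestion (prefill) is compute-heavy: all prompt tokens
    # are pushed through the weights together, so FLOPs dominate.
    n_params      = 70e9    # assumed dense 70B model
    prompt_tokens = 4000    # assumed prompt length
    tflops        = 50      # assumed sustained compute, order of magnitude

    prefill_flops = 2 * n_params * prompt_tokens
    seconds = prefill_flops / (tflops * 1e12)
    print(f"~{prefill_flops/1e15:.1f} PFLOPs, ~{seconds:.0f} s before the first token")
    # -> ~0.6 PFLOPs, ~11 s: weak compute means a long wait to first token.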

Compute (and lots of it) is absolutely needed for generation - tens of billions of FLOPs per token for even the smaller models (7B) - with the compute for larger models scaling proportionally.

Each token requires a forward pass through all transformer layers, involving large matrix multiplications at every step, followed by a final projection to the vocabulary.
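
As a rough rule of thumb for a dense decoder, each parameter takes part in one multiply-accumulate per forward pass, so generation costs on the order of 2 FLOPs per parameter per token (ignoring the attention term that grows with context length):

    # ~2 FLOPs per parameter per generated token for a dense decoder.
    def flops_per_token(n_params):
        return 2 * n_params

    print(f"{flops_per_token(7e9):.1e}")    # ~1.4e10 FLOPs/token for a 7B model
    print(f"{flops_per_token(104e9):.1e}")  # ~2.1e11 FLOPs/token for a 104B model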


Obviously I don't mean literally zero compute. The amount of compute needed scales with the number of parameters, but I have yet to use a model that has so many parameters that token generation becomes compute bound. (Up to 104B for dense models.) During token generation most of the time is spent idle waiting for weights to transfer from memory. The processor is bored out of its mind waiting for more data. Memory bandwidth is the bottleneck.
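
To make that concrete, compare the time to stream the weights against the time to do the math for one token. The bandwidth, model size, and sustained-TFLOPS numbers below are assumptions for illustration:

    # Single-stream decode, batch size 1: memory time vs compute time per token.
    model_gb = 40          # assumed 70B model at ~4-bit
    n_params = 70e9
    bw_gb_s  = 200         # Strix Halo-class memory bandwidth
    tflops   = 50          # assumed sustained compute

    t_memory  = model_gb / bw_gb_s                 # stream every weight once
    t_compute = (2 * n_params) / (tflops * 1e12)   # ~2*N FLOPs per token

    print(f"memory: {t_memory*1000:.0f} ms/token, compute: {t_compute*1000:.1f} ms/token")
    # -> ~200 ms vs ~3 ms: the compute units spend most of each token idle.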

It sounds like you aren’t batching efficiently if you are being bound by memory bandwidth.

That’s right. In the context of Apple silicon and Strix Halo, these use cases don’t involve much batching.
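
For completeness, here's the sketch of why batching changes the picture: the weights are streamed once per step but produce one token per request in the batch, so the aggregate bandwidth-bound throughput scales with batch size (until compute becomes the limit). A single local user is effectively batch size 1:

    # Bandwidth-only throughput bound as a function of batch size.
    def max_tokens_per_sec(bw_gb_s, model_gb, batch_size):
        return batch_size * bw_gb_s / model_gb

    for b in (1, 8, 64):
        print(b, max_tokens_per_sec(200, 40, b), "tok/s aggregate")
    # -> 5.0, 40.0, 320.0 tok/s aggregate; per-request latency doesn't improve,
    #    which is why single-user local inference stays bandwidth-bound.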

Apple is just being stupid, handicapping their own hardware so they can sell the fixed version next year or the year after.

This time-tested Apple strategy is now undermining their AI strategy and potential competitiveness.

tl;dr they could have done 1600GB/s


So their products are so much better, in customer-demand terms, that they don’t need to rush tech out the door?

Whatever story you want to create, if customers are happy year after year then Apple is serving them well.

Maybe not with the same feature-dimension balance you want, or other artificial/wishful balances you might make up for them.

(When Apple drops the ball it is usually painful, painfully obvious, and most often a result of a deliberate and transparent priority tradeoff. No secret switcheroos or sneaky downgrading. See: Mac Pro for years…)


Apple is absolutely fumbling on their AI strategy despite their vertical hardware integration; there is no strategy. It's a known problem inside Apple, not a 4-D chess move to wow everyone with a refined version in 2030.

They could have shipped a B200 too. Obviously there are reasons they don't do that.


