How do people feel about the value of the M3 Ultra vs. the M4 Max for general computing, assuming that you max out the RAM on the M4 version of the Studio?
The kinds of workloads that could truly leverage the M2 Ultra over the M2 Max were vanishingly small. When comparing the M3 Ultra to the M4 Max, that number gets even smaller, because the M4 Max will have ~15% higher single-core perf. The insane memory available on the M3 Ultra is its only interesting capability, but it's still not big enough to run the largest open-source LLMs.
Hot take: You can tie yourself in knots trying to spin a yarn about why the M3 Ultra spec is super awesome for some AI use case; meanwhile, you could buy a Mac Mini and something like 200 million GPT-4o tokens for the cost of this machine that can't even run R1.
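The comparison above is back-of-envelope arithmetic; here's a minimal sketch of it, where the hardware prices and the per-million-token API rate are all illustrative assumptions, not quotes (the exact token count you get out depends entirely on which prices you plug in):

```python
# Back-of-envelope: how many API tokens the price gap between a maxed-out
# Studio and a base Mac Mini would buy. All numbers below are assumptions.
STUDIO_COST_USD = 9500.0        # assumed price of a maxed-out M3 Ultra Studio
MINI_COST_USD = 599.0           # assumed price of a base Mac Mini
USD_PER_MILLION_TOKENS = 10.0   # assumed blended API price per 1M tokens

def tokens_for_price_gap(expensive: float, cheap: float, rate: float) -> float:
    """Millions of tokens purchasable with the savings from buying cheap."""
    return (expensive - cheap) / rate

millions = tokens_for_price_gap(STUDIO_COST_USD, MINI_COST_USD,
                                USD_PER_MILLION_TOKENS)
print(f"~{millions:.0f}M tokens for the price difference")
```

With these assumed numbers the savings buy on the order of hundreds of millions of tokens; the commenter's "200 million" figure implies a somewhat higher assumed per-token rate.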
I suspect most people running LLMs locally are unable to use the big cloud models for either legal or ethical reasons. If you could use GPT-4, you would; it's just not that expensive.