
Is this for Macs with NVIDIA cards in them, or Apple Metal/Apple Silicon speaking CUDA?... I can't really tell.

Edit: looks like it's "write once, use everywhere". Write MLX, run it on Linux/CUDA or on Apple Silicon/Metal.





Seems you already found the answer.

I’ll note Apple hasn’t shipped an Nvidia card in a very, very long time. Even in the Mac Pros before Apple Silicon they only ever sold AMD cards.

My understanding from rumors is that they had a falling out over the problems with the dual-GPU MacBook Pros and the quality of Nvidia's drivers.

I have no idea if sticking one in on the PCI bus lets you use it for AI stuff, though.


That particular MBP model had a high rate of GPU failure because it ran too hot.

I imagine the convo between Steve Jobs and Jensen Huang went like this:

S: your GPU is shit

J: your thermal design is shit

S: f u

J: f u too

Apple is the kind of company that holds a grudge for a very long time. Their relationships with suppliers are very one-way: their way or the highway.


The MBPs didn’t run too hot; the Nvidia GPUs used an underfill that stopped providing structural support at temperatures that are relatively normal for GPUs (60–80 °C).

GPU failures due to this also happened on Dell/HP/Sony laptops and some desktop models, as well as in early models of the PS3.

Some reading: https://www.badcaps.net/forum/troubleshooting-hardware-devic...


And the same is true of Nvidia too.

Are you watching The Bear?

I think the ones that failed were the AMD ones, specifically in the old 17-inch MacBook Pro.

They were Nvidia failures due to a manufacturing defect. My 15” 2008 Nvidia 8600-equipped MBP was repaired out of warranty (for free) for this issue.

All MacBook Pros from late 2007 to 2010 used Nvidia GPUs, not AMD.

Search “nvidia 8600 fail” to read more.


I had a 15” MBP, maybe a 2010, that was dual-GPU with an Nvidia chip that was definitely a problem.

D700s dying in the trash can Mac Pros cost me (and many others) a lot of time and money.

S: omg so thin!!1!1!!l!

Won’t work. No driver support.

On Apple Silicon, writing to memory on a PCIe / Thunderbolt device will generate an exception. ARM spec says you're allowed to write to devices as if they were memory but Apple enforces that all writes to external devices go through a device memory mapping[0]. This makes using an external GPU on Apple Silicon[1] way more of a pain in the ass, if not impossible. AFAIK nobody's managed to write an eGPU driver for Apple Silicon, even with Asahi.

[0] https://developer.arm.com/documentation/102376/0200/Device-m...

[1] Raspberry Pi 4's PCIe has the same problem AFAIK


Ewww, that kills out-of-order CPU performance. If it's like ARMv7, it effectively turns each same-page access into its own ordering barrier.

Writing to device memory does not generate an exception.

> "write once, use everywhere"

So my MLX workloads can soon be offloaded to the cloud!?


This is the only strategy humble me can see working for CUDA in MLX.

This is the right answer. Local models will be accelerated by Apple private cloud.

Neither; it is for Linux computers with NVIDIA cards.


