This is exciting. So this is using CUDA's unified memory? I wonder how well that works. Is the behavior of unified memory in CUDA actually the same as on Apple silicon? On Apple silicon, as I understand it, the memory is physically shared between GPU and CPU, but for CUDA this is not the case. So when you have some tensor on the CPU, how does it end up on the GPU? That needs a copy somehow. Or is this all hidden by CUDA?

In the absence of hardware unified memory, CUDA will automatically copy data between CPU/GPU when there are page faults.
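A minimal sketch of what that looks like from code (the scale kernel and sizes here are hypothetical, but cudaMallocManaged is the real entry point):

    // demand_paging.cu -- one allocation, visible to both CPU and GPU
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void scale(float *x, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= s;
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr;
        cudaMallocManaged(&x, n * sizeof(float));    // no separate host/device buffers
        for (int i = 0; i < n; ++i) x[i] = 1.0f;     // CPU touch: pages live on the host
        scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n); // GPU touch: pages fault over on demand
        cudaDeviceSynchronize();
        printf("%f\n", x[0]);                        // CPU touch: pages fault back
        cudaFree(x);
        return 0;
    }

There's no explicit cudaMemcpy anywhere; the migrations happen behind the page faults.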

There is also NVLink-C2C support between Nvidia's CPUs and GPUs that doesn't require any copy: CPUs and GPUs directly access each other's memory over a coherent bus. IIRC, they already have 4-CPU + 4-GPU servers available.

Yeah, NCCL is a whole world, and it's not even the only thing involved, but IIRC that's the difference between 8x H100 PCIe and 8x H100 SXM.

This seems like it would be slow…

Matches my experience. It's memory stalls all over the place, aggravated by the fact that (on CUDA 12.3 at least) there wasn't even a prefetcher.
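One mitigation is to hint the migrations yourself with cudaMemPrefetchAsync instead of paying a fault per first touch. A sketch, assuming the managed allocation x of n floats and the scale kernel from the example above:

    int dev = 0;
    cudaGetDevice(&dev);
    // Move the pages to the GPU before the kernel runs...
    cudaMemPrefetchAsync(x, n * sizeof(float), dev, 0);
    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);
    // ...and back toward the CPU before host code reads the results.
    cudaMemPrefetchAsync(x, n * sizeof(float), cudaCpuDeviceId, 0);
    cudaDeviceSynchronize();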

CUDA's Unified Memory uses page migration with on-demand faulting to create the illusion of shared memory, whereas Apple Silicon has true shared physical memory, resulting in different performance characteristics despite the similar programming model.
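The programming model does give you knobs to narrow that gap. A hedged sketch (these are real hints from the CUDA runtime, though whether they help depends on the access pattern; assumes a managed allocation x of bytes bytes and device dev):

    // Data the GPU mostly reads: let both sides keep a copy instead of migrating.
    cudaMemAdvise(x, bytes, cudaMemAdviseSetReadMostly, dev);
    // Or: pin the pages' home on the GPU and map them for the CPU, so
    // occasional host accesses go over the bus instead of migrating pages.
    cudaMemAdvise(x, bytes, cudaMemAdviseSetPreferredLocation, dev);
    cudaMemAdvise(x, bytes, cudaMemAdviseSetAccessedBy, cudaCpuDeviceId);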

This is my guess, but does the higher-end hardware they sell, like the rack-scale AI servers, perhaps have unified memory?

I know standard GPUs don’t.

The patch suggested one of the reasons for it was to make it easy to develop on a Mac and run on a supercomputer. So the hardware with unified memory might be in that class.

They sure do, and it's pretty amazing. One iteration of a vision system I worked on got frames from a camera over a Mellanox NIC that supports RDMA (Rivermax), preprocessed the images using CUDA, did inference on them with TensorRT, and the first time a single byte of the inference pipeline hit the CPU itself was when we were consuming the output.

The physical memory is not unified, but on modern rack-scale Nvidia systems, like Grace Hopper or NVL72, the CPU and the GPU(s) share the same virtual address space and have non-uniform memory access to each other's memory.
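You can query whether you're on such a platform. A small sketch (the device attributes are real CUDA runtime attributes; the interpretation comments are mine):

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int pageable = 0, host_tables = 0, dev = 0;
        // Can the GPU access ordinary malloc'd (pageable) host memory at all?
        cudaDeviceGetAttribute(&pageable, cudaDevAttrPageableMemoryAccess, dev);
        // Does it do so through the host's own page tables? (true on
        // hardware-coherent systems like Grace Hopper over NVLink-C2C)
        cudaDeviceGetAttribute(&host_tables,
            cudaDevAttrPageableMemoryAccessUsesHostPageTables, dev);
        printf("pageable access: %d, via host page tables: %d\n",
               pageable, host_tables);
        return 0;
    }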

Standard GPUs absolutely do. Since CUDA 11, all CUDA cards expose the same feature set, just at differing speeds (based on backing capability). You can absolutely (try to) run CUDA UMA on your 2060, and it will complete the computation.

The servers don't, but the Jetsons do.
