Hacker News

This is the first compelling Mac to me. I've used Macs for a few clients, but my muscle memory is very deeply ingrained for Linux desktops. But with local LLMs finally on the verge of usability, along with sufficient memory... I might need to make the jump!

Wish I could spin up a Linux OS on the hardware though. Not a bright spot for me.



You totally can after a little bit of time waiting for M4 bringup!

https://asahilinux.org

It won't have all the niceties / hardware support of macOS, but it coexists seamlessly with macOS, can handle the GPU/CPU/RAM with no issues, and can provide you a good GNU/Linux environment.


Asahi doesn't work on M3 yet after a year. It's gonna be a bit before M4 support is here.


IIRC one of the major factors holding back M3 support was the lack of an M3 mini for use in their CI environment. Now that there's an M4 mini, hopefully there aren't any obstacles to them adding M4 support.


Why would that matter? You can use a MacBook in CI too?


How? What cloud providers offer it? MacStadium and AWS don't.

I guess you could have a physical MBP in your house and connect it to some bring-your-own-infrastructure CI setup, but most people wouldn't want to do that.


Cloud providers don't seem too relevant to a discussion of CI for kernel and driver development.


Why not?


How do you imagine that a cloud computing platform designed around running Macs with macOS would work for testing an entirely different OS running on bare metal on hardware that doesn't have a BMC, and usefully catching and logging frequent kernel panics and failed boots?

It's a pretty hard problem to partially automate for setups with an engineer in the room. It doesn't sound at all feasible for an unattended data center setup that's designed to host Xcode for compiling apps under macOS.


Getting M4 Mac mini CI might take a while considering how much they've changed the package. It's too tall to go into a 1U, and smaller, so the rack frames need to be redesigned. The power button is now on the bottom, so switch actuation needs finagling.


GitHub's self-hosted runners are as painless as they can get, and the Mac mini in my basement is way faster than their hosted offering.
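For context, pointing a job at a self-hosted runner is just a label change in the workflow file. This is a hypothetical minimal snippet (the repo, labels beyond the defaults, and `make test` step are illustrative assumptions):

```yaml
# Any job can target a registered self-hosted machine (e.g. a Mac mini
# in your basement) via runs-on labels instead of a GitHub-hosted image.
jobs:
  build:
    runs-on: [self-hosted, macOS, ARM64]
    steps:
      - uses: actions/checkout@v4
      - run: make test
```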


I meant using a physical device indeed.


"a little bit of time" is a bit disingenuous given that they haven't even started working on the M3.

(This isn't a dig on the Asahi project btw, I think it's great).


I miss Linux; it respected me in ways that macOS doesn't. But maintaining a sane dev environment on Linux when my co-workers on macOS are committing bash scripts that call brew... I'm glad that I gave up that fight. And yeah, the hardware sure is nice.


IIRC brew supports Linux, but it isn't a package manager I pay attention to outside of some very basic needs. Way too much supply-chain security surface to cover for it!


It does, but I prefer to keep project dependencies bound to that project rather than installing them at wider scope. So I guess it's not that I can't use Linux for work, but that I can't use Linux for work and have it my way. And if I can't have it my way anyway, then I guess Apple's way will suffice.


Off topic, but I’m very interested in local LLMs. Could you point me in the right direction, both hardware specs and models?


In general for local LLMs, the more memory the better: you'll be able to fit larger models in RAM. A faster chip (and, more importantly, higher memory bandwidth) will give you more tokens/second, but if you're just chatting with a human in the loop, most recent M-series Macs can generate tokens faster than you can read them.
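To make "the more memory the better" concrete, here's a back-of-the-envelope sketch. The formula and the 20% overhead factor are rough assumptions, not exact figures: quantized weights take (bits / 8) bytes per parameter, plus headroom for the KV cache and runtime buffers.

```python
def model_ram_gb(params_billions: float, quant_bits: int = 4,
                 overhead: float = 1.2) -> float:
    """Approximate resident memory in GB for a quantized model.

    Weights take (quant_bits / 8) bytes per parameter; the overhead
    factor is a rough allowance for KV cache and runtime buffers.
    """
    weight_bytes = params_billions * 1e9 * quant_bits / 8
    return weight_bytes * overhead / 1e9

# An 8B model at 4-bit quantization fits easily in 16 GB;
# a 70B model at 4-bit wants a 64 GB machine.
print(round(model_ram_gb(8), 1))    # ~4.8
print(round(model_ram_gb(70), 1))   # ~42.0
```

By this estimate, anything in the 7-8B class runs fine on a base 16 GB machine, while 70B-class models are where the high-memory configurations start to pay off.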


That also very much depends on model size. For 70B+ models, the tok/s are still fast enough for real-time chat, but it's not going to generate faster than you can read, even on an Ultra with its insane memory bandwidth.
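A rough way to see why: token generation is approximately memory-bandwidth bound, since each token requires streaming the full set of weights once. The bandwidth figure and model size below are illustrative assumptions, not measurements:

```python
def est_tokens_per_sec(model_gb: float, bandwidth_gbs: float) -> float:
    """Upper bound on tokens/sec, assuming each generated token
    reads every model weight from memory exactly once."""
    return bandwidth_gbs / model_gb

# A 70B model at 4-bit quantization (~35 GB of weights) on a chip
# with ~800 GB/s of memory bandwidth (roughly M2 Ultra class):
print(round(est_tokens_per_sec(35, 800), 1))  # ~22.9
```

Twenty-odd tokens/second is comfortable for chat, but well below fast reading speed, which matches the parent's observation.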



Thanks to both of you!


Have a look at ollama? I think there is a vscode extension to hook into local LLM if you are so inclined: https://ollama.com/blog/continue-code-assistant


Get as much RAM as you can stomach paying for.


macOS virtualization of Linux is very fast and flexible. Their sample code shows it's easy without any kind of service/application: https://developer.apple.com/documentation/virtualization/run...

However, it doesn't support snapshots for Linux, so you need to power down each session.


I've been lightly using ollama on an M1 Max with 64 GB RAM. Not a power user, but enough for code completions.


You can spin up a Unix OS. =) It’s even older than Linux.


NeXTSTEP, which macOS is ultimately based on, is indeed older than Linux (its first release was in 1989). But why does that matter? The commenter presumably said "Linux" for a reason, i.e. they want to use Linux specifically, not just any UNIX-like OS.


Sure. But not everybody. That's how I ended up on a Mac. I needed to develop for Linux servers, and that just sucked on my Windows laptop (I hear it's better now?). So after dual-booting Fedora on my laptop for several months I got a MacBook and I've never looked back.


BSD is fun (not counting macOS in that set), but no, my Unix experiences have universally been on legacy hardware, oversubscribed and undermaintained. Not my favorite place to spend any time.


Check out Asahi Linux.



