
I find it crazy because you know they developed all this hardware with Linux. Just release the darn code.

I haven’t been at Apple for over a year, but there was a strong population using Apple’s own Arch Linux distro. It’s not like people there don’t use Linux.



Why would they develop their hardware in Linux?


Apple does bringup with Linux because the silicon team doesn’t necessarily want to deal with getting the Darwin team involved during validation.


Right - however it’s not as easy as “release the code.” Those Linux versions used for bring-up are typically an unupstreamable mess. They are hardware engineers, not software devs.

It’s kind of like Corellium’s M1 port over a year ago. It got Linux running on the M1 within weeks of launch, but the code was unusably atrocious.

Edit: Adding to that, Linux does not have a stable driver interface, and regularly changes between releases. That means that messy sloppy code that is unupstreamable will quickly be unable to move forward to future Linux versions without increasingly herculean efforts. You'll be trapped on an old kernel indefinitely if you follow that path - just like the Android situation.

Edit 2: And I'm not exaggerating that Corellium's code was sloppy. Almost none of it ended up in the Asahi Linux project. Also, Corellium basically dumped that code for fun and then quickly gave up trying to keep it up to date.


> Adding to that, Linux does not have a stable driver interface, and regularly changes between releases.

That's unfortunate, but it seems to be a bit of a common theme in open-source to say "you have the source, we don't care if we broke your code, fix it yourself". I suppose that might've been an attempt to discourage closed-source drivers, but...

> You'll be trapped on an old kernel indefinitely if you follow that path - just like the Android situation.

...that didn't have the desired effect either.


If the drivers are upstreamed into the mainline kernel, then they are updated in kind when the interface changes, and Asahi intends to upstream as much as they can.


Once things are upstreamed, anybody introducing interface-breaking changes is responsible for updating all the drivers.

It’s only a burden if you run your own fork, or want closed-source bits. Neither of these are important goals for upstream.


What's the situation with Android?


New phones on old kernels, usually forever.


I would’ve thought the two teams worked very close together and had some skill overlap, so that’s surprising


There’s definitely overlap once the chip is finished and the software teams start writing software for it, but in early development you’re still trying to make sure the hardware isn’t broken, so it doesn’t make sense to involve XNU when you can just test with Linux.


Wouldn't you also need to get Linux working first though? Like why would Linux be easier than XNU?


the short answer is linux follows standards, and is universal within the engineering teams responsible; far less effort to get working. the big thing is that it's easier to hire talent and have them hit the ground running


Is this assumption or known fact? I’d love to read more


hi, i can chime in and say this is a known fact. M1 is not OOTB IBM PC compatible (unlike almost all x86), which means no standard BIOS/UEFI firmware interface for bootloaders like grub etc. internally, there were some significant overhauls to the automated testing and silicon validation/quality control processes during development (well before the pandemic) which resulted in a need for this. apple can validate their silicon, but foundries, factories and other third parties are typically not able to (without added cost, implementation, Apple intellectual property, etc), so developing this was cost beneficial to apple and eases production, and is really just in accordance with existing industry standards for CPU manufacturing. i am just surprised this was made public, the motive i do not know, but i hope this helps explain. it's my first comment on this site.


Asahi Linux isn't an Apple-produced product. It's a third-party reverse-engineered project.


correct, i forgot some context about what i meant by "apple developing". apple allows raw images to be used now; this is what i described being used in the cpu manufacturing process for testing units. this was not always an option. i meant this pseudo-feature being made public, not asahi, which has been public and which apple has been aware of for over a year

see here:

[1]: https://github.com/AsahiLinux/m1n1/commit/0d4fb00ceb8a14f083...

[2]: https://news.ycombinator.com/item?id=29591578


The raw image support has nothing to do with silicon bring-up. That is an option for a userspace tool running in macOS recovery mode. At that point you're well past silicon bring-up. I don't know why people keep conflating those two things... it doesn't make any sense whatsoever. Silicon bring-up would happen via iBoot on completely unlocked chips (no secureboot fuses).

The raw image mode was added for us. Apple has absolutely zero use for such an option.


Internally, Apple uses a totally different mechanism to boot Linux, with the silicon validation firmware stack (that runs OpenFirmware of all things) chainloaded from iBoot.


I've also heard about Linux early bring-up from former Apple employees.

Obviously what I say isn't proof, though.


It also wouldn't surprise me at all that they do this given the sheer complexity of booting XNU on an Apple SoC. XNU _requires_ that it can talk to a bunch of hardware early in boot (which, for what it's worth, is partially why booting XNU on non-Apple platforms is such a pain in the ass!), and unless you want to run the entire SoC on FPGAs for the _entire_ development process, you need a kernel that can make do with just a single lonely CPU core and nothing else. Linux is designed to run on everything, and XNU is designed to run only on an Apple SoC.


Huh, is this actually true?

Darwin already works on ARM, as iOS is Darwin.


>Why would they develop their hardware in Linux?

Because all large scale, big name, commercial HW, CAD and EDA tools are either Windows and/or Linux exclusive.

EDA vendors don't even bother with the tiny (non-existent?) market share of macOS in this space when their entire customer base has been exclusively *nix and Windows for several decades now.


Linux, or some Unix of the old times. Some EDA tools are still available on PowerPC (not sure about the state of SPARC).


Yeah, before x86 took the performance crown in the '90s, most of EDA was done on mainframes or PowerPC workstations running UNIX, including SGI.


In professional environments, x86 really took over only in the mid-2000s, especially with the Opteron. Whether x86 ever took the performance crown is debatable, if you consider the big PowerPCs IBM still sells.


What does EDA stand for?




