It's a bit early (obviously, hence Alpha), but if the pace and purpose of the project make you excited, consider bumping a few dollars towards the donation links in the article to help these folks afford to get it done. Marcan is a wizard and Povik has been making great progress, but these machines aren't cheap, nor is their time unlimited!
Backing as well, soon enough it will be impossible to find a better machine than a Macbook Air running Linux at this price point!
The processor is outstanding, so is the battery life, and the screen and touchpad are far better than any Dell or Lenovo out there.
Give us the opportunity to get rid of the growing pile of cruft that is Photos, Messages, iCloud, and countless daemons in macOS... and we are good to go!
> so is battery life and the screen and touchpad are incredibly better than any Dell or Lenovo out there.
What? The touchpad I agree with, but the screen? For the MacBook Air (not Pro)? That is still good, but definitely not better than the UHD+ screens you can get from Dell, for instance. And that is not even considering OLED options.
Dell frequently does discounts, but the base price for their UHD panel in the XPS13 is twice the price of a MacBook Air. I had both, and still prefer the Mac hardware by a considerable margin.
I've got a Dell with 4k screen and it's great, but I would eagerly trade it for the M1's superior battery and noiseless cooling. My Dell doesn't last long or stay quiet if you try to do anything, even on windows.
Kudos to the people involved in this on their own time, and to the people supporting them with donations. It's not so nice that a trillion-dollar company needs charity to run an experiment which might prove technically useful to them.
Marcan has been working on this full time, btw! It seems like he was able to fund this work entirely through supporters like me, via Patreon and GitHub Sponsors.
I just installed this - the installation experience was easy and guided you through the tasks that were happening and why. I gave it 100GB of space on my 16GB Mac Mini, and Asahi runs incredibly fast and smooth. Very impressed!
that’s great! I’ve been a long time Fedora user but last year I decided to try NixOS and I couldn’t be more surprised, everything works well and it was easy to install and learn. I’m glad I can try it in my m1 macbook pro!
Probably won't happen, but it's interesting that all the things you would need for an "x-serve next-gen" to be successful have fallen into place now. Would make a pretty power packed and efficient 1U machine.
Server on the M1 Mac Mini is really already there unless you need thunderbolt accessories or GPU acceleration for your server use case. Even the 10G port is working fine now. I was actually just looking into some of the 1u 2x Mac Mini mount options prior to reading this wondering if it'd make sense to do.
The Mac Studio bring-up might be worth waiting for on this use case though; apart from more cores, it crucially has more RAM options. While the SSD is comparatively fast to swap to compared with what most are used to, dealing with the 16 GB RAM limit on the Mini can still be anywhere from a real downer to a non-starter for many server workloads.
I’ve been wanting to set up a couple of small home servers (something like some Raspberry Pis). They’ve been out of stock since forever and very hard to come across, so this project has been on hold for months.
I’m considering just getting a Mac Mini and using that as a home server with everything in it.
Let's be honest, the Raspberry Pi is garbage. It's just insanely weak, and I regret spending around €100 on mine (Raspberry Pi 4, with case). It's got a 32-bit ARM chip, and USB 3 ports that you can't really use (I plugged 2 external HDDs into it and was running an SMB share for Time Machine and media; they just kept dropping. Apparently something power-delivery related; it works at USB 2.0 speeds though.)
Exciting news! While I am mainly a Mac user, Linux is the other operating system I use (usually via a VM on the Mac) and the one alternative operating system I would use if I didn't have a Mac. Being able to boot into Linux directly adds a lot of value. While I don't have an ARM Mac yet, this gives even more incentive to order a Mac Studio :) (I assume support for that is coming soon).
I am wondering why Apple doesn't support this more directly. That would be just a tiny drop out of the marketing budget. But at least there seems to be some good will in the relevant OS department with recent changes from which Asahi Linux benefited.
On a side note, as long as the graphics are CPU rendered, is this rendering multi-threaded and would benefit from the beefier M chips?
I agree with you that it would be good if Apple supported Linux containers directly, like ChromeOS and Windows. I bought a $350 Lenovo Chromebook last year and in addition to touchscreen and included keyboard case and pencil, it also has seamless integration for using Linux containers. I don’t own a Windows machine, but I have heard good reports about their Linux container system.
All that said, I would be delighted if Apple supported an M1 version of VirtualBox.
A super interesting angle with all this is that since these machines are so homogenous and widely-used, once the ice is broken they could quickly become the best-supported modern Linux machines out there, just due to the sheer number of people who would have the exact same hardware (vs the rest of the landscape, which is much more spotty and varied), and the way that would reduce the support burden. Getting the audio drivers working on Apple Silicon machines could be much more impactful than getting them working on the MSI Delta 15 or whatever
Even more so: all Macs have pretty much the same hardware. From the Air to the Studio, the same CPU cores, the same GPU cores, same auxiliary hardware like neural net processors. All enhancements should pretty much affect all Macs running Linux.
At least for now, I think the performance gap between AS and nearly everything else available could be significant enough to move users (and developer hours) to Apple hardware where they might never have considered it before. There's a big difference between "nice build quality that takes some extra work to support" and "workflow-changing performance that takes some extra work to support"
But that matters for Apple's success, not Linux's.
If you are moving from a PC laptop to a Mac you will more likely switch to MacOS than to Linux, and if you were already on Linux before, it doesn't matter for its success that you changed machine.
One big issue with Linux is driver maintenance. If a distro can focus on just one set of hardware (e.g. M1 Macs), it allows the developers to optimize the distro massively. It also makes everything more predictable, and saves a lot of time.
Not to mention that the M1 macbooks have outstanding displays, speakers, mics, etc. Imagine a world where the distro has perfect drivers out of the box, and we don't need to use some subpar nouveau driver (when I say subpar, I'm speaking relative to the official drivers).
Not to mention that most people run Linux on crappy $1000 laptops. An M1 air is around that price range, and destroys anything in that particular market space.
This could be massive for Linux desktops. I feel like the one thing that has been holding it back has been this very issue. It even opens up the door to reverse engineering the Apple hardware and perhaps creating a complete open source integrated system in the future.
It matters, and it could be the thing that rocket blasts Linux into more normies homes.
> [..] I feel like the one thing that has been holding it back has been this very issue.
Allow me to explain how you're wrong. How often have you heard a tech-illiterate normie Average Joe, who just buys Macs or Walmart PCs and doesn't even know that their machine runs macOS or Windows, or that their iPhone runs iOS or that their Samsung phone runs Android (basically your average boomer dad), say "Gee whiz, I really wish I could run Linux on some decent HW, if only Linux driver support for Macs were better"? lol
You're delusional if you think Linux market share on laptops/desktops will explode when the driver situation improves. 99.99% of people who buy Macs will keep using macOS. Most people buy into the Apple ecosystem for the ecosystem, and apart from some geeks/tech workers, they're not gonna start installing Linux on them.
My point is, you're massively overestimating how many Mac users (outside of geek tech bubbles) would actually daily drive Linux on them.
And even in the geek/tech bubbles, most will probably play with Linux on their M1 Macs for a day, and once they see that the trackpad movement is worse than macOS and there's some screen tearing due to some driver/Wayland issue, they'll reboot and revert back to daily driving macOS. Rinse and repeat in a year when it will finally be "the year of Linux".
The "year of Linux" on the desktop/laptop hasn't yet happened not because people couldn't run Linux on their Macs but because of countless of other factors, the biggest of which being that the average consumer doesn't know, want or care to install another OS than the one that came with their machine out of the factory.
Linux gets installed on Mac hardware when it's no longer supported by Mac OS. This can extend the lifetime and value of the hardware well beyond the value of a "stock" OS install.
Yeah, but my point was: how many users of Macs actually install Linux on their Macs?
Is it a significant share of the MacOS market making this switch, or is it basically just a rounding error in the grand scheme of things?
And to correct you, Linux GOT installed on Macs when they had standard PC HW, so Linux kernels had native support for them. But in the current state of Linux on the M1, there are so many things not working[1] due to Apple not using PC HW anymore that I doubt Linux will be suitable for daily driving on M1/ARM Macs anywhere near the level it was on x86 Macs anytime soon. And I doubt Mac users will put up with Linux on their M1 machines if a lot of peripherals that worked on macOS don't work well on Linux.
Currently running Big Sur on a 2014 MacBook Pro.
I have a hard time imagining installing Linux on a machine that is over 7 years old (currently macOS backwards compatibility is about 7 years, depending a bit on your model).
I mean, OK, some people will do that, but at that point the machine is either starting to become obsolete or there is something wrong with it already. Why not just run a macOS that is a couple of versions older, rather than install some Linux that probably can't even sleep properly?
I installed Ubuntu 20.04 onto a 2010 Mac Mini that can only run MacOS Mavericks at best. It crashes less with Ubuntu and meets my home server needs better than it ever could running MacOS.
Lenovo ThinkPads actually have manufacturer support for Linux for quite a long time. (to the degree that there is actually a BIOS menu option for Linux sleep states specifically labeled as such) I've used ThinkPads for the last 10 years and had perfect support running Linux Mint.
Don't get me wrong, it will be nice to have another manufacturer to choose from with excellent Linux support, but as it sounds, they aren't there yet with this new distro. So... I'll keep using ThinkPads for now.
Also I despise glossy screens. So I'll probably never want a macbook anyway.
Distro devs don't maintain drivers, kernel devs do that. That's part of why embedded firmware authors like Linux: you get a ton of drivers for free with minimal effort.
I made a quick guide to install and configure Sway on the MacBook Air. Should be useful for solving some touchpad and scaling issues with the default Sway settings.
https://github.com/jaime10a/SwayM1
It's probably better to swap the command and option keys via the swap_opt_cmd=1 parameter on the hid-apple kernel module; that way they are globally swapped, not just in Sway. fnmode=2 is another useful one, which sets F1-F10 to be function keys by default, so e.g. pressing F10 sends F10, not mute because you forgot to press Fn+F10.
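A minimal sketch of how those hid_apple options can be set persistently (assuming a typical modprobe.d setup; the exact path, and whether you need to regenerate your initramfs, can vary by distro):

    # Swap Cmd/Option globally and make F1-F10 plain function keys by default
    echo 'options hid_apple swap_opt_cmd=1 fnmode=2' | sudo tee /etc/modprobe.d/hid_apple.conf

    # Or try the settings right away via sysfs (may need a module reload or
    # keyboard reconnect to fully apply, depending on kernel version)
    echo 1 | sudo tee /sys/module/hid_apple/parameters/swap_opt_cmd
    echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode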
Wrote this reply from Asahi which I installed earlier this afternoon before the announcement. Works well and the installer was pretty good for alpha. :)
I find it crazy because you know they developed all this hardware with Linux. Just release the darn code.
I haven’t been at Apple for over a year, but there was a strong population using Apple's own Arch Linux distro. It's not like people there don't use Linux.
Right - however it’s not as easy as “release the code.” Those Linux versions used for bring-up are typically an unupstreamable mess. They are hardware engineers, not software devs.
It’s kind of like Corellium's M1 port over a year ago. It got Linux on the M1 within weeks of launch, but the code was unusably atrocious.
Edit: Adding to that, Linux does not have a stable driver interface, and regularly changes between releases. That means that messy sloppy code that is unupstreamable will quickly be unable to move forward to future Linux versions without increasingly herculean efforts. You'll be trapped on an old kernel indefinitely if you follow that path - just like the Android situation.
Edit 2: And I'm not exaggerating that Correlium's code was sloppy. Almost none of it ended up in the Asahi Linux project. Also Corellium basically dumped that code for fun and then quickly gave up trying to keep it up to date.
> Adding to that, Linux does not have a stable driver interface, and regularly changes between releases.
That's unfortunate, but it seems to be a bit of a common theme in open-source to say "you have the source, we don't care if we broke your code, fix it yourself". I suppose that might've been an attempt to discourage closed-source drivers, but...
> You'll be trapped on an old kernel indefinitely if you follow that path - just like the Android situation.
If the drivers are upstreamed into the mainline kernel, then they are updated in kind when the interface changes, and Asahi intends to upstream as much as they can.
There’s definitely overlap once the chip is finished and the software teams start writing software for it, but in early development you're still trying to make sure the hardware isn't broken, so it doesn't make sense to involve XNU when you can just test with Linux.
The short answer is that Linux follows standards and is universal within the engineering teams responsible, so it's far less effort to get working. The big thing is that it's easier to "hire talent and hit the ground running" in regards to progress.
Hi, I can chime in and say this is a known fact. The M1 is not OOTB IBM PC compatible (unlike almost all x86), which means no standard BIOS/UEFI firmware interface for bootloaders like GRUB etc. Internally, there were some significant overhauls to the automated testing and silicon validation/quality control processes during development (well before the pandemic) which resulted in a need for this. Apple can validate their silicon, but foundries, factories and other third parties are typically not able to (without added cost, implementation, Apple intellectual property, etc.), so developing this was cost-beneficial to Apple, eases production, and is really just in accordance with existing industry standards for CPU manufacturing. I am just surprised this was made public; the motive I do not know, but I hope this helps explain. It's my first comment on this site.
Correct, I forgot some context about what I meant by "Apple developing": Apple allows raw images to be used now, and this is what I described being used in the CPU manufacturing process for testing units. This was not always an option. I meant this pseudo-feature being made public, not Asahi, which has been public and which Apple has been aware of for over a year.
The raw image support has nothing to do with silicon bring-up. That is an option for a userspace tool running in macOS recovery mode. At that point you're well past silicon bring-up. I don't know why people keep conflating those two things... it doesn't make any sense whatsoever. Silicon bring-up would happen via iBoot on completely unlocked chips (no secureboot fuses).
The raw image mode was added for us. Apple has absolutely zero use for such an option.
Internally, Apple uses a totally different mechanism to boot Linux, with the silicon validation firmware stack (that runs OpenFirmware of all things) chainloaded from iBoot.
It also wouldn't surprise me at all that they do this given the sheer complexity of booting XNU on an Apple SoC. XNU _requires_ that it can talk to a bunch of hardware early in boot (which, for what it's worth, is partially why booting XNU on non-Apple platforms is such a pain in the ass!), and unless you want to run the entire SoC on FPGAs for the _entire_ development process, you need a kernel that can make do with just a single lonely CPU core and nothing else. Linux is designed to run on everything, and XNU is designed to just run on an Apple SoC.
Because all large scale, big name, commercial HW, CAD and EDA tools are either Windows and/or Linux exclusive.
EDA vendors don't even bother with the tiny (non existent?) market share of MacOS in this space when their entire customer base has been exclusively *nix and Windows for several decades now.
In professional environments, x86 really took over only in the mid-2000s, especially with the Opteron. Whether x86 ever took the performance crown is debatable, if you consider the big PowerPCs IBM still sells.
> At least 53GB of free disk space (Desktop install)
Wowsa. I know disk space is abundant but that is still shocking. I've never used MacOS, but why does it need at least 38 GB free for MacOS update?
> You need 15GB for Asahi Linux Desktop
Wowsa number 2. I'm used to light installs using lightweight window managers or desktops, so that number was a bit shocking. Anyone else prefer a small base onto which you can add software, rather than getting everything and having to uninstall a bunch of software?
Asahi Linux Desktop is a default install. A minimal Arch install is also available, as well as a UEFI-only implementation. There is a fixed ~3GB overhead per 'Other OS' install for OS-specific vendor firmware, but a UEFI install is managed as a single 'Other OS' and can freely boot UEFI payloads from both internal and external drives.
The macOS Software Updater is extremely inefficient with space - you want plenty free or it might get stuck.
As for the 15GB: 2.5GB must be used for the mandatory recovery partition, plus some headroom for it to grow if necessary, because it can’t be easily expanded later.
That leaves 12.5GB for your desktop Linux and any free space for your apps or files. You can use Expert Mode to go smaller with either macOS or Asahi but I’d say they are reasonable guidelines.
Most of that is leaving enough space for macOS updates not to break. The minimal image requires 8GB of actual space by default, 3GB for boot stuff related to macOS and a default minimum 5GB root partition. I can actually go lower there, but I don't think most people would want to have a Linux system smaller than that. You'd quickly run out of space as you start installing stuff.
Excited about this. I like the Mac HW but hate MacOs. I can't delete iTunes, books, etc. Switched development to a Chromebook for a decent Linux experience.
Virtualization is working on Linux, but the macOS story is not that. We run macOS inside a custom VM hypervisor I wrote, on bare metal, passing through almost all the hardware.
macOS is unlikely to run inside a VM on Linux any time soon, because the desktop environment requires a paravirtualized Metal graphics device, which means writing an entire Metal implementation for Linux. You can run the XNU kernel in text mode though, even on non-M1 systems.
Would this hypervisor allow switching between two running instances of Linux and macOS (given additional implementation work, etc.), e.g. by pausing one and handing the passed-through hardware to the other OS?
No, you can't really save/restore the hardware context to pull this off. You'd need actual virtual devices for ~everything at some level. I won't go as far as saying it's impossible a priori, but it'd be a huge undertaking.
Excited to try this out. Still missing a few crucial features for me to fully adopt it as a daily driver but the speed of this project has been incredibly impressive. Can't wait to see how it looks 1 or 2 years down the line.
I'm out of the loop on this one. Is M1 a new architecture, and if so does that mean any distro would have to recompile every package in the repo to target it? How long would that take for a typical distribution?
It’s ARM, but with a unique boot process and other proprietary hardware components (GPU, SSD). Check the previous HN posts and the original kickoff document:
No, M1 is ARM64, so any existing ARM64 compatible packages will work. However, some packages may not be compatible with 16K pages, so you may need the 4K page version of the kernel. Linux (unlike macOS) does not support mixed page sizes (edit: at least with M1) so it will result in reduced performance for the whole system if you need 4K pages.
Linux does support mixed page sizes (that's how huge pages work) and the page size isn't even detectable by userspace other than through sysconf(3) or getpagesize(2), so I'm not sure why a program wouldn't be compatible with 16K pages; after all, regular programs work just fine with THP on Linux.
That's not the same thing. Huge pages are in addition to standard pages. Being in 16K mode means no 4K pages. It's a global switch. Linux can't handle that, the baseline page size is set at compile time and affects a ton of macros and constants used throughout the kernel.
Userspace breaks on 16K pages when it tries to do things like call mmap() with virtual addresses that aren't aligned to 16K. Usually it's allocators doing this when they think the entire world is 4K.
If page size can be chosen per process (such as 4K pages for Rosetta apps) that's not unlike the existing hugepages. Are we sure that it cannot be easily adapted to support both 4k and 16k processes?
Don't 16K pages "exist alongside" 4k pages too, just not within the same virtual address space or (in Linux) VMA? How else are Rosetta apps supposed to work in Mac OS?
That's the issue: 16K pages never exist alongside 4K pages in the same address space half. The CPU has two mode flags, for kernel and userspace respectively. There is no way to mix modes within the same address space half. And changing the page size completely changes the page table structure and boundaries for different walk levels, the huge page size, etc.
Rosetta runs the userspace half in 4K mode, and XNU had to be reworked a lot to support this. Linux could of course be reworked to do something similar on paper, but it's a hugely intrusive change and it'd actually be easier to just make the kernel support 4K/16K pages in a single build first.
Hugepages aren't like that, they actually coexist with normal pages. In general, hugepages are just a pile of contiguous/aligned small pages that the kernel manages as a unit, and it flags them to tell the MMU "I promise these are all one big contiguous chunk so you can optimize it to one larger TLB entry". Depending on the page table structure they might be coalesced to a higher-level page table entry, skipping a page table walk level.
> And changing the page size completely changes the page table structure and boundaries for different walk levels, the huge page size, etc.
This looks like the real issue, so adding support for both "4k" and "16K" address spaces would involve support for multiple page table structures within a single kernel? Still seems very much worth doing since it can likely be extended to support e.g. 64K. And maybe other architectures could reuse that support depending on how their hardware support for multiple page sizes works, e.g. https://en.wikipedia.org/wiki/Page_(computer_memory)#Multipl...
That's implied in how page tables work. If your pages are 16K then all your page levels are going to shift up two bits compared to 4K. Again, these aren't 16K "huge pages", that'd be nice. This is changing the baseline page size.
Lots of things in the kernel count sizes in pages. If your page size can vary, suddenly a lot of kernel constants become boot-time variables. And if it can vary from process to process, suddenly lots of things are per-process. Say you run a 4K process. It wants to map some data from a file. That data is in the page cache in 16K chunks. Now you have one page cache page mapped to anywhere from 1 to 4 4K pages. How do you keep track of that? That wasn't necessary before.
What happens if a 16K process shares memory with a 4K process? If the 4K process sends the 16K process a 4K page, that page can't be mapped at all.
See how this makes everything much more complicated?
> See how this makes everything much more complicated?
Has this stuff been discussed elsewhere so far, e.g. on some linux kernel dev list? I think you've made a good case for not trying to support per-process page size right away, but many of these issues are not entirely new; they came up in some form as part of the transparent-huge-pages feature. It turns out that some hardware support already requires the kernel to understand "higher-order" mappings of contiguous physical pages, and "transparent huge pages" could leverage that support.
From what I've seen just bumping into some of the devs on Twitter, many larger software packages (i.e. Chromium) used a hardcoded page size. AFAIK, Asahi doesn't actually support mixed pages—just 16K—due to some hardware quirks on the M1 platform, so running in 4K compat mode wouldn't even help. This is obviously problematic if you're trying to enforce memory permissions on 4K boundaries, since you can't simply pierce the huge page like you can with THP, as there is no smaller granule to fall back to.
The 16k page size might be temporary in this port, the HW does support 4k pages as well.
In an earlier post they said this about progress on the iommu hw limitation leading to start out with 16k page size: "Sven took on the challenge and now has a patch series that makes Linux’s IOMMU support layer play nicely with hardware that has an IOMMU page size larger than the kernel page size" (https://asahilinux.org/2021/10/progress-report-september-202...)
Isn't it more about not breaking binary compatibility with existing Linux arm64 userland that has been stable for many years, than source level correctness bugs in apps? Or does the userspace ABI explicitly allow for switching between 4k and 16k page size?
(Also apps can always opt-in to bigger pages using the existing mechanisms)
Of course it's also true that 4k is a ridiculously small page size, we've been using the same size for 30-40 years while memory sizes have grown 5-6 decimal orders of magnitude. But from that POV we should now do a bigger bump than 4k->16k - I think the next bigger commonly used page size on ARM Linux has been 64k.
Page size is not set when you compile. At runtime, you can ask the kernel what the page size is, and use that value where appropriate. Some poorly written software hardcodes the page size on a per target basis.
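For what it's worth, a quick way to see what the running kernel reports (just a sketch; this is the same value a C program would get from sysconf(_SC_PAGESIZE) or getpagesize()):

    # Prints 16384 on Asahi's default 16K kernel, 4096 on most other Linux systems
    getconf PAGESIZE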
I know there are some Unix specs that say that page size is reported via runtime interfaces, but I suspect that, e.g. on x86, if you built a Linux kernel that gave userspace a 2MB page size and started returning that from the sysconf(_SC_PAGESIZE) interface, many (most?) apps would break, and shrugging that away with "those apps are just broken" would be a nonstarter. But maybe that just counts as a de facto user expectation and is not documented anywhere in the per-architecture Linux ABIs?
I want to live in the alternate 90's universe where a slim laptop with an alpha processor and Apple badge exists. RISC architecture is going to change everything!
> According to Allen Baum, the StrongARM traces its history to attempts to make a low-power version of the DEC Alpha, which DEC's engineers quickly concluded was not possible. They then became interested in designs dedicated to low-power applications which led them to the ARM family. One of the only major users of the ARM for performance-related products at that time was Apple, whose Newton device was based on the ARM platform. DEC approached Apple wondering if they might be interested in a high-performance ARM, to which the Apple engineers replied "Phhht, yeah. You can't do it, but, yeah, if you could we'd use it." -https://en.wikipedia.org/wiki/StrongARM
And while the IP didn't directly influence Apple Silicon, the experience did:
> P. A. Semi (originally Palo Alto Semiconductor[1]) was an American fabless semiconductor company founded in Santa Clara, California in 2003 by Daniel W. Dobberpuhl,[2][3] who was previously the lead designer for the DEC Alpha 21064 and StrongARM processors. The company employed a 150-person engineering team which included people who had previously worked on processors like Itanium, Opteron and UltraSPARC. Apple Inc acquired P.A. Semi for $278 million in April 2008. - https://en.wikipedia.org/wiki/P.A._Semi
That’s fascinating and dovetails well with Windows NT’s origin story, where internal OS research at DEC that got canceled (the MICA project) ended up with its lead architect Dave Cutler and his team walking out and moving to Microsoft, where they used the knowledge they had learned building MICA to build Windows NT.
And then there is the DEC PDP, which enabled some of the former Multics developers (Ken Thompson & co) to take what they had learned building Multics and apply it to the affordable PDP hardware to build Unix, which then inspired Linux.
DEC may no longer exist, but their shadow reaches a long way into the future.
Also PA Semi's PowerPC chip[1]. Apple passed on doing the "in house CPU" with ingredients in hand, and instead took the painful ppc -> x86 transition.. which was possibly fortunate as the mobile product side was a better place to start growing their own silicon products.
Hah wow, that is too cool to see it all come back around with ARM. Imagine if they kept refining the Newton instead of shelving it until the iPad was really possible.
It's funny but you're not the only one. Digital had some great engineering. Apple had a great understanding of the consumer and UX. Could have been awesome. I also thought that a similar alliance would have been awesome. Imagine if Apple had bought Sun instead of Oracle.
I will say, I find it a little annoying that going to https://alx.sh directly in my browser redirects to asahilinux.org. What if I want to read the script before I run it?
I realize there are other ways to do that (e.g. curl), but why make it difficult?
As you note the browser can be sent something different than curl so reading what it does in a browser doesn't make much sense, especially since the script itself calls curl.
This is indeed a good point. Malicious actors can show you one script in a browser, then serve a more malicious script with curl. I do not understand why Asahi decided they needed a tiny domain just to serve a download script. It means users have to trust more domains with weird characters (was that A L X . S H or A I X . S H ? Oops).
It would seem to be convenient if https://asahilinux.org/ also worked the same way (serve the script if it's curl, show the page if it's a browser) for those that don't want to remember or use the short URL. If you pop that suggestion in the IRC/Matrix room they very well might make it so.
Complained about elsewhere. Complaints are generally because you want more security theater.
Edit: For the curious, you are literally trusting that address for an entire OS install. And you could do a hash comparison, but if the website was hacked to begin with, the hash you compared against could just be changed anyway.
Disappointed to see Marcan et al asking users to blindly copy and paste a curl command from the internet.
“Ready to give it a shot? Make sure to update your macOS to version 12.3 or later, then just pull up a Terminal in macOS and paste in this command:”
Other than that, fantastic work and I look forward to eventually having an M1 to run Asahi on
Edit: I just wish the wording would encourage you to verify the file. I don’t think there’s anything inherently wrong with using shell scripts from the internet. But massive companies let domains expire; why wouldn't Marcan?
If a hacker has the skill to make an evil boot loader, I'm prepared to be impressed. In contrast, creating a shell script to do something evil is literally the definition of a script kiddie.
At the very least, instead of the one liner give me a two liner: one to download to a file, then a second line to execute it.
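A rough sketch of what that could look like using the URL from the announcement (the local file name here is just something I made up; the project may of course change the exact URL or script):

    # Fetch the installer to a local file instead of piping it straight into sh
    curl -fsSL https://alx.sh -o asahi-install.sh
    # Read it (and see what else it downloads) before running it yourself
    less asahi-install.sh
    sh asahi-install.sh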
> If a hacker has the skill to make an evil boot loader, I'm prepared to be impressed.
Prepare to be impressed: the bootloader this installs in order to boot Linux was first built for the capability of transparently running macOS under its hypervisor, intercepting every hardware call and streaming it to remote PCs to enable reverse engineering.
The point is not about running the code, it is about verifying you are running the code you intend to run. That is why I would prefer the instructions to be: download the code, compare against this checksum, then run it. The curl pipe thingy makes it too easy to inject entirely different code.
The site telling you the checksum before you download the script from the site gives you no additional security. If the site is compromised then the site is compromised.
The site's instructions telling you where to find the checksum isn't any different than the site's instructions telling you the checksum.
If you're intentionally verifying what's on the site via a completely separate, independent method (which is what you need to do to actually verify that what's on the site is what it should be), then it doesn't really matter what the instructions on the site say in the first place, as you've just decided you're not relying on them for the verification.
What is still needed but not implemented yet is hashes inside the served script for the content it pulls from the third-party CDN. That said, there is a reason it's alpha.
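A hypothetical sketch of what such pinning could look like inside the served script (the URL, file name, and variable names below are made up for illustration, not the project's actual CDN layout):

    # Placeholder values -- not the real CDN path or hash
    payload_url="https://cdn.example.com/asahi/rootfs.img"
    expected_sha256="<hash shipped inside the script>"

    # Download the payload, then refuse to continue if its hash doesn't match
    curl -fsSL "$payload_url" -o rootfs.img
    echo "${expected_sha256}  rootfs.img" | sha256sum -c - || {
        echo "checksum mismatch, aborting" >&2
        exit 1
    }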
The checksum only proves it came from where it says it did without modification. You still have to trust the original author to be honest about what the code really does.
The problem is that this install is open to exploitation by anyone who manages to access content on that server. A single exploitable security hole in the web server or its setup, and you can easily compromise anyone installing Asahi Linux. Since you are running a shell script, it is sufficient to inject one line into the installation script that also installs malware while installing Linux.
No, it is a reasonable category. There is a huge distinction between things which don't work yet, but where you know you can make them work and even have a schedule for doing so, and things where you either don't yet have an idea how to make them work or cannot attach a schedule yet.
As with any software release: you have the list of things planned for the next release, even if they are not available yet, and you have a list of things which definitely won't be in the next release.
Specifically, things on that list are things that work already, but are disabled or not merged into our release kernels yet because they aren't quite ready for end users.
So they "work" already, just not in the kernel packages I'm building :-)
Same here. I wonder if Proton is tightly bound to x86_64? My first generation M1 is pretty capable as a low to mid-tier games machine but even with stellar Rosetta 2 performance it's ultimately held back by still too few Mac ports in general.
The "FEX" mentioned needing 4k page size is about running x86* games on ARM64 much like Rosetta2, including Proton. Really need the GPU though, the standard desktop is already extremely taxing even though the CPU is extremely fast.
On the desktop side: I always assumed x11/xorg derived GUIs were purely software rendered? Has Wayland finally landed? Sorry I'm lagging behind >10 years of Linux on desktop.
It's a big ball of wax with more details than I'll ever memorize, but I know for a fact that for the last 20 years or more X11 GLX has had "indirect contexts" where the application feeds the OpenGL requests over the display socket and the server renders them (possibly) hardware-accelerated. "direct rendering" is where the X11 client negotiates with the X server to make a direct connection from the application to /dev/whatever to talk directly to the kernel driver of the graphics card to do rendering that bypasses the X server.
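If you're curious whether your own X11 setup is using direct rendering, there's a quick check (assuming the glxinfo tool, usually packaged as mesa-utils or similar, is installed):

    # Reports "direct rendering: Yes" when the client talks to the GPU driver directly
    glxinfo | grep "direct rendering"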
The whole thing with Wayland is just a new generation of coders saying "hey all this pile of new apis on top of old apis on top of older apis spanning back 40 years designed for hardware that no longer exists is insane. We need to start over." They aren't wrong, but they also haven't reached feature parity with the old stuff yet, because doing hardware accelerated 3D graphics over a pipe to a remote host was a pretty amazing feat of technology.
Edit: to be a little more specific, Silicon Graphics company's whole business was accelerated graphics, and they were in business since 1982, and invented most of X11, or at least its extensions.
In regards to acceleration, pretty much all major Linux desktop environments have hardware-accelerated compositing these days, regardless of whether X11 or Wayland is being used.
In regards to Wayland landing or not it has become the default for Ubuntu, Debian, SUSE, RHEL, and many others but a lot of apps still rely on the XWayland compatibility layer to run. The Asahi Desktop install option currently defaults to an X11 Plasma session, I assume for simplicity at this stage, but Wayland works fine as well. Even if you try a desktop environment that requires accelerated rendering and has no native software fallback it'll work via a software rasterizer like LLVMpipe until real GPU acceleration arrives.
From the release note above, I guess FEX would have been the "Rosetta 2" for Asahi, but that seems to also be out of the question ATM, so you are right, no Proton for the time being. I didn't know Proton was WINE (Wine Is Not an Emulator) based, thanks for the info :)
>There is a category of software that will likely never support 16K page sizes: certain emulators and compatibility layers, including FEX.
I would switch to Linux if they could get BT to work. MacBook Air M1 owner here; I need AirPods to work, otherwise it's not a viable switch. This is true for many users. Please focus on that, guys.
For some people, Bluetooth will be the one important thing and everything else that doesn't work isn't so important. Other people will be able to live without bluetooth but can't live without the HDMI port. Other people again may not use external screens but absolutely require hardware acceleration. Others still may not care so much about any of that stuff, but need the camera to work for their meetings. And maybe some are absolutely dependent on external storage accessed through Thunderbolt.
There's _a lot_ of work still. It's not clear that Bluetooth should be the highest priority of what's left. And this whole undertaking is a monumental task which it's amazing that volunteers are even attempting. Saying "I want bluetooth, please focus on that guys" here seems a bit tone-deaf.
It's a Linux project aimed at being a Linux OS to compete with macOS. As per usual with the Linux community, it's "if you want it, roll your own". That's why the year of the Linux desktop will never come.
It's an early alpha version of such a project. Clearly stated at the beginning of the post.
Your comment indicates that you are expecting the community to serve your feature requests, while not even acknowledging the "beneath you" opportunity to contribute yourself. If you are not willing to put the effort into desktop configuration or "rolling your own" features, why are you bothering with Linux? You might be better served by Windows or macOS. But you might find your feature requests fall on deaf ears there too.
In my experience, a Linux desktop is unparalleled in flexibility, productivity and stability, if given some time/effort to configure and adapt to your workflows - it's been years and years "of Linux desktop" already for people who care.
This is ridiculous. Linux, including Bluetooth, works great on regular PCs. In fact with Pipewire, Bluetooth on Linux these days is generally in better shape than on other OSes.
Seriously, my Jabra and Sennheiser BT headphones pick up and work flawlessly with my Linux desktop.
Meanwhile my Sennheiser headphones work only sometimes with my POS Mac, to the point where I have to tell people that if I don't immediately pick up, it's because I'm wasting 15 minutes trying to get my headphones to pair with my Mac.
Yes, and every other mainstream OS is: "No, this is not supported." I don't see how Linux is seen as worse than that answer. If anything, this post proves how rapidly the Linux community moves. Within a year there is working Linux on FULLY proprietary hardware. And not just a funny PoC; no, you can daily drive this thing. Let's see how long it takes for Windows to run on the M1...
My "year of the Linux desktop" was 2018 when for the first time ever I was able to play a AAA video game (Deus Ex Mankind Divided) installed from Steam with nothing but normal GUI mouse clicks.
Sadly there haven't been many AAA games with Linux compatibility (that I wanted to play) since then. Valve is still pushing it though.
I honestly love a lot of Chromebook hardware. It's exactly what I want in a laptop - long battery life, efficient, lightweight, decent enough keyboard.
But it's impossible to get without giving Google money, and without dealing with a massive warning every time you boot - if your model even lets you install something other than Chrome OS.
I should really grab a Pinebook Pro I guess.
And that's without getting into all the privacy issues of students being forced to use Google software/hardware, or the schools spying on students through the Chromebooks.
> But it's impossible to get without giving Google money, and without dealing with a massive warning every time you boot
You can get rid of the warning by physically removing a write-protect screw and replacing the firmware. It's still a very user-hostile feature because it's not just a 'warning' either, but actively prompts you to wipe your existing installation. That sort of thing is very much what you'd expect in a toy, not a machine for serious use.
The screws are long gone; to unlock, you prove ownership by sitting for a few minutes and pressing the power button when told.
Wiping the installation is very much expected. The security model includes protecting data, and of course data has to be wiped when removing security. Data on disk is encrypted with a key stored in the security chip, which destroys the key when security settings are changed.
Android does the same for fastboot unlock.
What's the big deal? If you want to flash UEFI and use a custom OS, why would you want to preserve the ChromeOS partition?
The PineBook Pro was never fully supported by Linux despite shipping with Linux. I had one and sold it because of this. Suspend is still broken in mainline, for example. Direct NVMe booting is buggy. Hardware accelerated video is still in a weird state. Tons of tiny little things like this plague the device with many distros shipping their own fixes that never get upstreamed.
> The PineBook Pro was never fully supported by Linux despite shipping with Linux.
The postmarketOS folks have significant experience with mainlining kernel support for hardware that's only supported in downstream/vendor kernels. Please reach out to them and make sure they know about the issues; this work should be plenty relevant to them because Pine64 is also doing mobile HW.
> But it's impossible to get without giving Google money, and without dealing with a massive warning every time you boot - if you're model even lets you install something other than Chrome OS.
That's not even the most painful part. Full data wipe when switching between security modes is outright user-hostile, and would be widely considered as unacceptable if Microsoft or Apple did that on their general-purpose computing platforms.
> Full data wipe when switching between security modes is outright user-hostile,
I don't think it's user hostile to prevent all security models from being dragged to the lowest common denominator. If they didn't do that, an attacker could just switch to a more insecure mode and go after your data.
It's rather the reverse. How macOS handles it is by asking for user credentials on a security policy downgrade, with the Secure Enclave enforcing that.
(and if you have user creds, you _have access to user data anyway_)
On other platforms, that tying... just doesn't exist.
On modern Android you also need user credentials for OEM unlock; the user needs to enable the developer options menu and allow unlock from there, otherwise the fastboot step won't succeed. (Notably, this preserves the effectiveness of FRP; even "wiping" the hardware and restoring the stock OS install won't allow unlock unless the existing user is verified). But a successful unlock still wipes all data, same as in previous Android versions where the user-side step was not needed.
A lot of US based students were provisioned Chromebooks by their schools.
"Against will" probably only applies to the families with people technical enough to understand what having an enterprise managed Chromebook that their child has to be on for 8 hours a day means.
Google sent me one when I was evaluating google cloud. I ended up using neither.
It was a very low grade HP chromebook that was shamed by the performance of a raspberry pi 3b. I have been thinking of gutting it and installing a pi inside to make a pibook.
If low-level embedded coding "is not different from any other C development", that's not really a positive remark about C. No wonder that software defects are so common in C code.
What are you expecting to be so different? You call the right functions with the right arguments to get it to do what you want. What language do you use that you can program by calling the wrong function with the wrong arguments?
Sounds like an area for a good, motivated technologist who appreciates lambdas and DNs to contribute, if the project is open source and accepting contributions.