> Native AOT is different. It’s an evolution of CoreRT, which itself was an evolution of .NET Native, and it’s entirely free of a JIT. The binary that results from publishing a build is a completely standalone executable in the target platform’s platform-specific file format (e.g. COFF on Windows, ELF on Linux, Mach-O on macOS) with no external dependencies other than ones standard to that platform (e.g. libc). And it’s entirely native: no IL in sight, no JIT, no nothing. All required code is compiled and/or linked in to the executable, including the same GC that’s used with standard .NET apps and services, and a minimal runtime that provides services around threading and the like.
Of course it does have some downsides:
> It also brings limitations: no JIT means no dynamic loading of arbitrary assemblies (e.g. Assembly.LoadFile) and no reflection emit (e.g. DynamicMethod), everything compiled and linked in to the app means the more functionality that’s used (or might be used) the larger is your deployment, etc. Even with those limitations, for a certain class of application, Native AOT is an incredibly exciting and welcome addition to .NET 7.
I wonder how things like ASP .NET will run with Native AOT in the future.
We've been experimenting with NativeAOT for years with ASP.NET Core (which does runtime code generation all over the place). The most promising prototypes thus far are:
They are source-generated versions of what ASP.NET does today (in both MVC and minimal APIs). There are some ergonomic challenges with source generators that we'll likely be working through over the coming years, so don't expect magic. Also, it's highly unlikely that ASP.NET Core will ever stop depending on any form of reflection. Luckily, "statically described" reflection generally works fine with NativeAOT.
Things like configuration binding, DI, logging, MVC, and JSON serialization all rely on some form of reflection today, and it will be non-trivial to remove all of it, but we can get pretty far with NativeAOT if we accept some of the constraints.
Thanks for telling me :). On a serious note though, anything can work with source generators, but it doesn't match the style of coding we'd like (moving everything to be declarative isn't the path we want to go down for certain APIs). Also, source generators don't compose, so any source generation we'd want to use would need to take advantage of the JSON source generator (if we wanted to keep things NativeAOT-safe). Right now most of the APIs you use in ASP.NET Core are imperative, and source generators cannot change the call site, so you need to resort to method declarations and attributes everywhere.
That's not an optimization, that's a programming model change.
Well, in System.Text.Json the API stays the same and you "only" need to pass an auto-generated metadata object. It's basically the same API; you just pass another object instead of a generic parameter. But yeah, it's a change.
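For concreteness, here's roughly what that call-site change looks like with the System.Text.Json source generator (the `Person` and `AppJsonContext` names are made up for this sketch):

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public record Person(string Name, int Age);

// The source generator fills in this partial class with compile-time
// serialization metadata for Person (no runtime reflection needed).
[JsonSerializable(typeof(Person))]
public partial class AppJsonContext : JsonSerializerContext { }

public static class Demo
{
    public static void Main()
    {
        var person = new Person("Ada", 36);

        // Reflection-based overload (fine under the JIT, unfriendly to AOT/trimming):
        //   string json = JsonSerializer.Serialize(person);

        // Source-generated overload: same API shape, one extra argument.
        string json = JsonSerializer.Serialize(person, AppJsonContext.Default.Person);
        Console.WriteLine(json); // {"Name":"Ada","Age":36}
    }
}
```

The serializer reads everything it needs from the generated metadata object, which is what makes the call safe to trim and AOT-compile.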
Java is doing something similar with the GraalVM Native Image thing. In a way it's funny: a few years back, so many people were claiming that the heavy CPU/memory usage of JIT-based platforms would be even more of a non-issue in the cloud, because one can scale as much as needed on demand.
And now I see AOT/Slimmed Runtime/Compact packaging are happening largely because of cloud deployments.
Seems cloud bills are making an impact in places where enterprise cloud architects were running amok with micro-service deployments for everything.
Actually I think it is more about competitive pressure from languages like Go and Rust than anything else.
I have been using Java and .NET languages for distributed computing for the last two decades, and JIT has always been good enough.
By the way, Google rolled back their AOT compiler introduced in Android 5, and since Android 7 it uses a mix of a highly optimized interpreter written in Assembly, a JIT compiler with PGO feedback, and, when the device is idle, those PGO profiles are used to AOT-compile only the application flows that matter. On more recent Android versions, those PGO profiles are shared across devices via the Play Store.
On the .NET front, I think the team has finally decided to confront the whole "C++ rulez" attitude of WinDev, especially after the Singularity and Midori projects failed to change their minds.
Nowadays Go is working on a PGO solution, so there is that.
I don't disagree, but I kinda doubt that new upstart languages with single-digit market share and mindshare in the enterprise space would force the .NET/JVM behemoths to do anything. Forget Go; the places I work wouldn't know a single new thing beyond Java 1.7, or at the latest Java 1.8. But they have stood up a dozen cloud teams doing every buzzword you can hear about cloud.
So unless the finance guys ask tough questions about the rising Amazon bill, IT wouldn't care if their Spring Boot crapola takes 2GB of RAM or 32GB.
The new upstart languages are where the younger generations are; that is why you see .NET making all those changes to make rolling a Hello World website as easy as doing it in Go, with global usings, single-file code, simplified namespaces, and naturally AOT.
I do extensive profiling of managed apps and while the JIT does eat a measurable amount of CPU time, it's really not much. And at least on .NET, you can manually ask the runtime to JIT methods, so you can go 'okay, it's startup time, I'm going to spin up a pool of threads and use them to JIT all my code' without blocking execution - for a game I'm working on it takes around 2 seconds to warm ~4000 methods while the other threads are loading textures and compiling shaders. At the end of that the CPU usage isn't a problem and you're mostly dealing with the memory used by your jitcode. I don't know how much of a problem jitcode memory usage actually is in production these days, but I can't imagine it would be that bad unless you're spinning up 64 entirely separate processes to handle work for some reason (in which case I would ask: Why? That's not how managed runtimes are meant to be used.)
I mostly do JIT warming to avoid having 1-5ms pauses on first use of a complex method (which is something that could be addressed via an interpreter + background JIT if warming isn't feasible.)
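That warm-up trick can be sketched with `RuntimeHelpers.PrepareMethod`; the `JitWarmer` class and the choice of which types to warm are my own illustrative assumptions, not the game's actual code:

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

static class JitWarmer
{
    // Pre-JIT every method on the given types from a worker thread, so the
    // first real call doesn't pay the 1-5ms first-use compilation pause.
    public static Task WarmAsync(params Type[] types) => Task.Run(() =>
    {
        foreach (var type in types)
        {
            // Open generic types have no concrete methods to compile yet.
            if (type.ContainsGenericParameters)
                continue;

            foreach (var method in type.GetMethods(
                BindingFlags.Public | BindingFlags.NonPublic |
                BindingFlags.Instance | BindingFlags.Static |
                BindingFlags.DeclaredOnly))
            {
                // Abstract methods and open generic methods likewise
                // have no single body to compile.
                if (method.IsAbstract || method.ContainsGenericParameters)
                    continue;
                RuntimeHelpers.PrepareMethod(method.MethodHandle);
            }
        }
    });
}
```

Kick it off at startup (e.g. `var warm = JitWarmer.WarmAsync(typeof(Physics), typeof(Renderer));`, with hypothetical type names) and await it before the main loop, while other threads load textures and compile shaders.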
The slimmed runtime and compact packaging stuff is definitely attractive in terms of faster deploys and lower storage requirements, however, because the cost of needing to deploy 1gb+ worth of stuff to your machines multiple times a day is still a pain. I don't actually know if AOT is an improvement there though, since in some cases the IL that feeds the JIT can be smaller than the AOT output (especially if the AOT compiler is having to pre-generate lots of generic instances that may never get used.) You also have to ship debug information with AOT that the JIT could generate on demand instead.
I've never understood the point of JITing. Just compile the thing once for your target architecture and you're done. No more spawning threads to do the same thing over and over again. I'm glad that languages like Go and Rust are bringing back the lost simplicity of yesteryear's dinosaur languages. Life can be so easy.
Reflection in C# is a thing; it isn't in either Go or Rust. I've read that some kind of compiler shenanigans can get you something that resembles reflection in C++.
I love reflection, the fact that libraries can look at your types is really really cool.
Not saying it can't be done with a compiled language but I don't see it anywhere.
Being able to load a shared library, search it for classes implementing interfaces, instantiate them and call their methods is pretty slick too.
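That pattern looks roughly like this in C# (`IPlugin` and the host class are hypothetical names for the sketch); note it is exactly the kind of dynamic loading that Native AOT's no-`Assembly.LoadFile` limitation rules out:

```csharp
using System;
using System.Linq;
using System.Reflection;

public interface IPlugin { void Run(); }

public static class PluginHost
{
    public static void LoadAndRunAll(string assemblyPath)
    {
        // Load a shared library at runtime; this is the Assembly.Load* family
        // that a JIT makes possible and that Native AOT forbids.
        Assembly assembly = Assembly.LoadFrom(assemblyPath);

        // Search it for concrete classes implementing the interface...
        var pluginTypes = assembly.GetTypes()
            .Where(t => typeof(IPlugin).IsAssignableFrom(t)
                        && t.IsClass && !t.IsAbstract);

        // ...then instantiate them and call their methods, with no
        // compile-time knowledge of the concrete types.
        foreach (var type in pluginTypes)
        {
            var plugin = (IPlugin)Activator.CreateInstance(type)!;
            plugin.Run();
        }
    }
}
```

The plugin assembly only needs to reference the interface; the host discovers everything else at runtime.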
Does that cover all the mentioned use cases though? They're discussing source generation in this thread, which is not even close to the same thing (for certain use cases).
IBM OS/400 binaries (nowadays IBM i) from 1988 can be executed on a completely different architecture without any changes, even when the source code possibly doesn't exist anymore, while taking full advantage of IBM Power10 in its latest iteration.
Same applies to any other bytecode format making use of dynamic compilers.
Jitting is useful for things like dynamic loading/dispatch and simplifying distribution of libraries around it.
For better or worse, it's worth remembering the 'cruft' and legacy around .NET's architecture designs and choices, even if some of the assumptions were wrong.
1. .NET seems to have been originally built under a heavy assumption that the framework was installed on a device. This could be a huge advantage for smaller/embedded devices of the time: if you were working 'close to framework' (as was the style at the time; NuGet wasn't a thing for the better part of a decade after .NET was introduced) you could get a surprisingly compact deployment object. For many of the 'in-house' apps I wrote with ClickOnce, the cert validation seemed to take longer than the network transfer on upgrade.
1.1: VERY worth noting, the idea that .NET browser plugins would compete with Java plugins; JIT is probably important to do this sanely.
2. Speaking of assumptions, I'm willing to bet that around the design time of many .NET particulars, they were still reeling from the pains of the Win16->Win32 breakages (Win95, Win98), Win16->NT breakages (NT4.0, 2K), Win32/WinNT breakages (XP), and x86/x64 breakages (Server 2003). JIT, alongside a framework API with strongly contracted public members [^0], allows problems to be fixed without vendors necessarily having to provide updated libraries. [^1]
3. CAS (Code Access Security) and 'Partially Trusted code'. This somewhat ties back into 1.1, because the idea was that code had to have a certain level of 'verification' around what it was doing, what libraries it was calling, and whether that code -should- be allowed to execute such on a box. I'm thinking of scenarios where an 'intranet' app could have access to a local MSSQL database and open/write files, but an 'internet' app could not. Implementing such via a JIT is far simpler.
4. The combination of generics and reflection complicates things. To be more specific, the way .NET handles generic methods, basically all reference types (objects) will share the code, as the size of the generic parameter(s) is going to be the size of a reference on the target platform. But in the case of a value type (struct), I'm 99% sure the runtime requires specific code for every different struct[^2] type that is slotted into one of the generic parameters of a class/method.
5. Simpler dynamic-ish code generation. One of the most lovely (if unloved/ignored) APIs in .NET is the Expressions namespace.
[^0] By this I mean there is a -strong- slant towards .NET APIs retaining existing behavior even if it is wrong/subpar, if that behavior is part of the accepted contract.
[^1] How well this worked out in practice, probably not so much. I remember my first experience with NET 1.0/1.1 versioning/deployment hell and going back to C++ for a few years. It wasn't until my apps had semi-complex UIs (i.e. not console and multi-window) alongside the renaissance of 3.5 that .NET became a more efficient workflow than the sometimes daunting MFC/ATL C++ winforms workflow.
[^2] AFAIK the runtime cannot share struct implementations between two structs with the same layout (I -think- Go generics can, based on GCShape, but maybe not; I'm not a gopher), but I would be delighted to hear otherwise.
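To make point 5 concrete, here's a small sketch of the System.Linq.Expressions API: build the function `x => x * x + 1` as a tree at runtime and compile it to a delegate.

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Build the expression tree for: x => x * x + 1
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression body = Expression.Add(
            Expression.Multiply(x, x),
            Expression.Constant(1));

        // Compile() turns the tree into a callable delegate at runtime.
        // On a JIT runtime this emits machine code; in JIT-less environments
        // an interpreter can evaluate the tree instead.
        Func<int, int> squarePlusOne =
            Expression.Lambda<Func<int, int>>(body, x).Compile();

        Console.WriteLine(squarePlusOne(5)); // 26
    }
}
```

The same trees can also be inspected rather than compiled, which is how libraries like LINQ providers translate C# lambdas into, say, SQL.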
I think typical ASP.NET applications likely won't use AOT in the near future. There is quite some "magic" in ASP.NET that I suspect won't work out of the box with AOT because it is heavily reflection-based. And while there are solutions for cases like JSON serialization, as far as I understand they require writing different code right now, so it's not automatic.
AOT is much, much more interesting for cases where startup time matters, and ASP.NET usually isn't such a case. But I can imagine that it will be much more useful and easier to implement for smaller, focused tools that don't use that much reflection-based framework code. CLI tools might be the ideal case for trying this out.
As someone writing a .NET-based API hosted on AWS Lambda, I assure you, startup time matters a ton. I spent a bunch of time getting cold start time down to a semi-reasonable number, but there's still a ton of room for improvement. I for one am very much looking forward to progress on AOT Native.
The "typical" in my comment was meant to take care of that part, I probably should have mentioned this explicitly. AWS Lambda is of course a case where startup time matters a lot, but it still leaves the common problem of ASP.NET that it was designed around a lot of reflection. My understanding is that this simply won't work with AOT out of the box unless you adapt all the places where you use reflection-based code. For example you'd need to use the source generator versions of JSON serialization and the EF Core DbContext in this case, with all limitations those have. But I'm not sure my understanding here is entirely correct or complete.
I think you are correct. The DI container and JSON serialisation are two bigger areas that I suspect would need attention, but as you say source generation solves the latter, and I think non-reflection DI is on the way if not already here. I guess you would also need to wait for any third-party assemblies you depend on to drop reflection... still a bit of a wait, I think, for my use case.
I don't know about ASP.NET, but the post says that Reflection.Emit is not available. Reflection.Emit is a namespace used for runtime code generation; I would assume most other parts of reflection are available, like with previous iterations of AOT support in .NET.
It is theoretically possible to have Reflection.Emit and DynamicMethod for less critical cases by using an interpreter - this happens in certain scenarios already.
As someone writing a .NET-based API hosted on AWS Lambda, your case is quite different to a "typical ASP.NET hosted application", that can run for days between restarts.
I would like to host a .NET-based API on Google Cloud Run and Azure Container Apps, which often go to sleep after about 10 minutes without a request, so cold start times do matter quite a bit. I've done pretty extensive testing and see ~4-7 second cold starts with classic non-AOT MVC versus ~500ms with natively compiled Go/.NET.
The reality of the cloud is that sleep and cold starts are very common, and in fact necessary to run efficient systems. Half a second is acceptable but 4+ seconds is not, and so AOT is very important for future apps.
FWIW the problems with cold starts could also be solved at the platform layer. App startup is what typically dominates cold start times and there's creative ways to potentially frontload that, serialize the state, and treat all invocations as "warm".
The good news is that .NET 7 AOT supports console apps, so Lambdas can already be Native AOT. JSON serialization was a concern of mine, but it was surprisingly easy to get working with very minor changes at the call site. If I were building Lambdas, I would move ASAP.
> I wonder how things like ASP .NET will run with Native AOT in the future.
Let me give an example of an ASP.NET app lifecycle: an instance is launched, goes through startup code once. When it reports itself healthy, it is put into the Load balancer and then starts handling requests. Code in these paths is executed anywhere from occasionally to 1000s of times per second. After around 24 hours of this, it is shut down and restarted automatically.
So a compiler micro-optimising startup code to shave milliseconds off it is not interesting at all; it's only run once a day. Startup can take whole seconds and it makes little difference; the instance is ready when it's done. AOT in general isn't that important, but automatic tiered compilation based on usage data is very nice.
Running an ASP.NET app in AWS Lambda is just a few lines of code. However, all of a sudden startup time becomes important for both performance and cost.
These investments by Microsoft and others[1] allow .NET to remain relevant and viable for modern use cases.
Serverless is great, but if you want to go that route you should be aware that it is in no way a typical hosted ASP.NET app, and while you can "run an ASP.NET app in AWS Lambda" with little code, there are better ways to design a Lambda.
Can you elaborate on “is in no way a typical hosted ASP.NET app”?
I get that the machinery under the hood is different (i.e. the Kestrel web server may not get used). However, we typically don't care about those details. Our ASP.NET code runs in 3 separate places (containers, servers, Lambda) and the only difference between all 3 is a single entry point file.
Do you mean because Lambda is only serving one request at a time and has a more ephemeral host process lifetime?
Anyone have any interest in a small C# AOT micro web framework that cold-starts fast (~500ms, vs ~4s for non-AOT MVC) on serverless containers like Google Cloud Run?
https://devblogs.microsoft.com/dotnet/performance_improvemen...