The premise of the Rust half of the article makes no sense. 'Rust has more rules' is not true, references have more rules, and in unsafe code, you aren't using them very much. Comparing the rules that apply to references in Rust to the rules that apply to raw pointers in C and Zig is nonsensical: if you transliterate the Zig code to Rust, no new problems appear! 'What if I create a reference' is easily solved by not creating a reference, just as you are not creating a reference in the Zig code (because Zig doesn't have them). Every problem on Rust's end was just trying to use references instead of pointers. The article then trumpets Zig's tracing allocator as a unique benefit, even though you can do the exact same thing in Rust.
> What if I create a reference is easily solved by not creating a reference
Except it isn't easy. Rust implicitly creates references constantly, and in unsafe code it may be UB for those to exist. That's why Rust has all the crazy offsetof macros and such: you can't reliably do the obvious &raw (*struct_pointer).field
This is not correct -- dereferencing a raw pointer does not implicitly create a reference. That code snippet is perfectly valid as long as the underlying pointer arithmetic is valid.
See the Rust reference on "Behaviors considered undefined": [0]
> Accessing (loading from or storing to) a place that is dangling or based on a misaligned pointer.
No problem here, dereferencing a raw pointer creates a place expression but we do not load or store from it.
> Performing a place projection that violates the requirements of in-bounds pointer arithmetic [1]. A place projection is a field expression, a tuple index expression, or an array/slice index expression.
Alright, so we're doing place projection. Let's check the link to see the requirements for in-bounds pointer arithmetic:
> The offset in bytes, computed on mathematical integers (without “wrapping around”), must fit in an isize.
> If the computed offset is non-zero, then [the base pointer] must be derived from a pointer to some allocated object, and the entire memory range between [the base pointer] and the result must be in bounds of that allocated object. In particular, this range must not “wrap around” the edge of the address space.
Okay, so your "obvious" snippet is sound as long as struct_pointer points to valid memory that's at least big enough to contain the field offset you're accessing. There are no rules here about aliasing, or even alignment.
These semantics are more or less the same as the equivalent C code. They might even be slightly more relaxed, since I don't recall if C has requirements on alignment or strict aliasing when doing pointer projections. I have no idea what Zig's requirements are -- I spent about half an hour searching and couldn't find any documentation or discussion on whether and when pointer projection, or even just pointer arithmetic, can lead to undefined behavior in Zig. I'd be pretty uncomfortable writing pointer-heavy code in Zig when I can't find documentation or even a GitHub issue on whether basic pointer operations can trigger UB or not.
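For concreteness, here's a minimal sketch of the snippet under discussion (the struct and field names are invented for illustration). `std::ptr::addr_of!` -- equivalently, the newer `&raw const` syntax -- projects the place without ever materializing a reference:

```rust
use std::ptr::addr_of;

// Illustrative struct; the names are made up for this sketch.
struct Header {
    id: u32,
    len: u32,
}

// Take the address of a field through a raw pointer. No reference is ever
// created: `(*p).len` is only a place expression, and `addr_of!` turns it
// straight into a raw pointer. Per the Reference's rules quoted above, this
// is sound whenever `p` points into an allocation big enough to contain the
// field's offset.
fn len_ptr(p: *const Header) -> *const u32 {
    unsafe { addr_of!((*p).len) }
}

fn main() {
    let h = Header { id: 7, len: 42 };
    let p: *const Header = &h;
    let lp = len_ptr(p);
    // Loading through the projected pointer is fine here because `h` is live.
    assert_eq!(unsafe { *lp }, 42);
    let _ = h.id; // silence the unused-field warning
}
```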
1) Lack of a garbage collector does not make your program faster, it makes the performance more easily predictable in terms of latency.
It also makes it friendlier to memory bandwidth, CPU caches, and overall memory usage, which in turn results in better performance in real-world scenarios vs synthetic/toy benchmarks. This is particularly noticeable in constrained environments (like embedded systems).
2) Zig was never about memory safety, and it is not a memory-safe language.
It might have better plumbing than C, it might add better ways to implement and abstract concepts... but so does C++, for instance.
The more striking differences between C++ and Zig, IMHO, are syntax and the ability to use the same language instead of a separate one to do meta-programming (templates vs comptime).
3) Aliasing enforcement in Rust is there for a reason.
Two examples I quickly found on Zig's issue tracker:
>1) Lack of a garbage collector does not make your program faster, it makes the performance more easily predictable in terms of latency.
AFAIK, some time ago unpredictable GC behavior was the reason why Discord migrated from Golang to Rust. Also, predictable latency is important for real-time applications.
> Writing a substantial amount of unsafe Rust really sucks the beauty out of the language.
I really disagree with this take, given the examples of unsafe code the article chose to exhibit. Trying to write C in Rust totally goes against the grain of Rust's ergonomics, which are oriented towards discouraging unsafe code patterns. It should be a pain to attempt something dangerous, and it should feel really easy to write safe code, because this setup naturally encourages the vast majority of code to be written safely.
This sounds like just prescriptive orthodoxy. In fact lots of applications, including big chunks of Rust's own standard library, need unsafe to correctly express their algorithms (doubly linked lists and balanced trees are famous examples of things that can't be borrow checked, the use case in question appears to be a collected heap which would likewise need to live in the same space).
Deciding that these areas "should be a pain" seems tantamount to saying Rust shouldn't be used for them. Which is sort of the article's point.
On Burroughs systems, still sold as ClearPath MCP, to run applications compiled with unsafe code blocks (NEWP is one of the first systems languages with an unsafe concept), the admin has to enable their execution in the first place.
Until then the executables are tainted and will trigger an error instead of being executed.
Only the standard library, and OS system tools, as Trusted Computing Base, are excepted from this.
This is the whole point of systems security: everything is unsafe down to the silicon, and the goal is reducing the amount of Trusted Computing Base that has to be manually validated and certified.
Rust's approach is more like a "keep off the grass sign" than a barbed wire fence. If Rust really wasn't meant to be used for these things, it wouldn't have the unsafe keyword at all. But instead of forbidding those things entirely, it just makes them uncomfortable. This nudges the programmer to question if they really need to be on the grass to accomplish their goal, and helps subconsciously steer them onto the safer path.
> instead of forbidding those things entirely, it just makes them uncomfortable
The details in the linked articles rise well above merely "uncomfortable" though. If your response was appropriate, we wouldn't be here discussing them. In particular the interaction between the borrow and aliasing analysis is something I hadn't thought of before, and seems terrifying.
This gets back to my earlier point. The Rust community seems to have abandoned introspection for orthodoxy. You say something interesting about it, and the response comes back as a canon incantation about why Rust is always right. But there are interesting criticisms to be made, people just don't want to hear them it seems.
(In particular the bit about "you can't write dlist in safe rust" routinely seems to surprise people, even reasonably expert Rust developers, I talk to in real life. The flock's dedication to the borrow checker apparently means that no one is allowed to discuss or think about the places where it can't work!)
>In particular the bit about "you can't write dlist in safe rust" routinely seems to surprise people, even reasonably expert Rust developers, I talk to in real life.
Actually you can write a dlist in safe Rust, but with some overhead in comparison with C or C++.
Agreed. This sort of handwavy genuflection to some idealized One True Rust Way, instead of addressing (seemingly) valid complaints in a direct manner, is perhaps the biggest thing pushing me away from interest in Rust.
Apart from syntax sugar, which I agree should be improved, the argument seems to boil down to: my language doesn't enforce a borrow checker, so it is UB-safe. Which in my book is just not true.
The allocator example the user gives is not really good, since if he had another try in between the alloc/free I assume the memory would not be freed; Drop semantics are better in that regard. You can also set a global allocator that checks for missing frees in Rust, though probably not as nice as a localized one.
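A minimal sketch of that global-allocator approach (the `CountingAlloc` name and the counter are ours, not any standard API): wrap the system allocator and track outstanding allocations, which is roughly what Zig's testing allocator does for you.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicIsize, Ordering};

// Number of allocations that have not yet been freed.
static LIVE: AtomicIsize = AtomicIsize::new(0);

// Hypothetical leak-counting allocator: it delegates to the system
// allocator and just tracks how many allocations are still outstanding.
struct CountingAlloc;

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        LIVE.fetch_add(1, Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        LIVE.fetch_sub(1, Ordering::Relaxed);
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static ALLOC: CountingAlloc = CountingAlloc;

fn main() {
    let before = LIVE.load(Ordering::Relaxed);
    {
        let v = vec![1u8, 2, 3]; // one heap allocation
        drop(v);                 // freed here
    }
    // A zero delta means nothing in the scope above leaked.
    assert_eq!(LIVE.load(Ordering::Relaxed), before);
}
```

It's global rather than localized like Zig's per-scope allocators, but a check like this can run around any block of code you care about.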
The idiomatic way to handle that is with some combination of `defer` and `errdefer`. Then, in code, there are no lines between alloc/free, and if a try would exit before the free then you would instead first free anyway.
> drop semantic is better in that regard
The drop trait has its warts too. Zig's version sets up a point in time where the given cleanup code _will_ run (ignoring power outages and the like), and Rust's sets up a point in time where the given cleanup code is _allowed_ to run. It solves UAF, but it makes performance harder to reason about and makes leaks easier to write (a common theme in my complaints about Rust -- it's mostly a nice language, but many of the design choices slightly encourage leaky code, so most projects of any size and complexity have leaks). Having drops be that easy to write also encourages programmers to think in terms of small units of data rather than collections and common lifetimes, typically resulting in increased execution times.
> The drop trait has its warts too. Zig's version sets up a point in time where the given cleanup code _will_ run (ignoring power outages and the like), and Rust's sets up a point in time where the given cleanup code is _allowed_ to run.
The reason they are only allowed to run is that panics can be configured to abort, so people are discouraged from using Drop to maintain safety constraints. But any language will have issues executing code before exiting if you go out of your way to set things up so normal unwinding doesn't occur.
> It solves UAF, but it makes performance harder to reason about and makes leaks easier to write (a common theme in my complaints about Rust -- it's mostly a nice language, but many of the design choices slightly encourage leaky code, so most projects of any size and complexity have leaks).
I don't see how Drop causes leaks.
> Having drops be that easy to write also encourages programmers to think in terms of small units of data rather than collections and common lifetimes, typically resulting in increased execution times.
Beyond my skepticism about the mechanism being a significant source of performance degradation, writing collections that take responsibility over dropping their contents in one go isn't particularly difficult or restricted by the language.
> You're making it sound like Drop is regularly not called
Sorry if it came across that way. My claim is weaker, that it's sometimes not called even in cases where it could be with a smart enough programmer/language.
> But any language will have issue executing code before exiting if you go out of your way to...
Is there an easy way to make Drop work nicely with signal-handler-induced shutdowns, other than having a "correctly" configured panic handler and panicking in the signal handler (and even that can cause other UB depending on what code Drop runs)? The edge cases for the feature look thorny from my perspective, but (some) other languages handle that by forcing you to be explicit about which code runs when.
> I don't see how Drop causes leaks
"Relying" on Drop is the most common source of Drop-related leaks. All it takes is one extra reference to cause leaky code (see Actix as a case study in such problems).
> Beyond my skepticism about the mechanism being a significant source of performance degradation
That's... pretty easy to test? Pick any problem that potentially has a lot of allocations, like an async sudoku solver, and check how it performs with a collection (arena, pool, ...) vs using Drop on each item. I promise the difference is substantial.
> writing collections that take responsibility over dropping their contents in one go isn't particularly difficult or restricted by the language
No, not at all. I totally agree. The concept of "intentional friction" is useful here. An expert can make excellent Rust code. My last job had one person like that (not that it matters, but that was out of many people who wrote Rust there), and you have shining examples like BurntSushi. My complaint is that the language's happy path doesn't guard against those performance mishaps, and since most Rust code I see in the wild has fallen prey to those pitfalls I think it's a reasonable complaint.
>The idiomatic way to handle that is with some combination of `defer` and `errdefer`. Then, in code, there are no lines between alloc/free, and if a try would exit before the free then you would instead first free anyway.
Do I understand correctly that talking about the idiomatic way is the same as talking about programming style, not about language semantics?
Correct. Much like in Python or C, you can explicitly handle every error/break/early-return condition and manually free the resource. Each language offers a tool to make that easier (and more likely to be correct, especially as the code changes) in most cases though. In Python it's try/finally, in C it's `goto ERR`, and in Zig it's defer/errdefer.
You literally don't leak memory in safe Rust; that is kind of the whole point. If you use smart pointers, sure, you can leak, but that is not only Rust: any ref-counted abstraction has that.
Drop IMO is preferred over defer because you can't forget to call a free. If you encapsulate your unsafe (which you should), you can guarantee it will be freed. I much prefer that. If you want to free early, just call drop manually; after that, if the memory is not freed, then it's on you to set an allocator based on your needs.
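As a sketch of that "encapsulate your unsafe" pattern (the `RawBuf` type here is invented for illustration): put the raw allocation behind a type whose Drop frees it, so no call site can forget.

```rust
use std::alloc::{alloc, dealloc, Layout};

// Hypothetical wrapper around a raw heap allocation. The only way to get
// the buffer is through this type, and the type's Drop is the only free.
struct RawBuf {
    ptr: *mut u8,
    layout: Layout,
}

impl RawBuf {
    fn new(size: usize) -> RawBuf {
        let layout = Layout::from_size_align(size, 1).expect("bad layout");
        let ptr = unsafe { alloc(layout) };
        assert!(!ptr.is_null(), "allocation failed");
        RawBuf { ptr, layout }
    }
}

impl Drop for RawBuf {
    fn drop(&mut self) {
        // Runs on every exit path -- early return, `?`, or panic-unwind --
        // which is the property `defer` gives you manually in Zig.
        unsafe { dealloc(self.ptr, self.layout) }
    }
}

fn main() {
    let buf = RawBuf::new(64);
    unsafe { buf.ptr.write(7) };
    // `buf` is freed automatically at the end of scope; to free early,
    // call `drop(buf)` explicitly.
}
```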
> You literally don't leak memory in safe rust, that is kinda of the whole point.
That's a common misconception. Safe Rust prevents use-after-free, double-free, accessing uninitialized memory, (an extremely constrained form of) data races, and a number of other things. It doesn't give a rip about leaks beyond a best effort. Leaks do not violate memory safety.
Part of the reason it doesn't care is that it's hard to infer your intent. A common class of bug is accidentally boxing a reference to something you'd like to drop (this and similar flaws afflicted Actix till at least the 3.0 release). Rust confirms you don't have a UAF and that you don't drop the data before you're done with it, but if you accidentally write the code so that the data lives nearly forever, that drop is never executed.
Rust, similarly, makes no guarantees about Drop's behavior at any point in time (other than that it won't be executed too soon and should usually be attempted eventually), and definitely not near program shutdown or similar. It's especially dangerous because a huge fraction of programs run correctly regardless, most of the time.
I like Drop. Memory leaks in Rust are incredibly uncommon, but they are considered memory safe: Box::leak is declared safe, as per the leakpocalypse right before 1.0.
Leaked memory is never freed, so a pointer to it can't dangle; a leak won't cause UB, even though it is still a bug.
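To make the point concrete, here are two leaks that compile in entirely safe Rust (the `Node` type is invented for the sketch):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can point at another node, enabling a reference cycle.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    // 1. Box::leak is a safe function by design (post-leakpocalypse): the
    // allocation is intentionally never freed.
    let forever: &'static mut u32 = Box::leak(Box::new(1));
    *forever += 1;
    assert_eq!(*forever, 2);

    // 2. A reference-counted cycle: neither node's count ever reaches zero,
    // so neither Drop runs, yet no unsafe code was written.
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
    *a.next.borrow_mut() = Some(b.clone());
    assert_eq!(Rc::strong_count(&a), 2); // the cycle keeps both alive
}
```

Neither program exhibits UB; the memory is simply never reclaimed.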
> Rust’s raw pointer types are nullable by default and have no null-pointer dereference checking. There is a NonNull<T> pointer type that gives you more safety. I’m not sure why it’s not the default, since Rust’s references are non-nullable.
NonNull<T> would probably be a better default indeed, but one reason might be that NonNull<T> has more UB than *mut T (namely, it is UB for it to be the null pointer).
(A lot of the text is about how more UB makes unsafe programming harder, which is a fair point)
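The trade is visible in the types themselves: NonNull<T> carries the never-null invariant, and in exchange Option<NonNull<T>> gets the null value back as its None niche at no size cost, which is essentially Zig's ?*T spelled as a library type. A quick sketch:

```rust
use std::mem::size_of;
use std::ptr::NonNull;

fn main() {
    // The niche optimization is guaranteed: Option<NonNull<T>> is
    // pointer-sized, so "nullable" is spelled explicitly in the type,
    // while bare NonNull<T> can't represent null at all.
    assert_eq!(size_of::<Option<NonNull<u32>>>(), size_of::<*mut u32>());

    let mut x = 5u32;
    // NonNull::new checks for null and returns an Option, forcing the
    // caller to handle the null branch -- much like matching on ?*T.
    assert!(NonNull::new(&mut x as *mut u32).is_some());
    assert!(NonNull::new(std::ptr::null_mut::<u32>()).is_none());
}
```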
> Zig pointers, by default, are non-null and you opt-in to nullability using the ? syntax (e.g. ?*Value). And of course, you get null pointer dereference checking enabled by default too.
Well, so Zig pointers, by default, have more UB than Rust pointers, at least concerning null pointers.
> Well so Zig pointers, by default, has more UB than Rust pointers, at least when concerning null pointers.
Not quite. In Zig, a (non-C) pointer type is closer to a Rust reference in its behavior. It isn't null, and that's enforced by the type system.
The author's point about opting into nullability is that in a function signature you can choose to allow null pointers to be passed to the function by making the argument type an optional pointer, a weaker type with the same safety guarantees (because, like with Rust, the type system forces you to handle the null branch).
Zig has a singular focus on doing low level systems programming and it shows.
Rust is very nice for systems programming, but it's not as ergonomic as Zig at all. Of course, it's a much nicer and more comfortable general purpose language.
I think both languages are great and hope to see them used more and more in the future. It's much better to debate when to use Zig and when to use Rust than to go back to the dark days of C and C++.
But it's run by a BDFL who doesn't ever want to become a foundation and share power, is known to throw temper tantrums, and freaks out about LGBTQ people and creators.
I wrote this on Mastodon, but I will copy it over here. Because he spouts nonsense all across the forum on a fairly regular basis.
The creator is a BDFL but they are against setting up a foundation because they <i>"lost faith in mankind"</i> and then ranted about "master vs main" on git, and used that to segue into a mini-rant on essentially "woke" ideology. In less than 500 characters.
> Here is a hint: If you are obsessed with racism and sexism it's because you're a racist and sexist. Now go and cancel "He-Man - Masters of the Universe" because obviously He-Man is a slave owner.
I get the vibe they might be racist, sexist, and just a bit out-of-touch with reality. I wouldn't trust having corporate code, or any code, built on it. Dude is a bit unhinged in that regard.
He's not unhinged, just right-wing politically.
If you believe that's a crime, or that we should all ignore software based on the political alignment of its creators, that's telling me more about you than about Nim's BDFL.
> all across the forum on a fairly regular basis.
Two times, more than 2 years ago, is not "all across", nor is it "regular". I am a regular user on the forum, and I haven't seen any rants apart from the two you referenced here.
> The garbage collection is the important part, it’s hard to make it work and be fast and be safe because its fundamentally a problem that doesn’t play nicely with the borrow checker.
That's fair, use the right tool for the right problem. For most applications you can just use a garbage collected language after all.
That’s not to say Piccolo couldn’t theoretically benefit from having its (already isolated) unsafe parts rewritten in Zig, but that’s up to the Zig developer community to chip away at with upstream contributions.
I feel like there's space for an imperative, garbage-collected language with pattern matching and ADTs. Just Golang + pattern matching + ADTs.
I would be sooooooooooooooooooooooooo happy. I think there ought to be a subset of Rust that uses Arc for everything, maybe with a language feature that desugars into it. Obviously this requires compilers to be able to reason and optimize out atomics.
I tried OCaml. My problem with it is that it smuggles in a lot of other concepts too, especially the thing about bindings and mutability. Also, the syntax.
Every FP language tends to be "pure" and remove every "unnecessary" bit of syntax to achieve some kind of "simplicity".
No, I want a true caveman language. Treat me like I'm a low-IQ idiot. "This function returns Option<T> DURRR ME HAVE TO MATCH ALL. ME DON UNDERSTAND WHAT "_" CLAUSE IS. ME NO UNDERSTAND LIFETIMES, LAST TIME I WROTE THE C BINDING ME BRAIN ALL FRY".
There's a space for a language for people who:
1. learn the language in 1 hour
2. spend 100 hours writing the most mundane, verbose code
instead of:
1. learn the language in 100 hours
2. spend 1 hour writing the most beautiful, concise and pure code
I think we might get these eventually in Golang, at least to a degree. In the end it's just a question whether they will be zero-cost like in Rust or incur some runtime overhead (I would assume the latter). Would be fine for me, I rely a lot on reflection for more complex use cases around dynamic typing and that already has a significant runtime cost (which is still ok in most cases).
C# is headed this way. TypeScript is there too, but only if you can ignore the fact that it is JavaScript masquerading as a more serious language for Halloween.
No, there's parts of Roc that are in Zig[1]. At a glance, it seems like they moved mostly-wholesale over to Zig at one point, then moved back while still keeping some parts.
Does Zig now enforce memory ownership rules and guarantee that everything has well-defined behavior? No? OK, then it's not safer. The title is misleading at best and downright false at worst.