Fuchsia IDL Overview (fuchsia.dev)
190 points by todsacerdoti on March 17, 2022 | hide | past | favorite | 158 comments


What I like about Fuchsia is that it is unapologetically component oriented, so much so that one of the first pages in the documentation originally pointed at this [0], which brought me right back to the 1990s. Component orientation for me is "Object Orientation - the good bits". You could argue that things like containers are component oriented, but it's worth trying to do this right from the ground up with capabilities.

[0] https://en.m.wikipedia.org/wiki/Component-based_software_eng...


FIDL could become the de facto standard for cross-language communication. I really like where Fuchsia is headed.

Edit: I say this because Fuchsia seems to be an attempt to build an OS from scratch. Introducing a new IPC layer built specifically for the permissions-based, language-agnostic model of the Zircon microkernel will make cross-language/cross-application interoperability way easier.


It is just yet another IDL.

Sun RPC, DCE, CORBA, COM, D-BUS, gRPC, AIDL, XPC,.....


I thought I would be old when I would be able to understand such a statement...

I'm old :-)


Cap'n Proto would be the closest thing to FIDL that I am familiar with. It has a capability / interface passing mechanism.

DBus as well, but DBus is horrible.


Unfortunately in a lot of microservices architectures, there's REST/JSON as well. Wouldn't be surprised if that's used on local machines too.


A lot of terminal programs already use JSON over stdin and stdout, making it possible to create very long chains of piped programs. Besides the power of jq to filter that data, jc makes it possible to convert the output of many classic programs to JSON.
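A stage in such a pipeline is easy to sketch: read one JSON object per line on stdin, filter or transform, write JSON on stdout. A minimal Python stage (the field names here are made up, not any real tool's schema):

```python
import json
import sys

def filter_records(lines, min_size):
    """Keep only records above a size threshold -- the kind of
    step jq would do in the middle of a pipeline."""
    for line in lines:
        record = json.loads(line)
        if record.get("size", 0) >= min_size:
            yield json.dumps(record)

if __name__ == "__main__":
    # e.g. `ls -l | jc --ls | python filter.py` in a real pipeline
    for out in filter_records(sys.stdin, min_size=1024):
        print(out)
```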


Don't forget Amazon's IDL, Smithy [0]

https://awslabs.github.io/smithy/


ASN.1


...will be around when roaches and x509 certs inherit the earth


Ugh. An utterly unparsable language designed to insert security holes into applications using it :-).


I had to decode an ASN.1 message and I remember it was easier to reverse engineer the bytestream rather than understanding the spec.


MIG for Mach


COM uses MIDL btw


COM/MS-RPC/MIDL are all different things. COM is built on top of MS-RPC, and MS-RPC uses MIDL to describe its protocol.


And MS-RPC and MIDL are derived from DCE's RPC and IDL.

DCE's RPC and IDL were derived from Apollo's NCS RPC and IDL.

I'm unaware of a direct link between NCS and Mach, but Mach's RPC and Matchmaker IDL (and the 'mig' tool) I think pre-dated NCS? Ironically, they're the basis for Apple's XPC today.


Sure, I should have been more precise.


And Dropbox's https://djinni.xlcpp.dev

Now maintained by independent developers.


XML-RPC


Yep. And now there is a fun build step for translating these interface definitions into C++, Dart, Java ... plus a ton of glue code, some of it probably generated.

Instead of creating a new language for interface definitions, why not just use an existing general purpose language? C++, Objective-C, Java, TypeScript ...

Or take this new language you have created and extend it with control structures and other basic language features, and write the OS in that.


Hehe, this is the opposite route, with the same destination, as the rant on C headers as an IDL from today's front page: https://news.ycombinator.com/item?id=30704642 “C Isn't a Programming Language Anymore”


Sure - that is why I would go for last suggestion if I were to build a new OS: Build a new language first.


For an example of a language made for building an operating system / an operating system made for a language:

https://en.wikipedia.org/wiki/Oberon_(programming_language)

https://en.wikipedia.org/wiki/Oberon_(operating_system)



Besides the genius of the people involved.

In my opinion Oberon (and similar) is an example of how a fundamentally massive task (building an OS from scratch) actually becomes easier if the scope of the task is expanded (by creating a language suitable for the task).

Google has so many resources and so many dispersed projects that individually are world class (virtual machines, programming languages, UI design, OS microkernels etc.) but they don't bring them together to a coherent something that really could bring things forward.


> Google has so many resources and so many dispersed projects that individually are world class (virtual machines, programming languages, UI design, OS microkernels etc.) but they don't bring them together to a coherent something that really could bring things forward.

This is the curse of being a big company. Google is not unique in this sense. I remember a conversation I had with Cory Doctorow a few years back. I met him at Cambridge's (UK) Judge Business School where he was giving a lecture to promote open-source ideology. After the lecture I was walking back home and he was going to the bus stop. So we started chatting, and I bragged about my employer, telling him that my company had open-sourced its operating system. He looked stunned and asked "Which company??". I said Symbian :) And he said, "Ah! Nokia!" :) And then he pulls out an N95 from his pocket and says "Nokia makes fantastic hardware, but look at the software! You can't even find a menu item. One has to press 2-3 buttons just to find an app." Then he pauses and says, "I visit Nokia, and meet a lot of people there. They have some of the smartest people. You talk to them individually, you realise how smart they are." And then he waves the N95 and says, "But you put them together, and they come up with this!" :) I understood what he was saying!


At least Nokia was trying. I don't see Google bringing their programming language or virtual machine people into Fuchsia. (Or maybe I am missing something?)



Sure, they support Dart, but Fuchsia is not implemented in it, and the interface definitions would have to be translated to Dart in a build step like with other languages.



You don't want general purpose languages for IDLs, full-stop. You need straightforward parsing, that is easy to standardize and implement, with some guarantees about bounded execution. That means no recursion, usually means no or limited loops, ideally single pass parsing, actual context-free grammar, etc.
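A toy illustration of what "straightforward parsing" buys you: the grammar below (invented for illustration, far simpler than FIDL's) can be parsed in a single pass, one regex per line, with no recursion, so execution is trivially bounded by input length:

```python
import re

# One regex covering the three legal line shapes of a toy IDL:
# a struct header, a field declaration, or a closing brace.
LINE = re.compile(r"^\s*(struct\s+(\w+)\s*\{|(\w+)\s*:\s*(\w+)\s*;|\})\s*$")

def parse(text):
    """Single pass, no recursion, no backtracking across lines."""
    types, current = {}, None
    for lineno, line in enumerate(text.splitlines(), 1):
        if not line.strip():
            continue
        m = LINE.match(line)
        if not m:
            raise SyntaxError(f"line {lineno}: {line!r}")
        if m.group(2):                # struct header opens a scope
            current = types.setdefault(m.group(2), {})
        elif m.group(3):              # field declaration: name -> type
            current[m.group(3)] = m.group(4)
        else:                         # closing brace ends the scope
            current = None
    return types
```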


"You need straightforward parsing, that is easy to standardize and implement, with some guarantees about bounded execution. "

Surely you can find a general purpose language that gives you that ...


General purpose languages tend to be Turing complete, not as a main goal per se, but it's hard to be expressive and not be TC. TC means you can't statically determine execution behavior (halting problem). There are not a lot of expressive languages outside this category. Some that come to mind are Cue and Dhall.

Hashicorp HCL2 might also be, but I don't know if it is Turing-complete off the top of my head. Some blog suggests it actually is TC, but I cannot corroborate.

Starlark/Bazel is TC.

You can add Berkeley Packet Filters and BLooP to the list of Turing-incomplete languages: https://www.quora.com/What-is-an-example-of-a-Turing-incompl...

https://news.ycombinator.com/item?id=28915655

https://dhall-lang.org/#

https://njoseph.me/mediawiki/DevOps/Terraform


I believe Starlark being Turing-complete would be a bug. The language takes steps to avoid it by disallowing recursion and only allowing iteration over finite sequences.
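The BLooP-style restriction behind this can be sketched in a few lines: if the only loop primitive requires an upper bound up front, every program provably terminates. (A hypothetical helper for illustration, not how Starlark is actually implemented.)

```python
def bounded_loop(n, state, step):
    """Apply `step` at most n times -- the only loop primitive allowed,
    so any program built from it terminates by construction."""
    for _ in range(n):
        state = step(state)
    return state

def add(a, b):
    # Addition as bounded iteration: exactly b increments,
    # never an unbounded search.
    return bounded_loop(b, a, lambda x: x + 1)
```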


WSDL



What they are doing here isn't radically different or new. At the high level it looks very similar to Android's AIDL[1]. That said, I loved AIDL and had the same impression. I'd love for something like this become the new standard in IPC. It was such a pleasure to work with and I've missed it since I left Android OS work.

1. https://developer.android.com/guide/components/aidl


Does FIDL have no more overhead than a non-inter-process, statically dispatched C function call to a non-inlined destination? Does it enable the multi-language equivalent of allocating freely mutable data structures (like structs and std::vector and Rust enums) in one file, and accessing their fields or calling inlined methods in another file in the same language? One thing I've observed is that Protobuf has caveats when mutated in place (used as in-memory mutable application state) and Cap'n Proto makes it effectively impossible to resize objects, whereas std::vector<complex struct> is difficult to access across languages without treating it as a pointer to an opaque type.


> One thing I've observed is that Protobuf has caveats when mutated in place (used as in-memory mutable application state) and Cap'n Proto makes it effectively impossible to resize objects

To clarify slightly: Encoded Protobufs cannot be mutated in-place at all. You have to parse into language-specific in-memory objects, mutate those, and then serialize again. Cap'n Proto does support in-place mutation, with the caveat that if any object changes size, it'll be moved to the end of the message, leaving behind a "hole". You can "garbage collect" the holes by copying the whole tree into a new message, which should be much less expensive overall than parsing and serializing the whole message as Protobuf would require.
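The "hole" behaviour can be modelled with a toy append-only arena (an illustration of the idea, not Cap'n Proto's actual wire format): growing an object writes a fresh copy at the end and orphans the old bytes, and compaction copies live objects into a new arena.

```python
class Arena:
    """Toy append-only message arena with Cap'n-Proto-style holes."""

    def __init__(self):
        self.buf = bytearray()
        self.objects = {}              # name -> (offset, length)

    def put(self, name, data):
        off = len(self.buf)
        self.buf += data
        self.objects[name] = (off, len(data))

    def resize(self, name, data):
        # The old bytes stay behind as an unreachable "hole".
        self.put(name, data)

    def gc(self):
        # "Garbage collect" holes by copying live objects into
        # a fresh arena -- cheaper than a full parse/serialize.
        fresh = Arena()
        for name, (off, ln) in self.objects.items():
            fresh.put(name, self.buf[off:off + ln])
        return fresh
```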


Sorry for that. In any case, is it practical to design cross-language FFI ABIs with a "wide" API like C++ or Rust code, without constraining interaction through a narrow inexpressive waist like text over HTTP, bytes or serialization over system calls or IPC, or arguably COM opaque objects?


Mostly I'd say "no". At least, not much better than Cap'n Proto supports now.

If you could convince multiple languages to share the same notion of a heap (including how to malloc() and free() memory), you might be able to get closer, allowing for truly mutable objects without the caveats Cap'n Proto has. However, memory allocation is a major place where languages tend to differ wildly.


That's basically what wasm attempts.


No, FIDL is optimized for IPC, not in-process communication. That said, I think it can be used quite well for use cases where you can stomach the overhead of an in-process channel such as Go channels or Rust mpsc. Personally, I think channel-based communication is one of the more viable cross-language communication strategies.


For protobufs (probably not the best example, but still) there is a way to improve the performance if you sacrifice some of the functionality. For example https://perfetto.dev/docs/design-docs/protozero (Perfetto is a profiling system, and AFAIK uses protobuf to communicate data between the system/app and itself).


This is very much like the DICE IDL for the L4 kernel [1].

[1] https://wiki.tudos.org/Dice_IDL_compiler


How does backwards compatibility work in FIDL? I'm familiar with protobufs which allow:

- Renaming a field: Does FIDL rely on the traversal order for stable IDs?

- Converting the type of any field backed by a varint to any other varint field (bool, ints)

- Converting a singular field to a repeated field.


ABI and API backwards compatibility is written up here:

https://fuchsia.dev/fuchsia-src/development/languages/fidl/g...


I’m struck that the project is called Fuchsia, but the website is gray white and teal.


How many RPC/messaging protocols are there to come out of google? Protobuf/GRPC, Flatbuffers, FIDL, more I can't think of?


Each serves a different purpose and makes different trade-offs as a result. Flatbuffers is a serialization format, not an RPC solution. gRPC is optimized for RPC, whereas FIDL is meant for IPC.

Also if you are trying to be comprehensive, don't forget about AIDL/Binder which is used by Android, and mojo, which is used by Chrome. Both of these are IPC, however they are not general enough to be used outside of their respective platforms.


Is there really some reason not to generalize IPC to RPC "between processes"?

To be more specific, is there really some justifiable difference between these two applications which prevents us from having a single tool that would cover both?


Sure. In IPC, you can have shared resources such as memory. In RPC, you need to assume all parties may be on separate systems. In IPC, you may be able to make guarantees about message ordering that are more challenging with RPC: packets may be dropped over a network link, and packets may arrive out of order. In IPC, you can express backpressure without the need for packet dropping, and therefore can avoid the overhead of things like TCP. With IPC, links are trusted, so you can avoid encrypting messages, but that is not something you can safely assume with RPC. Because of these sorts of things, the cost of sending a message is relatively low (measured in microseconds) compared to RPC (measured in hundreds of microseconds), and so the cost of serialization may be of greater concern as it may start to dominate. As a result, you may optimize your serialization format differently. Beyond that, an IPC system may allow for advanced usage of the shared kernel, which is not possible across different systems each with a separate kernel.


QNX makes no distinction between IPC and RPC. In fact, a program won't be able to tell if the other end of a message port was on the same CPU, same computer, or network (well, maybe by timing results). You didn't even need to recompile code to get that either. It just works right out of the box.


When you do this, you have to live by the combined constraints of both transports. It's not necessarily wrong, but having different mechanisms which are optimized for the specific constraints of each is also valid.


I wonder how many of those constraints actually apply to the IDL and surrounding programming model as opposed to the mere (de)serialization.

Could we have a single IDL that generates two sets of serialization code one optimized for IPC and the other for network transport?


I'll give you a more concrete experience. We have protocols designed to take advantage of shared memory to allow dma directly from the hardware to the pages backing the file you end up reading from. There are many layers/processes between my program and the driver which ultimately talks to the hardware, and being able to bypass copies for the data is useful. Now the protocol encodes this expectation of shared memory usage within it. We later built a mechanism to take existing protocols and use them across network boundaries, but were unable to utilize any protocols which relied on shared memory. It may be possible to emulate shared memory across the network boundary, but it's hard to do so performantly in all cases. So rather than modify the existing protocol to avoid shared memory, negatively affecting existing use cases, we opted to create a second protocol which was optimized for network use cases.

There are more of these sorts of examples I could enumerate if you find it worthwhile.


COM, .NET Remote and RMI do it.


It should be possible to encode those constraints as part of the language. For example, the distinction between sync and async calls can be represented by having a Future type and wrapping the return type in it.


Sync and async are all about cooperatively yielding control flow. However, in many cases, you may want to yield control on IPC, or to hold onto control on RPC. Yielding control depends on the sender logic. IPC/RPC depends on receiver characteristics. They are very orthogonal concepts.


QNX is able to do this because QNX doesn't understand shared memory. Everything is a message/channel.


I'm struggling to see why you'd want a specialised explicit IDL for IPC-only, as opposed to IPC being an implementation detail, using a same-address-space optimisation when services are co-located; cf. e.g. OmniORB (CORBA) [1].

While you can have tightly-coupled, highly interactive IPC that would be pathological over RPC, where DSL/IDL and protocol semantics rightly focus on more loosely-coupled service interfaces with network error recovery, I question whether it is desirable even in the IPC case. Intra-service IPC within a single runtime can be as chatty as needed without becoming an inter-service call (and potentially RPC in other architectures), so does Fuchsia answer an actual need? Is there actually a valid use case for tightly-coupled, highly interactive local inter-service IPC that isn't better architected assuming RPC? Is the benefit of assumed IPC between co-located client/servers an actual problem that 'colocation optimised' RPC doesn't already solve well enough?

[1] https://www.omniorb-support.com/pipermail/omniorb-list/2006-...


RPCs aren't just slower for IPC, they are also missing very useful features. Like it's not possible to send an open file descriptor across RPC, but it's very possible (and very useful) to do the same over IPC. It's how something like an "open file dialog" can work without needing the app process to have full access to all the user's files.


Well, thinking around why you'd pass a local FD among some processes via IPC, what are the typical use cases? As a source/sink in a chain of piped processes? The connections are the pipes, not the file, except at the ends. Perhaps several services cooperating to differentially process parts of a file? You'd probably architect that as a FileService that wraps the FD and triggers events or streams to individual processing services, and here there's no fundamental difference whether it runs over RPC or IPC (although IPC throughput can be optimised, of course, transparently by the ORB). Whereas an FD-passing IPC solution is locked to local processing anyway (and only vertical scalability), and if you're already paying the price of writing IDL for that, isn't the RPC/IPC-style solution above both more flexible (it allows horizontal scale) and not significantly more complex, without locking you into yet another IDL?


Eh? I literally gave an example - asking the user to pick a file or open a hardware resource (audio stream, for example, or USB device). The trusted system process shows the file picker & opens the file, and then just gives the requesting app that open FD.

It's a pretty straightforward & extendable way to do permission handling, rather than needing the kernel to do all of that natively. Whereas if you go the RPC route, then you're into not just a monolithic kernel but a larger-than-ever monolithic kernel as it's now doing permission management, too.


> Like it's not possible to send an open file descriptor across RPC, but it's very possible (and very useful) to do the same over IPC.

Really? I know you can use SCM_RIGHTS on a socket, if that is what you mean. In my mind IPC means shared mmap'd memory, though


Android's Binder driver can send arbitrary FDs as can unix domain sockets via sendmsg. SCM_RIGHTS isn't limited to sending sockets, it can send any FD (open file, open socket, a memfd shared memory buffer, handle to a hardware resource, etc...)


There is also the problem of binary layout. RPC puts value on binary size, and IPC on latency with zero ser/des. The two binary layouts are simply incompatible. You can do zero copy with RPC, but that does not help with ser/des. FIDL uses a full uint64 just for a discriminated union tag, which is just crazy for any protocol that is going through a cable.


If it's IPC only then it avoids the ntohl conversion between different-endian machines, or alignment issues; another one would be shared memory maps, and special handling for OS objects ("handles", "events", etc.).


It's a surprisingly readable doc, check it out [0].

Compared to protobufs, it's much more concerned with being fixed-width, word-aligned, and copy-free in-memory accessible. Proto3 spends a lot of effort compressing ints on one hand while remaining flexible and allowing fields to be optional on the other.

On top of all that, there's a good deal that's specific to its kernel, Zircon, its permission/capabilities model, and how these messages interact with its syscalls.

I'd imagine these kinda details are going to matter a lot for a microkernel.

I don't know it as deeply but I'd say it's much closer to cap'n proto. Not sure who uses that though!

[0]: https://fuchsia.dev/fuchsia-src/reference/fidl/language/wire...


> Also if you are trying to be comprehensive, don't forget about AIDL/Binder which is used by Android, [...]. Both of these are IPC, however they are not general enough to be used outside of their respective platforms.

To be fair, Binder originates at Palm[1] and seems to have been originally intended to be a full cross-platform replacement for COM, including DCOM (intended but unimplemented) and OLE (apparently implemented but unreleased[2]).

[1] http://www.angryredplanet.com/~hackbod/openbinder/

[2] https://www.osnews.com/story/13674/introduction-to-openbinde...


FlatBuffers is more suited to IPC than any other purpose.


It was originally designed as a game asset storage format[1], wasn’t it? Which is why the reference implementation assumes trusted input by default[2], for example, unlike Protobufs, Cap’n Proto, or SBE.

[1] https://opensource.googleblog.com/2014/06/flatbuffers-memory...

[2] https://google.github.io/flatbuffers/flatbuffers_guide_use_c...


I've primarily used it as the serialization format of choice for encoding large files written to disk. The ability to mmap it and not need to read the entire file to access specific data was very useful. My understanding is that this sort of usecase is how it's typically used.


You forgot AIDL, used for Android IPC between apps, and for its pseudo microkernel like functionality introduced in Project Treble.


A company that never encourages or rewards fixing things, but does reward making new things is bound to produce many versions of the same thing. How many chat systems does google have? And how many dozens more have they already killed?


“No man ever steps in the same river twice. For it's not the same river and he's not the same man.”

There can be a lot of value in rebuilding the same things from the ground up. There'll always be differences and novel things that people earlier hadn't seen. Of course it's costly and disruptive because you kill a lot of stuff, but especially in software building new things is relatively fast.

People will always complain about companies that kill a lot of products or software but at the end of the day they get more chances to build things from the ground up.


But it's not fast for the people who follow. That's the problem.

If you create a product then there is an external cost borne by the users who need to use it. And then if you replace that product with some other product, those users need to retrain.

It's really easy to create a new product, but it is a commitment for someone else to adopt it. And just because the cost is externalised, doesn't mean it doesn't exist.


> There can be a lot of value in rebuilding the same things from the ground up.

Maybe there is value for managers (who got promotions) and programmers (who got promotions and learned something), but for Google there is no value at all: it loses its users.

I remember everyone using Gmail chat... which they "remade" (killed) like two times.

Everyone just moved on.

Also I don't understand why investors aren't unhappy with that. Google could still earn hundreds of millions on chat. Those won't be billions, but it is still money. And "a hundred million here, a hundred million there" starts to add up.


Protobuf and FIDL have very different design goals though. One is meant for backwards compatible over the wire communication between loosely coupled RPC systems, and one is meant to be a highly efficient IPC.

I expect, for example, that it is vastly more efficient to send a known datastructure over the wire in FIDL, but it is vastly more efficient to parse, modify/extend a repeated subfield, and then reserialize a protobuf.

You can't "fix" a system to achieve two goals that are in conflict, sometimes it is in fact better to have N opinionated systems, than one that is only okay at everything.


FIDL actually does make backwards compatibility a core concern. Being able to modify an existing type and allowing client and servers to continue to work when out of sync is achieved through a similar mechanism by which protobuf achieves it - a data type in which all members are optional.

That said, there are certainly many design decisions that FIDL does do differently because the constraints of the problems it tries to solve are different as you mention.


Skimming the docs, it looks like the heavyweight use of "tables", much like Flatbuffers, seems to be the only mechanism for forward evolution of a protocol.

It looks like with "structs" you can only remove boxed fields by setting their references to null, but not add fields.


Other types such as unions, enums, and bits also have properties that allow for evolution: https://fuchsia.dev/fuchsia-src/development/languages/fidl/g...


This difference in focus doesn't stop various teams at Google from forcing gRPC/Protobuf into use in places where an IPC system like Mojo/FIDL or even a simple wire protocol (JSON over WebSocket say) would be more appropriate. I know of at least 2 HW/embedded type projects at Google where major engineering had to be done for months because an upstream team refused to use anything but gRPC.


Mojo and FIDL aren't necessarily portable outside of their respective platforms. gRPC may have unnecessary overhead when used for IPC, but it's well supported, understood, and portable. I don't claim to understand the full constraints the teams you refer to were under, but I'd like you to consider that they may have done what they thought was best.


Why would you assume I didn't consider?


Are old applications going to run on new kernels? If so, backwards compatibility is a primary goal of an IPC dialog.


> How many chat systems does google have?

* Google+ (rip)

* Google Allo

* Google Chat

* Google Groups (separate from Google's Usenet service)

* Google Hangouts

* Google Messenger

* Google Duo

* SMS via Google Project Fi

* Google Voice

> And how many dozens more have they already killed?

Too many: https://killedbygoogle.com/


Listing the messaging apps here is comparing apples to oranges; gRPC and FIDL serve different purposes from a technical perspective, and people must have evaluated leveraging existing tech vs. creating something new before the investment.


I was at Google for only 2-3 years, but it felt like (from an Ads-team perspective) protobuf is the building block at Google. Some databases can hold data as protobufs, and then convert these row-oriented beasts into multi-million-column databases with each protobuf leaf being a separate column (!) such that analysts/linguists/statisticians and other scientists can look up their data quickly (given some latency of minutes to hours).


You listed a number of services that are closed or have been merged together to be fair. I'm also not sure what the point of listing sms from a phone service is.


You're missing the chat features of Google Maps, Google Pay, and Google Photos...maybe a few others I'm missing?


Don't forget the built-in chat in google docs that nobody uses! (Yes, the one other than the commenting system).


Oh my goodness, TIL. I know what I'll be doing tomorrow at work...


YouTube had a chat feature at one point recently.


You are not contributing to the discussion. your comment is irrelevant to the subject.


You missed Mojo, the IPC system in Chromium/ChromeOS.

Except there's also an older legacy system still used in spots; it doesn't have an IDL and is defined more in C++ macros.


I really hope they acknowledge Fuchsia in some way or another in IO 2022. It's taking really long...


How long would you expect an entirely new and modern OS to take?


I don't know how challenging this is vs., let's say, the first NT iteration or the switch to XNU and OS X at Apple, but I'd say Fuchsia is taking waaaay longer.

I've read somewhere that Fuchsia's UI is being developed behind closed doors, so it may be in a more ready state than what we believe. I wonder if Starnix and ART are already functional.


Over the new CPU generations, things have gotten more verbose to handle. For Intel, 16-bit real mode was pretty simple to write code for and build a system on. But then we needed more RAM, plus protection of memory from random processes, so 32-bit protected mode became a thing. And then we needed more RAM again, amongst other features, so a slew of improvements came with 64-bit, along with all the edge cases and complications that introduces.

So hardware has gotten more complex and featureful, which requires more effort and code to handle. There is a variety of popular hardware to support (ARMv7, ARM64, x86, amd64), not to mention network cards, and peripherals. There are so many new protocols and standards to support nowadays.

Basically, implementing an OS from scratch seems to get progressively harder as the years go by. The funny thing is, despite how time consuming and difficult it is to get right, web browsers have become so bloated that they're basically just as hard to implement from scratch.


For Google? I do not expect a lengthy implementation period.


Why not? They don't have a magical ability to do things faster. You can't throw 10,000 engineers at it and finish in a week.


Streams are conspicuously absent. I guess you'd just implement some kind of enumerator yourself?


Correct. You can make a request which includes another channel which can be used to stream the results of the original request. You can see guidance on how to do that documented here: https://fuchsia.dev/fuchsia-src/development/api/fidl?hl=en#p...
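That pattern can be sketched with in-process queues standing in for channels (the names and shapes here are hypothetical, not FIDL's API): the request itself carries a fresh channel that the server pushes results into.

```python
import queue
import threading

def server(requests):
    """Serve requests; each request carries its own results channel."""
    while True:
        req = requests.get()
        if req is None:                 # shutdown signal
            return
        query, results = req            # channel travels inside the request
        for item in range(query):
            results.put(item)
        results.put(None)               # end-of-stream marker

def call_streaming(requests, query):
    """Client side: mint a channel, send it with the request,
    then drain the stream until the end marker."""
    results = queue.Queue()
    requests.put((query, results))
    out = []
    while (item := results.get()) is not None:
        out.append(item)
    return out
```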


Odd that they reference optional types (note it seems to be a type-level optional rather than a field-level one) and use this `:optional` syntax, but I don't see a section that explicitly describes what optional is, or how it's recommended to integrate with different languages.

I'm very happy to see the concept does crop up somewhere. It's always a shame to see IDLs that don't leave much room for languages to leverage null-safety features. Especially ones that don't make the distinction between the concept of optional and default (looking at you proto3).


Help me please. Where is the starting point of the OS? What's the main system part in Fuchsia, like the Linux kernel?



Many thanks. There are too many tutorials.


Slightly offtopic:

What is the current status of Fuchsia?

In the sense of: when could one expect to use it as a stable daily driver?


Given that it is already running Chrome [0], I would guess another few years or so.

It will probably first replace the Linux kernel in ChromeOS, then Android later.

[0] https://9to5google.com/2022/03/04/full-google-chrome-browser...


It depends on what you consider a stable daily driver. Google rolled Fuchsia out to Google Nest Hub smart displays starting last August. I have one and use it daily. Is Fuchsia a stable daily driver for me?


Interesting, I didn't know that. But I think you know what I meant. A stable OS for my smartphone and laptop.


It's not really practical if you need more than just a browser and command line right now to run a pure Fuchsia system. But there's dahliaOS which is a sort of hybrid Fuchsia/Linux that I haven't used but is supposed to be pretty practical.

https://dahliaos.io/


"It's not really practical if you need more than just a browser and command line right now to run a pure Fuchsia system."

Well, in ChromeOS I do not really use (or have) much more, so I could probably actually work with that, if all is stable. But I was just curious and maybe look into dahliaOS, if I find the time.


For smartphones, Android is way bigger and harder to replace, so I don't think it will come any time soon, but ChromeOS might be a good candidate for kernel and core component replacement. It looks like they've made Chrome running on Fuchsia an absolute minimum requirement for Chromebooks.

https://9to5google.com/2022/03/04/full-google-chrome-browser...


I wonder why their own protobuf or flatbuffers didn't fit the bill.


I am disappointed that this IPC mechanism doesn't allow optimization across boundaries.

For example, imagine a function call in C++, when compiling with all the optimization on. If a parameter is fixed or unused, the compiler can remove it and the associated memory location and code.

But in Fuchsia IDL, there is no opportunity to do similar things. The client must make a fully formed request, and the server must make a fully formed response.


Hmm, since the Fuchsia IDL is for IPC, how would the two programs communicating know which fields the other chose to omit?

They'd need to encode present/absent for every single field? Too many optional fields and you end up encoding variable lengths over and over too.

They go to great lengths to define alignment, how address-space-specific pointers are meant to be encoded/decoded in place, field ordering, etc. They seem like opposite ends of the tradeoff spectrum to me.

Protobufs drifted towards maximizing optionals, this seems geared towards minimizing them.


You'd have optimisation that happened at the 'connect' phase, which analyzed both binaries, and metadata left by the compiler in each, and optimized the machine code by stripping out unused fields and all the code to generate them.

It could also 'inline' some IPC calls by moving server code to the client.


How do you expect to ever manage that if the server and client aren't even guaranteed to be in the same language?

COM is one of the most battle-tested IPC mechanisms in production, with support for in-process shared libraries, and Microsoft was never crazy enough to try such optimizations in the Windows dynamic linker when loading COM libraries.


That's a really cool concept I hadn't considered. But ... Too futuristic maybe? I think this operates somewhat closer to a syscall/ABI than a calling convention/FFI than you'd expect.


I don't think this is a valid optimisation problem: look at how the zero value is optimised away in most serialisation formats (proto, rencode, etc). It will use very few bits of information.


> For example, imagine a function call in C++, when compiling with all the optimization on. If a parameter is fixed or unused, the compiler can remove it and the associated memory location and code.

You would have the same concern with any kind of dynamic linking. It's nothing new; it simply allows for forward compatibility with future versions of the external code.


What is the FIDL "string" type?

Please someone tell me they've learned from the decades of C/C++ gotchas and not made them arbitrary-length null-terminated sequences of bytes...


They are variable-length, UTF-8 encoded character sequences with an explicit length. They are notably not null-terminated: https://fuchsia.dev/fuchsia-src/reference/fidl/language/lang...


COM already learned that lesson 25 years ago: they have a length field plus a null terminator, for easily passing them directly to C-like APIs, since they needed to provide easy interop between VB and C++.

However, I wouldn't advise doing that directly unless you trust the callee.


This is basically gRPC with a different name.


That doesn't seem to be the case at all. There are many many differences.


"Fuchsia is an open source effort to create a production-grade operating system that prioritizes security, updatability, and performance. Fuchsia is a foundation for developers to create long-lasting products and experiences across a broad range of devices. "

"Security", "Updatability" ... "Long-Lasting". You'll forgive me google, if I don't believe that these are your true goals. History suggests otherwise. What I'd really like is the list of the ways you're planning to use this to increase how much data you collect on people.


Those claims may very well be true but they are not the reason Fuchsia exists. And I do not believe data collection is either. They can already collect everything they want on Android and Chrome OS. I believe the real purpose is getting rid of Linux because Linux is not something they can control and because Linux is GPL. And instead of leveraging the GPL to push OEMs to open source and mainline drivers, Google would rather side with the OEMs. For Google, Fuchsia allows for longer support (in order to compete with Apple) while also allowing the OEMs to keep drivers proprietary and never update them.


While I'm sure that Google is writing Fuchsia in part to move away from the GPL, there are very good technical goals that Fuchsia accomplishes that Linux does not. Because Fuchsia is based around a microkernel, it offers much better security guarantees than Linux ever could. Almost everything from device drivers to the TCP/IP stack is a "normal" process in Fuchsia. If there's a security vulnerability in the TCP/IP stack or in a device driver, even if it's an arbitrary code execution vulnerability, it's much harder for an attacker to move inside the system. Contrast that with Linux: if you compromise _any_ subsystem or _any_ device driver, you're in. And this is an inherent fault/trade-off when you select a monolithic design like the Linux kernel does.


This is basically what Project Treble did to Android: standard Linux drivers are considered "legacy" from Android's point of view, and as of Android 8 all new drivers are their own process using Android IPC and hardware buffer handles.


Which is a much better approach.

Having driver APIs is a good thing. Drivers belong in userspace, where they can get the benefits of compartmentalization.


This is pretty spot on, though I am not sure that Google has the leverage to push the mobile phone industry in that direction. The last 15 years have shown us time and again that the industry as a whole (mobile phones but also IoT) is not going to open source driver code and it is not going to maintain driver code. The result is that there will continue to be pile of obsolete Android phones that are less than two years old. Even Google only commits to ~3years support for their Android phones and they can afford to not make a profit on their phones. Cutting down on this waste strikes me as a serious win.


Leverage? They're basically the only game in town. Who else is going to write a modern mobile OS? Microsoft?


It is less leverage than you might think. Samsung once threatened to switch their phones to Tizen, Huawei is building out Harmony OS, and all the Chinese OEMs make hard forks of Android for that market since Google Play Services are illegal in China. A big part of why you would use Android is the price (free) and convenience (already built). Once you need to pay for it or put in extra development cycles, the choice is less clear. Google really can't just bend OEMs to its will like that.


> Huawei is building out Harmony OS

Ah yes, the Android reskin (see: https://arstechnica.com/gadgets/2021/02/harmonyos-hands-on-h...)

> Samsung once threatened to switch their Phones to Tizen

An empty threat -- Samsung knows that no one would buy their phones based on Samsung specific software selling points like Bixby™


OEMs can just carry on using the last Android fork before Fuchsia's switch.


Can they? I don't think so.

Putting aside their technical capability to do so, which is no small issue, all consumer Android phones must be certified by Google to be able to install Play Services. If Google mandates that they need to use Fuchsia for that, they will do so or get out of the market.


Yes, Huawei and Amazon as examples of why Google doesn't matter.


They are perfect examples; no one outside of China wanted to buy Huawei devices once they became de-Googled, and Amazon's devices only sell well because they're so incredibly cheap. Remember that time Amazon tried to release a "premium" phone?


I can assure you most Europeans don't have issues with de-Googled Huawei devices.

Regardless of the reason why Amazon Fire Tablets get sold, they are sold.


LoL try to use a banking app on a de-googled phone.


Fuchsia is Google's not-invented-here syndrome expressed in operating system form. There's a pattern of behavior where nothing invented/controlled outside of Google is ever good enough for Google. So they fixed C/C++ by inventing Go. They fixed cross-platform UIs by inventing Flutter. They've been trying to fix chat protocols in a weird time loop that results in a new chat product about every year. When WebKit became a success at Apple, they forked it and created Chrome. With each of those, the (better) alternative could have been working with open source communities to improve things. And of course they often do. But their reflex is to internalize things and make them Google things.

Their motive is very simple: control. Mostly they actually do a good job because they just have a lot of skilled people. Go is a nice language, people seem to like Flutter (even if I don't), Chrome is arguably the most popular browser by far, etc. So, from Google's perspective this probably is a proven strategy.

With Linux vs Fuchsia, their motive is crystal clear. Linux is kind of high maintenance, in the sense that herding its community in any kind of direction is somewhat of a challenge. That's Linus Torvalds's job, and he isn't a Googler; he's fiercely independent and hard to manage. Google soft forked years ago with Android, and the Android kernel is way behind what happens in the Linux kernel and has lots of custom patches. Fuchsia is an attempt to break away from that and have something that is easier to deal with technically, without GPL and other IP issues, and something they control. It's the same reason Apple and Microsoft don't ship operating systems based on Linux (yet; MS came close a few times): they like having control.

In my view the challenges with Fuchsia are actually non-technical and related to Google's control over it. That is not going to sit well with OEMs that have already had to deal with Google's level of control over Android. Are the likes of Samsung, Huawei, etc. going to jump on the bandwagon? I think that they won't be in a hurry to do that. I think that's one of the reasons why Google is taking their sweet time getting to market with this with a more serious product than the Nest stuff they did a while back. Maybe I'm wrong and the next Pixel phone is going to be Fuchsia based. Or maybe that will never happen.


> Google soft forked years ago with Android and the Android kernel is way behind what happens in the Linux kernel and has lots of custom patches.

That's not really true at all? Upstream finally took a lot of the stuff they objected to almost a decade ago (like wakelocks & binder), and since then Google has reduced the number of forks, not increased it (like contributing F_SEAL_FUTURE_WRITE to memfd to allow memfd to be a viable replacement for ashmem: https://lwn.net/Articles/768785/ )

And then the most recent biggest "downstream patches" for Linux were around the scheduler, and that's all been upstreamed, too (and again quite a while ago), in the form of EAS: https://developer.arm.com/tools-and-software/open-source-sof...


> Maybe I'm wrong and the next Pixel phone is going to be Fuchsia based. Or maybe that will never happen.

I bet it will happen.

I also bet during this decade and the next that at least one Pixel phone will be Fuchsia based and ChromeOS will also be Fuchsia based.

Google's intentions with Fuchsia are absolutely clear.


> That is not going to sit well with OEMs that have already had to deal with Google's level of control over Android. Are the likes of Samsung, Huawei, etc. going to jump on the bandwagon? I think that they won't be in a hurry to do that. I think that's one of the reasons why Google is taking their sweet time

It has happened before, too. At one point Samsung was supposed to smother every Android native app with their own versions and develop a new OS (Tizen) alongside. And in the intervening years Samsung and others have not proven that they can create large, quality software projects and ecosystems. So Google really has to do no more than reduce staffing for Android if and when they want to promote Fuchsia over it. All these phone makers are not going to pick up the slack and start maintaining it.

Also, the likes of Samsung are really vulnerable to low-cost manufacturers from China, so challenging Google on the low-level technical layer is hardly wise as far as business goes.

One can argue that vendors can just fork whatever "last/latest" release of Android and go their own path. This can indeed work for a few years. But with Apple's and Google's marketing of new experiences and features only supported by new OS versions, this will be tough. These smaller vendors can't hire top software engineers, who are used to Google-level salaries, to develop the features that their otherwise estranged partner would have delivered.


"Are the likes of Samsung, Huawei, etc. going to jump on the band wagon?"

Samsung is already a Fuchsia contributor, so they are to some degree considering jumping on board.


Also, moving away from the Linux kernel will make it even harder to find support for the hardware outside of Fuchsia, not to mention there will be no way to repurpose old devices. If Google really had good intentions besides profits, they could promise to open bootloaders and release documentation for devices after they're declared obsolete, which wouldn't impact their business in any way, but I'm not holding my breath on that.


An alternative perspective would argue that an Android device without a boot breach is already as locked down as it can be, and that the same breach on Fuchsia might give a much easier path to mix and match unchanged components and custom implementations.

I don't know which argument carries more weight. Should we expect Fuchsia to take a defense in depth approach to component authentication?


Since Fuchsia (the OS) is open source, I wouldn't be surprised if the community builds a compatibility layer for the binary drivers to run on Linux. It wouldn't be a good thing to end up there, though.


Haven't they put a lot of work into updatability with Android, which appears to be slowly paying off?


Inspect the source then SMH.


[flagged]


Feel free to look over: https://news.ycombinator.com/newsguidelines.html

In particular: Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken.

It's a post about an IDL, a fairly technical domain, for an open source project that's run by Google, sure, but is also an incredibly ambitious technical project.

It's literally a post about binary formats. What's there to defend or attack? Oh, just a bunch of side comments not about binary formats or IPC? Downvote and move on.

Disclosure: Never worked for Google


My bad! I made a mistake. The sequence of comments I saw (when there were around 40) looked uncanny; that's why I commented that way.


Why did they reinvent Protos again?


Presumably because protos are designed around RPC while FIDL is designed for IPC, and different trade-offs make sense in each case.


To get promoted, of course. Perf is just around the corner.

