Why do you say that? Their solution of using shared memory (structured as a ring buffer) sounds perfect for their use case. Bonus points for using Rust to do it. How would you do it?
Edit: I guess perhaps you're saying that they don't know all the networking configuration knobs they could exercise, and that's probably true. However, they landed on a more optimal solution that avoided networking altogether, so they no longer had any need to research network configuration. I'd say they made the right choice.
This is because, reading how they came up with the solution, it is clear they have little understanding of how low-level stuff works. For example, they were surprised by the amount of data, by the fact that TCP packets are not the same as application-level packets or frames, etc.
As for the ring buffer design, I'm not sure I understand their solution. The article mentions the media encoder runs in a separate process, and Chromium threads live in their own processes (afaik) as well. But the ring buffer requirement says "lock free", which only makes sense inside a single process.
> But the ring buffer requirement says "lock free", which only makes sense inside a single process.
No, "lock free" is a thing that's nice to have when you've got two threads accessing the same memory. It doesn't matter if those two threads are in the same process or are two different processes accessing the same memory. It's almost certainly two different processes in this case, and the shared memory is probably a memory-mapped file.
Whatever it is, the shared memory approach is going to be much faster than using the kernel to ship the data between the two processes. Going via the kernel means two copies, and probably two syscalls as well.
I understand you can set up a data structure in shared memory and use lock-free instructions to access it. However, I have never seen this done in practice due to the complexity. One particularly complicated scenario that comes to mind is dealing with unexpected process failures, which is quite different from dealing with exceptions in a thread.
"Lock-free" does not in any way imply a single process. Quite the opposite. We don't call single-threaded code lock-free, because all single-threaded code is lock-free by definition: you can't meaningfully use locks at all in that context, so it makes no sense to describe it as lock-free. This is like gluten-free water, complete nonsense.
Lock-free code is designed for concurrent access, but uses some clever tricks to handle synchronization between threads or processes without actually taking a lock. Lock-free explicitly implies concurrency.