Yeah. I think I've only ever found one situation where offloading work to a worker saved more time than was lost through serializing/deserializing. Doing heavy work often means working with a huge set of data, which means the cost of passing that data via messages scales with the benefits of parallelizing the work.
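
The main escape hatch is transferable objects, which skip the copy for binary buffers. A rough sketch, assuming a browser context and a hypothetical crunch.js worker:

    // postMessage(buf) alone would structured-clone (deep-copy) the buffer.
    // Listing it as a transferable instead moves ownership: no copy is made,
    // but `buf` becomes detached and unusable on this side afterwards.
    const worker = new Worker("crunch.js");
    const buf = new Float64Array(50_000_000).buffer; // ~400 MB
    worker.postMessage({ cmd: "crunch", buf }, [buf]);

That only helps for ArrayBuffers and a few other transferable types, though, not arbitrary object graphs.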

I think the clues are all there in the MDN docs for web workers: have a worker act as a forward proxy for services. You send it a URL, it decides if it needs to make a network request, it cooks down the response for you and sends you the condensed result.

Most tasks take more memory in the middle than at the beginning and end. And if you're sharing memory between processes that can only communicate by setting bytes, then the memory at the beginning and end represents the communication overhead. The latency.

But this is also why things like p-limit work: they pause an array of arbitrary tasks during the induction phase, before the data expands into a complex state that has to be retained in memory concurrently with all of its peers. By partially linearizing, you put a clamp on peak memory usage that Promise.all(arr.map(...)) does not; it's not just a fix for the thundering herd.
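
For example (a sketch; items and processItem are hypothetical stand-ins for real work):

    import pLimit from "p-limit";

    // Hypothetical stand-ins for the real workload:
    const items = Array.from({ length: 1000 }, (_, i) => i);
    const processItem = async (item: number) => {
      const big = new Array(1_000_000).fill(item); // the "middle" memory
      await new Promise((r) => setTimeout(r, 100)); // some async work
      return big.length;
    };

    // Unbounded: all 1000 tasks sit in their peak-memory middle at once.
    await Promise.all(items.map((item) => processItem(item)));

    // Clamped: at most 4 tasks are past the induction phase at any moment;
    // the rest are queued before they've allocated anything big.
    const limit = pLimit(4);
    await Promise.all(items.map((item) => limit(() => processItem(item))));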


I find the "killer app" right now is anything where you need to integrate information you don't already have in your brain. A new language or framework, a third-party API, etc. Something straightforward but foreign, and well-documented. You'll save so much time because Claude has already read the docs

Same

I also just prefer CC's UX. I've tried to make myself use Copilot and Roo and I just couldn't. The extra mental overhead and UI context-switching took me out of the flow. And tab completion has never felt valuable to me.

But the chat UX is so simple it doesn't take up any extra brain-cycles. It's easier to alt-tab to and from; it feels like slacking a coworker. I can have one or more terminal windows open with agents I'm managing, and still monitor/intervene in my editor as they work. Fits much nicer with my brain, and accelerates my flow instead of disrupting it

There's something starkly different for me about not having to think about exactly what context to feed to the tool, which text to highlight or tabs to open, which predefined agent to select, which IDE button to press

Just formulate my concepts and intent and then express those in words. If I need to be more precise in my words then I will be, but I stay in a concepts + words headspace. That's very important for conserving my own mental context window


Amazing that there's no mention of AI in this post. People have been trying and failing to blur this line since the beginning of computing, and the only real success story has been Excel. And it's because rigid computing systems have to draw a line somewhere between user and developer, and if that line is in the wrong place, people will either get hampered or lost. And the correct threshold is different for every user and use-case

AI is going to finally be the realization of this dream. I don't think it could have happened any other way


And the main reason Excel works is that it's very tangible. Referencing by cell address is obvious in ways declaring variables isn't. And the formula style is similar to the f(x) we learned in high school. And its functions are domain primitives instead of weird things like main() or sys.exit().

But most people don't like to program. They want to use their computer to do a task. And maybe they think of a novel way to do the task, but it's not a daily or even monthly occurrence. And even then they lack the training to fully specify their ideas (it's tedious for a reason). And from what I've seen, almost no one wants to spend the day trying to get it out of a probability machine.


You're spot-on that Excel managed to break through because it's one of the few kinds of rigidness that was both legible and familiar enough to the average person. But very little else is, because the average person doesn't think in precision.

> they lack the training to fully specify their ideas (it's tedious for a reason)

> almost no one wants to spend the day trying to get it out of a probability machine

These are opposing statements. What AI does best is taking fuzziness or under-specification and making sense out of it. For the first time in history, computers don't need precise instruction to be useful.


> For the first time in history, computers don't need precise instruction to be useful.

They still do. What we've mostly done is extract common meaning out of textual language so that we can map a prompt to an interpretation. But the issue is that a single sentence can have many interpretations, and some are plainly wrong. The training data is also not so clean.

The computer is still using precise instructions. Now we just have a mapping program that uses probability and weights to match natural language to those instructions, with no limit on the gap between what it chooses and what was needed (a.k.a. error).

The standard way was to have the idea go through multiple people (or personas), each refining it until it's precise enough to be done by a computer. Yes, the final result, the program, is restrictive. But as with a checklist, the dependability matters more than creative flexibility. Especially in cases like "Transfer N dollars from account X to account Y": you don't want LLMs to decide it's fine for N to be negative or for N to be greater than the balance of account X.
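
A toy sketch of what that precision buys (hypothetical names and types):

    interface Account { balance: number }

    // The rules are guard clauses, not a model's judgment call.
    function transfer(n: number, from: Account, to: Account): void {
      if (!Number.isFinite(n) || n <= 0) throw new Error("amount must be positive");
      if (n > from.balance) throw new Error("insufficient funds");
      from.balance -= n;
      to.balance += n;
    }

A negative N or an overdraft is rejected every single time, which is the whole point.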


Excel is so usable because that rigidity eliminates an entire class of bugs

No race conditions, no side effects, no confusing state to debug

Functional languages like Nix are also extremely inflexible by design, but there we recognize it as a "smart" feature, not a "dumb" limitation


Excel isn't the only one; HyperCard, FileMaker Plus, askSam, Emacs, the Unix shell, SAS, R, SQL, Microsoft Access, Visual Basic, HTML, PHP, ... and maybe even APL, RPG-II, PowerBuilder, and COBOL.

It's hard to tell what will happen with AI.


AI papers over underlying problems; it's a very strong force reinforcing the status quo. I don't have strong feelings about AI, but I feel like "the AI generates a terminal command line for you" is just not going to work. It only deepens the division between users, who have to blindly trust it isn't hallucinating, and programmers, who can check that it's correct. (And most people won't think to use it this way anyway, because they've never heard of a terminal before.)


Hahahahaha


I wonder if the software developer mindset plays into this. We're really good at over-reporting all possibly-relevant information for "debugging" purposes


Feedback: the UI feels like a direct reference to VSCode, which is familiar to software developers, but not to lawyers. If you're hoping this will be adopted by lawyers, I would focus on making the UX familiar to them. Look at software that they already use, and mimic those idioms insofar as it makes sense to do so. I would also have the base web domain link to a normal home/info page, not to the demo directly. And maybe prefill the demo with some actual content (documents/etc.) so people can really see what it does and how

Good luck!


Great feedback; and I do agree. The HN link goes to the app itself because we're impatient, but there is an actual landing page most visitors hit.

I've gone back and forth on the UX idea, and while I do agree, it's important that Tritium selects for users who are going to be able to quickly adopt the newer concepts. Simply presenting a "better Word" isn't really going to move the needle. It's really a shift in expectations. That said, I have recently backed off defaulting to dark mode to make it feel slightly more familiar.


I think software people tend to underestimate the value of superficial familiarity. By all means, adhere to your new concepts and mental model. But even things like coloring, placement of the menu bar, the icons that you use, the organization of the UI, etc can go a really long way

Think about programming languages- ones that introduce radical new concepts may still employ familiar syntax/naming to smooth the transition for newcomers. Rust mimicked C++, TypeScript extended JS, etc. These languages were made to introduce powerful new ways of thinking about code, but by appearing as similar as possible to what devs already knew, they freed up more brain cycles for people trying to adopt them. They didn't muddy their concept-space for the sake of familiarity, but they didn't introduce any more unfamiliarity than they actually needed to for the benefits they wanted to give


No attorney who is flummoxed by this UX is going to touch an AI product in any meaningful way. Making legal tools for lawyers who would otherwise be using cuneiform tablets or the dictation pool is a waste of conversation. Looking similar to the tools a seventy-five-year-old lawyer uses is like making an F1 car that would look familiar to Jackie Stewart: yeah, it'd probably help him adapt, but not enough to be competitive with an actual car.

Dig the idea of this product, will give it a whirl tonight.

Source: attorney, former dev


It's not about being flummoxed, it's about being annoyed enough not to give it a chance

How much less adoption would Rust have gotten if it looked like OCaml instead of C++?

Its adopters are not stupid, they could have figured out alien syntax if they were already convinced of the benefit. But selling someone on an entirely new substrate for their professional work is a huge ask. You need to make it as immediately-palatable as possible, or they'll just keep on sailing without giving it a chance.


As much as I wish the world to be different, I've heard so much whining about the Ada language's not-C-likeness that I tend to agree with you.


> Rust mimicked C++

if anything, it didn't mimic /enough/


I really like the idea. I could see some of my academic collaborators use something like this because it has features typically only supported when working with plaintext. A lot of academics do not love working with LaTeX.

But I would push back a bit against the UX and it being a "better Word". It is not immediately clear from looking at the website whether you support tracking changes. And if you support editing Word documents, why aren't basic editing features, like font selection, size, and weight, exposed in the UI? (I am viewing it on mobile Chrome and I might have missed them, because your page doesn't support pinch to zoom.)

You don't have to make it look like Word but it must be designed to facilitate common interaction patterns needed for working with Word documents.

(If you are building it on top of VSCode you could use its multiplayer features, which could be a good sell. )


Figure out how to make it uncompromisingly productive for power users and then dumb it down, not the other way around


There are a couple of ways to skin it. In fields where people are happy with the products, I think familiarity is good. In fields where people hate the products, you get to go in tabula rasa. In this case they take advantage of the form of the interface through multisearch et al. Instead of resembling legal software, I wonder if they should resemble a court or a briefcase?


> [Clarc] is much faster and also will easily let HN run on multiple cores

This was all running on a single core??


Modern CPUs are crazy fast. 4chan was serving 4 million users with a single server, a ten-year-old version of PHP, and like 10,000 lines of spaghetti code. If you do even basic code quality, profiling, and optimization, you can serve a huge number of users with a fraction of a CPU core.

I/O tends to be the bottleneck (disk IOPS and throughput, network connections, IOPS and throughput). HN only serves text so that's mostly an easy problem.


I still can't wrap my head around how the conventional wisdom in the industry for working around that problem is to add even more slow network I/O dependencies.


Yo dawg, I hear you want to cache your cache of shards.


4chan is a special case, because all of its content pages are static HTML files being served by nginx that are rewritten on the server every time someone makes a post. There's nothing dynamic, everyone is served the exact same page, which makes it much easier to scale.
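
The shape of it is something like this (a hypothetical Node sketch of the pattern, not 4chan's actual code, which was PHP):

    import { writeFile, rename } from "node:fs/promises";

    interface Thread { id: number; posts: string[] }
    const renderThread = (t: Thread): string =>
      `<ul>${t.posts.map((p) => `<li>${p}</li>`).join("")}</ul>`;

    // Render once per write, not once per read; nginx just serves the
    // resulting file, identical for every visitor.
    async function onNewPost(thread: Thread): Promise<void> {
      const path = `/var/www/threads/${thread.id}.html`;
      await writeFile(path + ".tmp", renderThread(thread));
      await rename(path + ".tmp", path); // atomic swap, so no torn reads
    }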


It's not a special case at all. 20 years ago this was standard architecture (hell, HN still caches static versions of pages for logged-out users).

No, what changed is that the industry devolved into over-reliance on mountains of 'frameworks' and other garbage that no one person fully understands.

Things have gotten worse, not better.


The "this won't scale" dogma pushed by cloud providers via frameworks has actually scared people into believing they really need a lot more resources than they actually do to display information on the web.

It's really dumbfounding that most devs fell for it even as raw computing power has gotten drastically cheaper.


I was having a conversation with some younger devs about hosting websites for our photography hobbies. One was convinced hosting the photos on your own domain would bankrupt you in bandwidth costs. It's wild.


I very much enjoyed the Vercel fanboys posting their enormous bills on Twitter, and then daring people to explain how they could possibly run it on, you know, a server for anything close to the price.

I took the bait once and analyzed a $5000 bill. IIRC, it worked out to about the compute provided by an RPi 4. “OK, but what about when your site explodes in popularity?” “I dunno, take the other $4900 and buy more RPis?”


Or get a hundred Hetzner dedis


Sounds like the real web scale was all of the AWS bills we paid along the way


Static HTML and caching aren't special cases by any means, but a message board where literally nothing changes between users certainly seems like a special case, even twenty years ago. You don't need that in order to make a site run fast, of course, but that limitation certainly simplifies things.


I worked at a company near the top of https://en.wikipedia.org/wiki/List_of_the_largest_software_c... for a while. It was extremely common that web services only used about 1/20th of a CPU core's timeshare. These were dynamic web services/APIs. (We did have to allocate more CPU than that in practice to improve I/O latency, but that was to let the CPU sit idle so it was ready to react quickly to incoming network traffic.)

This was many years ago on hardware several times slower than the current generation of servers.


There go all your software engineering classes. So bare it's hilarious


I wouldn't call that a special case, just using a good tool for the job.


... which, again, shows just how much power you can get out of a 10-year-old server if you're not being a sucker for the "latest and greatest" resume-driven-development crap.

Just look at New Reddit, it's an insane GraphQL abomination.


Every time a dev discovers how tremendously bloated and slow modern software is, an angel gets its wings.


Modern CPUs are stupid fast when you use them the right way. You can take scale-up surprisingly far before being forced to scale out, even when that scale out is something as modest as running on multiple cores.


Based on context, you are insinuating that a discussion board like HN _can_ be hard on the CPU alone? If so, how? My guess would _also_ be that the CPU would have little to do by itself, and that I/O would take the brunt.


Negotiating TLS handshakes is one way. But I'd imagine the rest is largely IO-bound like you said.

It still puts into perspective what a big pile of dogshit consumer software has become, that stuff like this comes as a surprise. The last time I checked, Let's Encrypt also ran on a single system. As did the Diablo 2 server. (I love reading about these anecdotes.)

For every incremental change in HW performance, there is an order-of-magnitude regression in SW performance.


If nothing else, handling interrupts from the NIC to pull packets out of its receive buffer, though that should usually be isolated to a couple of cores.

Also, re: I/O, the CPU usually also has to handle interrupts there, as well as whatever the application might be doing with that I/O.


> If nothing else, handling interrupts from the NIC to pull packets out of its receive buffer,

Interrupts? Interrupts? We don't need no stinking interrupts! https://docs.kernel.org/networking/napi.html#poll


Servers can also serve small text files out of memory incredibly fast.


Most apps aren’t suffering from computation. They suffer from I/O


I was going to reply that this is pretty common for web apps, e.g. NodeJS or many Python applications also do not use multi-threading, instead just spawning separate processes that run in parallel. But apparently, HN ran as 1 process on 1 core on 1 machine (https://news.ycombinator.com/item?id=5229548) O_O


HN is not really that much of a workload. Links with text-only comments, each link gets a few hundred comments at most, and commenting on stories ends once they are old enough.

Probably everything that's current fits easily in RAM and the older stories are candidates for serving from a static cache.

I wouldn't say this is an astounding technical achievement so much as demonstrating that simplicity can fall out of good taste and resisting groupthink around "best practices".


I think NodeJS apps typically rely on the JavaScript event loop instead of starting new processes all the time.

Spawning new processes for every user is possible but would probably be less scalable than even thread-switching.


> I think NodeJS apps typically rely on the JavaScript event loop instead of starting new processes all the time.

> Spawning new processes for every user is possible but would probably be less scalable than even thread-switching.

I’d just like to note/clarify that there is, in fact, multi-threading happening under the hood when running Node.js. libuv, the underlying library used for creating and managing the event loops, also creates and maintains thread pools that are used for some concurrent and parallelizable tasks. The fact that JavaScript (V8 in the case of Node.js) and the main event loop are single-threaded doesn’t mean that multi-threading isn’t involved. This is a common source of confusion.
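
You can watch the pool do its thing with something CPU-bound that Node offloads, like crypto.pbkdf2 (a minimal sketch; the mechanism is real, the timing comment is approximate):

    import { pbkdf2 } from "node:crypto";

    // The JS here never leaves one thread, yet these four hashes overlap:
    // pbkdf2 is dispatched to libuv's thread pool (default size 4), and only
    // the completion callbacks come back through the event loop.
    console.time("4x pbkdf2");
    let pending = 4;
    for (let i = 0; i < 4; i++) {
      pbkdf2("secret", "salt", 1_000_000, 64, "sha512", (err) => {
        if (err) throw err;
        if (--pending === 0) console.timeEnd("4x pbkdf2"); // ~1 call's time, not 4
      });
    }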


NodeJS apps usually use multiple processes, since the JS event loop is limited to a single core. However, this means that you cannot share data and connection pools between them.
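
The built-in cluster module is the usual shape of this (a minimal sketch):

    import cluster from "node:cluster";
    import { createServer } from "node:http";
    import { availableParallelism } from "node:os";

    if (cluster.isPrimary) {
      // One worker per core; the primary just forks and supervises.
      for (let i = 0; i < availableParallelism(); i++) cluster.fork();
    } else {
      // Each worker is a separate process with its own heap, so any cache
      // or DB connection pool here is duplicated, never shared.
      createServer((req, res) => res.end(`pid ${process.pid}\n`)).listen(3000);
    }

(The primary distributes incoming connections to the workers, which is why they can all "listen" on the same port.)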



Yet GitHub can't show more than a dozen comments on the same page, needing you to click "view more" to bring them in 10 at a time.

HN is an island of sanity in a sad world.


In fairness, HN wouldn't show more than what, twenty-ish thread roots at a time, requiring you to click "more" to bring in more... which could contain the same set of thread roots you'd been looking at, depending on upvote activity.

(I assume that this update has removed that HN restriction, but haven't bothered to go look to verify this assumption.)


The update appears to have come with unlimited or much higher page size. I don't think anyone has found a thread that is still split into multiple pages.


It's amazing what's possible when you don't use microservices


Text-only processing is amazingly fast, as are static websites. JavaScript is heavy, man.


Yes and this is just garden-variety abstraction and toolmaking, which is what programmers have done since the very beginning

