MarkusWandel's comments

Who cares if it's AI-made or not. It's funny, even as it is offensive to many. One thing I've always wondered, given the outrage over ads like this: would this (old) ad now be considered funny or offensive?

https://youtu.be/dBqhIVyfsRg?si=WQ44V1CXuuFMGnca


And maybe restrict them to actual bikes. Not 200 lb electro-motorbikes masquerading as fatbikes, or narrow delivery vans.

As a frequent "acoustic" cyclist, I'm torn on this. E-bike adoption is clearly building support for bike infrastructure, so I think I'd rather deal with the occasional heavy e-bike whizzing by me if the alternative is sharing the road with cars.

I'm old enough to remember the SGI tractor-trailer demo wagon where they were demonstrating their latest $20K+ wares, which could do immersive 3D graphics that were... crude compared to this. This was sometime in the 1990s.

And by now my kids play fluidly immersive 3D games, on the web, on the kind of computers you can get for $10 off Facebook Marketplace.


I'm not a watch person, and the only reason I occasionally wear one is because it's a Garmin and I'm recording an outdoor activity with it. But while cycling, when the phone in my backpack makes a notification noise, it is kind of handy to just be able to look at the watch to see what kind of notification it is - the kind hardly worth looking at, or the kind worth stopping and pulling out the phone to reply to. This particular gadget doesn't have a microphone, and any app interaction on it involves a multi-button dance. But if there were a single-button audio-recording app, I could totally see myself using it. Especially as you get older (I'm guessing - can't interview younger me), fleeting thoughts can be awfully fleeting.

> if there were a single-button audio-recording app

Would the watch require 1 or 2 hands to record audio?

The ring seems easier to use while biking. Or driving.


It's easy to use a Pebble while driving. Not like you can't take one hand off the wheel briefly to hit a button.

Can't you just say "Hey Siri, add a note"? I add reminders, send texts, etc. that way.

Yeah, though there are lots of places where you can't speak out loud because it's disruptive to others. Personally I set a lot of Siri reminders, but it's weird and uncomfortable to talk loudly at your phone in public spaces, so I can only use it at home or outdoors. If the ring can follow through on the promise of being able to whisper to it, that's fairly valuable IMO.

You can whisper to Siri on your phone or your watch too - that’s not a ring exclusive.

Definitely, but Siri is terrible at understanding quiet speech and you have to hold your phone/watch right up to your mouth. A ring is definitely a much nicer form factor for that.

There is the small matter of a 3-4dB price difference between this ring gadget and your "Hey Siri" capable watch.

There is another way to go about it - ignoring the phone completely while enjoying biking or jogging. Or leaving it at home. Unless I am on call duty, all my notifications can wait.

My notification volume is relatively low, but there is one kind of notification, involving a certain significant other (we have high-needs kids), that requires action. So if the phone rings or there is a text message, for example, seeing at a glance (no button presses) what number it is from definitely allows a stop/continue decision. I wish I had a life where I could leave the electronic leash at home, or at least on mute.

On iOS, and probably on Android though I haven't checked, you can choose to only be notified of messages from certain contacts (in iOS it's in Focus Modes). This may help.

Understood.

My original Pebble purchase decision was made thinking "This will be great, I can use it while commuting to see who's calling or messaging my phone, so I can decide whether to pull over to respond or not!" I had a 35-40 minute each-way commute where I rode a motorcycle every day back then.

Turns out, the number of times I pulled over to return a call or message was precisely never. There was nothing so important that I could do anything about it by the side of the road, or that couldn't wait half an hour till I got to work/home.


Wait.. if you need to push a button and then have to get it somewhat close to your mouth to record an audio clip, why not just have a watch app? Pressing the button is still a two-hand thing. Or did I miss something?

You put the ring on your index finger (and likely have it rotated so that the button is pointing to your thumb) and then press the button using the thumb on your same hand. That allows it to be one-handed.

The present isn't all that cute either. But if from the view of 100 years in the future, all you saw was the idealized lives of everyone as posted on social media, you'd think it was a lost, happy time too. That's how nostalgia works. You preserve the good stuff, you let the boring and crappy stuff be forgotten. At least relatively.

If the Canterbury Tales had been actually representative of the time in which they were written, it would not have been the Knight's Tale, the Miller's Tale, the Reeve's Tale, etc. It would have been the Subsistence Farmer's Tale, the Subsistence Farmer's Tale, the Subsistence Farmer's Tale, etc.

The Canterbury Tales were trying to show the viewpoints of people from a wide variety of social backgrounds.

But in this world the distribution of social backgrounds in the population had very low entropy. Over 90% of people were just working the land.

If Chaucer had just had subsistence farmer after subsistence farmer telling their tales it would be very boring. Even in such a society, there were a number of different roles at the bottom of it. Some people were shepherds or drovers rather than just scraping the soil, others were producing cloth and clothing, others were fishermen, others were midwives and so on. All these were useful at the time and used by peasants.

Yes, Malthusian society was very boring, that's my point.

Whoa, what? If the car loses its cloud connection (in this case almost literally) it bricks itself? And anyone with mechanic level knowledge can unbrick it and detach it from the cloud service?

So the average user gets the worst of both worlds. A car that can malfunction if its cloud service does and effectively no protection against thieves armed with some technical expertise.


The car is a Porsche... not justifying a shutdown of any brand, just thinking of the original obsessive designers and how cloud dependencies like this undermine the effort they put into the mechanics.

I always say "on a scale from no canoe to a $5K canoe, even the crappiest canoe is 80% of the way there". This camera illustrates that for vision. When you hear about those visual implants that give you, say, 16x16 grayscale you think that's nothing. Yet 30x30 grayscale as seen in this video, especially with live updates and not just a still frame is... vision. Not 80% of the way there, but does punch way above its weight class in terms of usefulness.

The moment you add motion and temporal continuity, even a postage-stamp image turns into something your brain can work with.

The brain really is quite a machine. I've personally had a retinal tear lasered. It's well within my peripheral vision, and the lasering of course did more damage (but prevents it from spreading). How much of this can I see? Nothing! My peripheral vision appears continuous. Probably I'd miss a motion event visible only to that eye, in that particular zone. Not to mention the enormous number of "floaters" one gets, especially by my age (58). Sometimes you see them, but for the most part the brain just filters them out.

Where this becomes relevant is when you consider depixellation. True blur can't be undone, but pixellation without appropriate antialiasing filtering...

https://www.youtube.com/watch?v=acKYYwcxpGk

So if your 30x30 camera has sharp square pixels with no antialiasing filter in front of the sensor, I'll bet the brain would soon learn to "run that depixellation algorithm" and, just from the natural motion of the camera, learn to recognize finer detail. Of course that still means training the brain to recognize 900 electrodes, which is beyond the current state of the art (but 16x16 pixels aren't, and the same principle can apply there).
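
For the curious, here's a minimal numpy sketch of that idea (shift-and-add super-resolution; the scene, sizes, and offsets are all made up for illustration): simulate a coarse, aliasing sensor looking at a detailed scene from slightly different subpixel positions, then accumulate the frames back onto a fine grid.

    import numpy as np

    def downsample(hi, factor, dy, dx):
        # Simulate a coarse sensor: shift the scene by (dy, dx) fine
        # pixels (wrap-around, good enough for a sketch), then average
        # non-overlapping factor x factor blocks -- sharp square
        # pixels, no antialiasing filter. Assumes sizes divide evenly.
        shifted = np.roll(np.roll(hi, dy, axis=0), dx, axis=1)
        h, w = shifted.shape
        return shifted.reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

    def shift_and_add(frames, offsets, factor):
        # Paste each low-res frame back onto the fine grid at its known
        # subpixel offset and average; detail beyond the nominal sensor
        # resolution re-emerges from the aliasing.
        acc = None
        for lo, (dy, dx) in zip(frames, offsets):
            up = np.kron(lo, np.ones((factor, factor)))  # nearest upscale
            up = np.roll(np.roll(up, -dy, axis=0), -dx, axis=1)
            acc = up if acc is None else acc + up
        return acc / len(frames)

    rng = np.random.default_rng(0)
    scene = rng.random((120, 120))      # stand-in for a detailed scene
    factor = 4                          # a 30x30 "camera"
    offsets = [(int(rng.integers(factor)), int(rng.integers(factor)))
               for _ in range(32)]      # natural camera jitter
    frames = [downsample(scene, factor, dy, dx) for dy, dx in offsets]
    recon = shift_and_add(frames, offsets, factor)

This only works because the sharp square pixels alias; with true optical blur in front of the sensor, the fine detail is destroyed before sampling, which is exactly the distinction the comment draws.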


It would be interesting to see how far you could push that. I bet just two scanlines side-by-side would be enough for complete imaging. Maybe even just one, but that would require a lot more pre-processing and much finer control over the angle of movement. Congrats on the positive outcome of that surgery, that must have been pretty scary.

In some situations, you can trade resolution for frequency and maintain quality. 1-bit audio is a thing:

https://en.wikipedia.org/wiki/Direct_Stream_Digital
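
For intuition, a first-order delta-sigma modulator is only a few lines. Real DSD uses a much higher-order modulator; this toy version (names and parameters mine) just shows the resolution-for-rate trade:

    import numpy as np

    def delta_sigma_1bit(x):
        # First-order delta-sigma: integrate the error between the
        # input and the previous 1-bit output, then quantize the
        # integrator to +/-1. Quantization noise gets pushed up in
        # frequency, out of the audio band.
        out = np.empty_like(x)
        integ = 0.0
        for i, sample in enumerate(x):
            integ += sample - (out[i - 1] if i else 0.0)
            out[i] = 1.0 if integ >= 0 else -1.0
        return out

    rate = 2_822_400                        # DSD64: 64 x 44.1 kHz
    t = np.arange(rate // 100) / rate       # 10 ms of signal
    x = 0.5 * np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone
    bits = delta_sigma_1bit(x)              # the 1-bit stream

    # A crude decoder: low-pass filtering (here a moving average)
    # recovers an approximation of the original tone.
    decoded = np.convolve(bits, np.ones(64) / 64, mode="same")

Averaging runs of 64 one-bit samples back down toward 44.1 kHz recovers roughly the original tone, because the quantization noise mostly lives above the audio band.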


Back in the Apple ][ days, the timing for writing to the cassette port and to the speaker was identical (just poking different addresses), so you could load audio from cassette using the ROM routines to read a program from tape, then use a modified routine that wrote to the speaker address instead of the cassette port address to play back the audio at 1-bit quality. It kind of sounded like something recorded off AM radio.

I also remember a lot of experimenting with timing to try to get a simulation of polyphonic sound by trying to toggle the speaker at the zeros of sin(ω₁t) + sin(ω₂t).
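
That trick amounts to playing sign(sin ω₁t + sin ω₂t): a 1-bit speaker can only be in or out, but flipping it at every zero crossing of the mix preserves both fundamentals, at the cost of heavy harmonic distortion. A little numpy sketch, with made-up frequencies:

    import numpy as np

    rate = 22050
    t = np.arange(rate) / rate          # one second of samples
    mix = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)

    # Toggling the speaker at each zero crossing of the mix is the
    # same as outputting its sign.
    speaker = np.sign(mix)

    # Both tones survive the hard clipping: the spectrum of the 1-bit
    # signal still has clear peaks at 440 and 660 Hz, plus lots of
    # intermodulation and harmonic products.
    spectrum = np.abs(np.fft.rfft(speaker))
    freqs = np.fft.rfftfreq(speaker.size, 1 / rate)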


Diminishing returns explained through canoes :)

16x16 sounds really shit to me, who still has perfect vision, but I bet it's life-changing to be able to identify the presence/absence of stuff around you and such! Yay for technology!


This kind of thing is really held back by BCI tech.

By now, we have smartphones with camera systems that beat human eyes, and SoCs powerful enough to perform whatever image processing you want them to, in real time.

But our best neural interfaces have throughput close to that of a dial-up modem, and questionable longevity. Other technological blockers were cleared in leaps and bounds, but the SOTA in BCI today is not that far from where it was 20 years ago. Because medicine is where innovation goes to die.

It's why I'm excited for the new generation of BCIs like Neuralink. For now, they're mostly replicating the old capabilities, but with better fundamentals. But once the fundamentals - interface longevity, ease of installation, ease of adaptation - are there? We might actually get more capable, more scalable BCIs.


> Because medicine is where "move fast and break things" means people immediately die.

Fixed the typo for you.


Not moving fast and not breaking things means people die slowly and excruciatingly. Because the solutions for their medical issues were not developed in time.

Inaction has a price, you know.


It has a price for the person with the condition. For the person developing the cure it does not (except perhaps opportunity cost, money not made that could have been), whereas killing their patients can have an extremely high one.

The majority of treatments people ever thought up and think up today are somewhere between useless and terrible ideas. Whatever you think is looking so exciting right now, there have been a million other things that looked just as exciting before. They do not look exciting anymore.

We didn't come up with these rules around medical treatments out of nowhere, humanity has learned them through painful lessons.

The medical field used to operate very differently and I do not want to go back to those times.


You’re starting to sound terrifyingly like an unethical scientist. We know how that ends, we’ve been down that road before, and we know why it is a terrible idea.

There is a lot of space between “persons with a debilitating condition are prohibited from choosing to take a risky treatment that might help” and “hey let’s feed these prisoners smallpox-laced cheese for a week and see what happens”.

The “no harm, ever” crowd does not have a monopoly on ethics.


A straw man argument? On my HN?

To anyone wondering:

BCI == Brain-computer interface

https://en.wikipedia.org/wiki/Brain–computer_interface


Mind-reading technology has already arrived: radiomyography and neural networks deciphering EEGs.

Not really. Non-invasive interfaces don't have the resolution. Can't make an omelet without cracking open a few skulls.

they do read my mind at least to some extent -> "The paper concludes that it is possible to detect changes in the thickness and the properties of the muscle solely by evaluating the reflection coefficient of an antenna structure." https://ieeexplore.ieee.org/document/6711930

It is a good illustration of something like Moore's law, for a coming end point where a handheld device will have more than enough capability and capacity to do ANYTHING a mere mortal will EVER require, and the death of options and features, and a return to personal autonomy and responsibility.

AI is the final failure of "intermittent" wipers, which, as in my latest car, are irrevocably enabled to smear the road grime and imperceptible "rain" into a goo, blocking my ability to see.


On my car the automatic features - automatic headlight on/off, automatic high beam, automatic lock when you walk away etc. - work well enough to be a plus (except when you switch to the other, older car that doesn't have those features).

But the cruise control. You cannot disable the adaptive part. It's either adaptive cruise control or nothing. And at night, the camera-based system sometimes gets confused in the mess of over-bright headlights coming the other way and brakes for no reason. So you end up not being able to use cruise control at all.

I guess the reason they make it that way is exactly the habit problem. You don't want people crashing into the car in front because they forgot that the cruise is not in adaptive mode.

Automatic wipers would be great (if optional to enable!). After all, that same camera that makes all the other wizardry work can see raindrops on the windshield in front of it. So why don't they make that feature?
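
Purely as a thought experiment (this is not how any production rain sensor works, and every threshold here is invented): the road scene moves but droplets stuck to the glass don't, so pixels with low temporal variance in an otherwise-changing image are candidates for "something on the windshield". A rough OpenCV sketch:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)       # stand-in for the windshield camera
    history = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        history.append(gray)
        if len(history) < 30:       # need ~1 s of frames first
            continue
        history.pop(0)
        stack = np.stack(history)
        variance = stack.var(axis=0)
        scene_moving = variance.mean() > 50.0   # invented threshold
        stuck = (variance < 5.0).mean()         # fraction of static pixels
        if scene_moving and stuck > 0.02:       # invented threshold
            print("possible droplets on glass -> trigger a wipe")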


True. Then we cross a threshold where things that weren't even thought possible become reachable, and we're back on the treadmill.

That's what we're having with VR: we came to a point where increasing DPI for a laptop or phone seemed to make no sense; but that was also the point where VR started to be reachable, and there a 300 DPI screen is crude and we'd really want 3x that pixel density.


Use the washer button to spray the windshield with water and help the goo wipe off.

Yes, obviously, but my point is that I am now tasked with helping the "feature" limp along whenever it lurches, unexpectedly, into action, thereby ADDING distraction, which if you read the ancient myths and legends is one of the main methods that evil spirits and daemons use to undermine and defeat the unwary... and lull them into becoming possessed hosts for said entities.

who's working for who here anyway?

already?


On what car can you not disable auto wipers?

All the categories on the right side look perfectly reasonable. However, what fraction of those big categories on the right (Hospitals and Physicians, which make up over half of the total) is siphoned off to pay for administratium or "shareholder value", and what fraction actually pays for medical care delivered?

I'd really love to see the breakdown between how much we spend on physicians/doctors vs. caretakers (nurses, therapists, etc.) vs. how much on hospital admin and other stuff.

At least in the UK's chart, "GP & Primary Care", "Private GP Services" and "Administration" are separated. Same in Germany.


If by shareholder value you include insurance companies etc not just the institutions themselves, it’s well over half.

Doctor time talking to an insurance company, either directly or through paperwork, is not actually providing any care during that time. Where things turn vicious is that doctors are now so inefficient that the time they are actually useful becomes increasingly valuable, driving ever more paperwork to justify that time.


Physicians spend very little time directly dealing with insurance companies. They (or their employers) hire back office administrative staff for that. This is somewhat wasteful and inefficient but it doesn't directly impact the time that clinicians have available for patient care. Usually the physicians only have to get directly involved in rare peer-to-peer consults with health plans for complex cases that fall outside of normal clinical practice guidelines.

It's the indirect time that's most at issue here. A friend of mine is happy to take a significant pay cut and work at the VA to avoid the hassle. It's a government institution and he still feels way more productive and less stressed.

In private practice a physician now needs to handle extra employee(s), and there's often a range of software system issues, etc. Even in a hospital setting, where other people are handling most of this stuff, that's a long way from zero friction.

The insidious issue is that even a 1% drop in doctor efficiency gets magnified by everyone taking a cut of the transaction.


Yes, the diagrams are deeply flawed, in that they seem to suggest 100% of the money input to the system goes to hospitals, hospices, health workers, and so on.

I don't see a single outcome pointed at insurance companies... somehow.


The article takes this on a couple times:

> The outcome is $4.9T - which would make it the 3rd largest economy in the world, a high 8% admin costs - compared to the UK’s 2% admin, with medical bankruptcy still possible. We’ve never agreed on what we value. So we built a system that embodies our disagreement: employer-based coverage (market choice) plus Medicare (social insurance) plus Medicaid (safety net) plus exchanges (regulated markets).

> Decision #1: Workers pay at least twice

Here’s the first thing that jumps out: if you work a job in America (and you presumably do, to afford the internet where you’re reading this), you’re already paying for healthcare in multiple places on this chart:

    Taxes: federal, state, and local taxes finance Medicare, Medicaid, and various public health programs in so many places. Our attempt at embedding it in single payer.

    Payroll: if you’re employed, your employer pays taxes on Medicare (even though you presumably can’t use it until you retire at 65). This is a cost that doesn’t go to your salary.

    Insurance premiums: get deducted from your paycheck to fund the employer group plans ($688B from employees alone).

> Could America make this choice? Technically, yes. Politically, we’d need to agree that healthcare is a right we owe each other, funded collectively through taxes. That would mean massive tax increases, eliminating private insurance as the primary system, and trusting a single federal agency.

The operational resistance alone would be too much: I’ve watched hospital execs squeeze out thinning margins and payer executives navigate quarterly earnings calls. We’re talking about unwinding a $1T+ private insurance industry, reconfiguring every hospital’s revenue model, and convincing Americans to trust the federal government with something they currently (sort of) get through their jobs. That ship didn’t just sail - it sank decades ago.


We've unwound industries before - if we have the political will we can do amazing things.

But the people in and using those industries have no desire to change, so anything that does happen is likely to occur slowly through expansion - e.g., bringing Medicare to earlier and more people, expanding children's coverage, etc.


I think there's some significant evidence that "users" of the private healthcare industry are actually quite unhappy about it. Due to some structural quirks of the US, it seems like it's less about how many people are happy or unhappy and more about how many dollars are spent in favor or opposition of a change.

Those are probably the easiest levers for simplification, but I'd expect significant pushback anyway. Something like starting Medicare at 55 would be a huge difference to most providers, just due to changes in reimbursement.

There's always pushback - the key is to identify who will push back, what their motivations are, and if you can do something to "disarm" them.

Nothing wrong with this. Some applications really are compute bound and don't need much RAM, such as a homemade surveillance camera system I have, presently running on a couple of Raspi 4s. Suppose I wanted to upgrade to Raspi 5, why spend extra money on RAM that's not needed? These things run headless with the only GUI exposed via web server.
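
For illustration, the whole "headless, GUI via web server" pattern can fit in a few dozen lines. A minimal sketch using OpenCV and Flask (device index and port are placeholders, not what my actual system uses); memory use is dominated by one or two frame buffers, which is why extra RAM buys nothing here:

    import cv2
    from flask import Flask, Response

    app = Flask(__name__)
    cam = cv2.VideoCapture(0)   # the camera, exposed as /dev/video0

    def mjpeg():
        # Yield an endless multipart stream of JPEG frames. Only one
        # or two frames are ever held in memory, so RAM needs stay tiny.
        while True:
            ok, frame = cam.read()
            if not ok:
                continue
            ok, jpg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                   + jpg.tobytes() + b"\r\n")

    @app.route("/stream")
    def stream():
        return Response(mjpeg(),
                        mimetype="multipart/x-mixed-replace; boundary=frame")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)  # headless: the GUI is a browser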
