Trains cancelled over fake bridge collapse image (bbc.com)
239 points by josephcsible 22 hours ago | 199 comments




I think we’re just getting started, with fake images and videos.

I suspect that people will be killed because of outrage over fake stuff. Before the Ukraine invasion, some of the folks in Donbas staged a fake bombing, complete with corpses from a morgue (with autopsy scars)[0]. That didn't require any AI at all.

We can expect videos of unpopular minorities, doing horrible things, politicians saying stuff they never said, and evidence submitted to trial, that was completely made from whole cloth.

It’s gonna suck.

[0] https://www.bellingcat.com/news/2022/02/28/exploiting-cadave...


> We can expect videos of unpopular minorities, doing horrible things

Expect? You can post a random image of an unpopular minority, add some caption saying they did horrible things, that is not reflected in the image at all, and tons of people will pile on. Don’t even need a fake video.


What people tend to forget or dismiss is that everything in society is based on trust. You can of course try to test every new fact for soundness against all the facts you already know, but fundamentally you need to trust something, and in the end it just becomes a function of how much you trust the messenger. Undoing trustworthiness is a big issue and will lead to a lot of unrest in society.

The reason you are not murdered today is not because murder is harshly punished or hard to do; it is because most people aren't murderers. If they were, we wouldn't be able to suppress it with force; we would simply live in hell.


For some reason this hurts worse.

I was listening to James O’Brien on LBC, and [IIRC] he said he was serving jury duty with a woman who was convinced that Volodymyr Zelenskyy had spent hundreds of millions of dollars on a super-yacht.

He asked if she had any evidence for that claim, and she produced a picture of a boat.

He said “That’s just a picture of a boat.”


Ironically, there is no evidence that woman ever said that.

There is, in fact, evidence that hundreds if not thousands of random people have said that: https://xcancel.com/KimDotcom/status/1729171832430027144

Perhaps you could even find that specific woman leaving an outraged comment over photos of boats if you looked hard enough!


Yes, but in that story, parent only has the word of that journalist. I personally don't even have that; I only have a post about it.

My deeper point is that it's arguably very difficult to establish a global, socially acceptable lower threshold of trust. Parent's level is, apparently, the word of a famous journalist in a radio broadcast. For some, the form of a message alone makes the message worthy of trust, and AI will mess with this so much.


Whether you trust the word of the journalist has little relation to the story. The "socially acceptable lower threshold of trust" is not static for all stories; it changes depending on the stakes of the story.

Non-consequential: A photo of a cat with a funny caption. I am likely to trust the caption by default, because the energy of doubting it is not worth the stakes. If the caption is a lie, it does nothing to change my worldview or any actions I will ever take. Nobody's life will be worse off for not having spent an hour debunking an amusing story fabricated over a cat photo.

Trivially consequential: Somebody relates a story about an anonymous, random person peddling misinformation based on photos with false captions on the internet. Whether I believe that specific random person did it has no bearing on anything. The factor from the story that might influence one's worldview is the knowledge that there are people in the world who are so easily swayed by false captions on photos, and that itself is a trivially verifiable fact, including other people consuming the exact photo and misinformation from the story.

More consequential: Somebody makes an accusation against a world leader. This has the potential to sway the opinions of many people, feeding into political decisions and international relations. The stakes are higher. It is therefore prudent not to trust without evidence of the specific accusation at hand. Provenance of evidence also matters; not everything can be concretely proven beyond a shadow of a doubt. We should not trust people blindly, but people who have a history of telling the truth are more credible than people who have a history of lying, which can influence what evidence is sufficient to reach a socially acceptable threshold of trust.


The point about the stakes is a good one. But there is an individual factor to it. And maybe it's exactly because of the stakes you mention: if you perceive your personal stakes to be low, or might even gain something from redistributing the message, fabricated or not, your threshold might be low as well.

> > Trivially consequential: Somebody relates a story about an anonymous, random person peddling misinformation based on photos with false captions on the internet. Whether I believe that specific random person did has no bearing on anything.

> The point about the stakes is a good one. But there is an individual factor to it.

Indeed. The so called "trivially consequential" depends on whether you're the person being "mis-informationed" about or not. You could be a black man with a white grandchild, and someone could then take a video your wife posted of you playing with your grandchild, and redistribute it calling you a pedophile, causing impact to your life and employment. Those consequences don't seem trivial to the people impacted.

True story: https://www.theguardian.com/world/2025/aug/20/family-in-fear...


This is a complete and total misrepresentation of what I said. The key point here is that the "accused" in the trivial story is anonymous. They are fungible. Their identity is irrelevant to the story; it is merely an anecdote about the fact that a person like this exists, and people who exhibit the exact same behaviour as them verifiably do exist, so there is nothing to be misinformed about. A tangible accusation against a specific individual is completely different, and obviously is consequential.

Who cares about a single or two Yachts. Ukraine likely made 100 billion USD disappear and there were many people expecting just that. Just like some of the "donated equipment" started showing up on all sorts of black markets once it was shipped to Ukraine. It's just the obviously controlled media in Europe that stopped mentioning Ukraine's corruption issues right after February 2022.

Obviously I can only be a Putin-loving propaganda bot for saying such things.


Corruption in Ukraine is constantly in the news. https://www.economist.com/search?q=ukraine+corruption&nav_so...

Ukraine was, and still is, one of the most corrupt developed countries in the world. Whether it is slightly more or slightly less corrupt than Russia I do not know. Both are oligarchic in nature. In my opinion, one of the reasons the various peace deals have not succeeded yet is that they fail to acknowledge the oligarchic nature of both states and that both will need to continue in that mode going forward, probably as a frozen conflict or in a system where it is in neither's interest to disrupt the balance (because doing so would end the corruption, pocket-lining, theft, etc.). Of course, the ordinary unfortunate Ukrainian matters little or not at all to rulers on either side.

Everybody is aware that Ukraine has major corruption issues. It is frequently covered in the media and is common knowledge.

I have no doubt, however, that Europe (and hopefully the wider world) is less worried about that corruption than about Russian military aggression. Media focus will reflect that priority, rightly so; the focus should be on grinding the Russian kleptostate into dust as quickly and thoroughly as possible.

You're not a propaganda bot; you're just making their lives easier.


Where does the corruption come from?

It comes from an old culture that Ukraine is trying to extract itself from, hence the large number of corruption charges we see.

The same culture is incidentally what makes Russia one of the most corrupt countries in the world.


If you're happy with your tax euros disappearing in Ukraine, good for you.

I know for a fact via family ties that major newsrooms in Germany received instructions to tune out the corruption angle once the war started. I'm sure it's all nothing though and that Putin will find himself in Poland next year. Of course!


this is true, my dad is Volodymyr Zelensky

What's your point though? There's corruption in Ukraine. Ok.

There's corruption in your country too, do you refuse to pay taxes? Or do you still pay them because some good comes with the little bad? Same deal.


If sending hundreds of billions of taxpayer money to a known oligarch-run kleptocracy seems comparable to you to some German conservative party affiliate making a couple of millions on shady COVID mask deals, I rest my case.

It's all corruption in the end so who cares, right?


Two things can be true at the same time - we don't want Russia to absorb Ukraine and then further threaten the eastern border of the EU, and we don't want Ukraine to be corrupt.

And in Ukraine we see that corruption is uncovered and punished, even when it is in the direct circles of the president.

There are problems in uncovering it, but the attempt to get rid of corruption is a big factor in the whole situation and one of the things Russia fears.

For Russia a corrupt system was a lot simpler to influence, and Ukraine showing how a partially Russian-speaking country, one whose people moved back and forth, could fight corruption was a threat to that system.


> to tune out the corruption angle once the war started

Oh man, wait until you hear about what’s going on in the US, we’re experiencing corruption to a degree you can’t even imagine.


Hah, Kim Dotcom is still around? In the 90s he was bragging that he's this super hacker that made millions, his website posted pics of parties, cars, girls, and yachts, and it turned out those were bought/rented using swindled investor money (ironic that he's accusing Zelensky of the same crime). Then he became a sort of hero when the US/NZ governments Team 6-ed his house for the crime of aiding copyright infringement.

Now he's a Putin/Trump apologist...


Still around, and at a huge cost to the New Zealand taxpayer. People used to have some sympathy, but public opinion turned against him years ago. His extradition was declared lawful; it's long overdue that he be put on a plane and made somebody else's problem.

The people of New Zealand should be mad at the illegal tactics used by the FBI and GCSB. And why should he be extradited to a country he never visited?

In fact, this is already happening on a daily basis.

AI ain't the problem here; so-called social media are.


Well, some news organizations are more than willing to spread fake news as well, so it's hardly limited to social media. I think it's just in-group vs out-group mentality, and a need to hate.

It was not happening on a daily basis on Twitter before Elon Musk. The endless flow of racism and bigotry on that website is a choice.

It's convenient to blame the amorphous thing "social media" instead of the actual people responsible. There are only a handful of them: Elon Musk, Mark Zuckerberg, etc.

And stopping it is simple. It's a choice.


It was, but it wasn't pushed into everyone's filter bubble.

In my opinion, this isn't a problem of AI. The people who get deceived by this are willing participants in the lie. When proven wrong, they will fall back to the echo chamber and rely on it to give them more false facts. They won't seek information outside of their own circle. They cannot be understood as merely passively misinformed. They are actively lying to themselves.

What you'll tend to notice with "willing participants" is that they're not looking for truth, they're looking for confirmation. No-one asks for proof when you tell them what they want to hear.

> You can post a random image of an unpopular minority, add some caption saying they did horrible things, that is not reflected in the image at all, and tons of people will pile on.

We call this journalism and this is a respectable profession. /s


So far, I see the most concern about this sort of thing from people who came of age around or after Web 2.0 hit, at a time when even a good photoshop wasn’t too hard to spot as fake.

Those I know who lived through this issue when digital editing first became cheap seem more sanguine about it, while the younger generation on the opposite side is some combination of “whatever” or frustrated, but accepts that yet another of countless weird things has invaded a reality that was never quite right to begin with.

The folks in between, roughly ages 20 to 40, are the most annoyed, though. The eye of the storm on the way to proving that cyberpunk lacked the imagination required to properly calibrate our sense of when things were going to really get insane.


In my family it's the other way around - it's the people that used to tell us not to talk to strangers on the internet, and not to believe everything we see on the internet, who are now doing precisely that.

People were able to make very realistic fakes of anything 10-20 years ago, using basic tools. Just ask the UFO nuts or the NSFW media enthusiasts. And like what you mentioned, staged scenes have become somewhat common as well, including before the internet.

We can expect more of the same. Random unverified photo and video should not be trusted, not in 2005, not in 2015, and not today.

I believe that this "everything was fine but it's going to get really bad" narrative is just yet another attempt at regulatory capture, to outlaw open-source AI. This entire fake bridge collapse might very well be a false flag to scare senile regulators.


Motivated people (nation states) were able to do this even a hundred years ago. The issue is simply that most people didn't do it.

Oh heck yes. One India-focused study that I saw introduced me to the term Cheap Fakes. Another report studied how genAI made phishing pipelines more efficient, allowing profitable targeting of groups who hitherto were too poor to be targeted.

So on one end you have large-scale pollution of the information commons, and on the other we are now creating predator pipelines to generate content with all the efficiency of our vaunted AI productivity. It's creating a dark forest for normal people to navigate, driving more government efforts to impose control. This in turn comes into conflict with freedom of speech and expression while dovetailing nicely with authoritarian tendencies.

Yes, it's heartening to hear from all the people who find productivity gains from AI, but in totality it feels like we got our wishes granted by the Evil Genie.


> One India focused study that I saw, introduced me to the term Cheap Fakes

Wasn't that term entirely invented by the Democratic party to dismiss videos of Biden's "senior moments"?

I'm curious if the term predates that or maybe you're not in the US?


I heard it this year, but it's a term that, once you hear it, you "get". Definitely not an India-only thing.

> We can expect videos of unpopular minorities, doing horrible things

While manipulated photos exist, and misattributed real photos are very common, for the most part a lot of what they depict does happen as well. And some people are too quick to ignore or gloss over it.


I'm sure this will be exactly the popular attitude - yes, the evidence and videos people see and form emotional reactions based on are fake, but the problem is real and so we should let it slide and just assume something exactly like the AI video happened anyway.

>We can expect videos of unpopular minorities, doing horrible things, politicians saying stuff they never said, and evidence submitted to trial, that was completely made from whole cloth.

AI videos of unpopular minorities already comprise an entire genre and AI political misinformation is already mainstream. I'm pretty sure every video of Donald Trump released by the WH is AI generated, to make him look less senile and frail than he really is. We're already there.


What this incident really shows is the growing gap between how easy it is to create a convincing warning and how costly it is to verify what's actually happening. Hoaxes aren't new, but generative tools make fabrication almost free and massively increase the volume.

The rail operator didn't do anything wrong. After an earthquake and a realistic-looking image, the only responsible action is to treat it as potentially real and inspect the track.

This wasn't catastrophic, but it's a preview of a world where a single person can cheaply trigger high-cost responses. The systems we build will have to adapt, not by ignoring social media reports, but by developing faster, more resilient ways to distinguish signal from noise.


Would calling and saying, "Hey, the bridge is destroyed!" without an image not have also triggered a delay? I question the safety standards of the railway if they would just ignore such a call after an earthquake. Generative AI doesn't change the situation at all. An image shouldn't be treated as carrying more weight than a statement, but the statement without the image would be the same in this situation. This has really been an issue since the popularization of the telephone, which made it sufficiently easy to communicate a lie from far away that someone might choose to do so for fun.

This in itself is not a big deal... but there are very much scenarios that could mean life or death.

Take a fast-moving wildfire with one of the paths of escape blocked. There may be other lines of escape, but fake images showing one of those open roads blocked by fire could lead to traffic jams and eventual danger on the remaining routes of escape.


Given the number of CCTV cameras that operate in the UK, and their continued growth, I am surprised that the rail operator did not have access to a direct view of the bridge. I am also a bit surprised that there isn't technology to detect rail damage, especially to the power lines that run over the track.

Where I live it is not uncommon for rail to have detection for people walking on the rail, and bridges to have extra protection against jumpers. I wouldn't be that surprised if the same system can be used to verify damage.


> Given the number of cctv cameras that operate in the UK, and their continued growth,

CCTV cameras are mostly in private ownership, those in public ownership are owned by a mass of radically different bodies who will not share access without a minimum of police involvement. Oh and of course - we rarely point the cameras at the bridges (we have so many bridges).

> Where I live it is not uncommon for rail to have detection for people walking on the rail, and bridges to have extra protection against jumpers. I wouldn't be that surprised if the same system can be used to verify damage.

This bridge just carries trains. There is no path for walking on it. Additionally jumping would be very unusual on this kind of bridge; the big suspension bridges attract that behaviour.

You mentioned twice that you are surprised by things which are quite common in the UK. I don't know where you're from, but it's worth noting that the UK has long been used as a bogeyman by American media, and this has intensified recently. You should come and visit, the pound is not so strong at the moment so you'll get a great deal to see our country.


The claim over the last 20 years or so is that London has the highest ratio of cameras to people in the world; looking at what seem to be more reliable statistics, it is only the 12th most camera-dense city in the world. How well that translates to the rest of the country is much less talked about.

Here in Sweden, people walking on the rails without permission is a fairly common problem, which causes almost 4k hours of accumulated delays per year. For people who often travel by train, the announcement of reduced speed because the system has detected people on the tracks is one of the more common ones, second only to the catch-all announcement of "signal error", which simply means the computer says stop for a reason that the driver doesn't know or doesn't want to say.

When it comes to suicide prevention on bridges, it is not just the big bridges. Suicide by train is a frequently discussed work hazard for train drivers in the news, and the protection here is for small bridges that go above the track. Similar issues exist with bridges over roads and highways. By my read of the statistics, those methods are more common than the movie version of a person jumping from a suspension bridge.


One of the more interesting ways of detecting rail damage, and subsidence in general, is optically detecting noise and distortion in fibre-optic cables. It's an applied case of observables that form the basis of an evaluative (the "signal") originally being used to diagnose possible maintenance issues, followed by the realisation of "hey, wait a sec, there's a different evaluative we can produce from this exhaust and sell".

https://fibersense.com/

http://www.focus-sensors.com/


There is technology that can detect rail breaks, in the form of track circuits: feed a current into one rail and detect whether it gets to the other end (or bridge the two rails at the far end of the circuit and see if it gets back along the other rail). A variation of this is commonly used in signalling systems to verify that the track is clear: if a pair of wheels is in the track section, the axle will short across the rails and make the circuit show 'occupied'.

Ultimately, though, this kind of stuff is expensive (semi-bespoke safety-critical equipment every few miles across an enormous network) and doesn't reduce all risks. Landslides don't necessarily break rails (but can cause derailments), embankments and bridges can get washed out but the track remains hanging, and lots of other failure modes.

There are definitely also systems to confirm that the power lines aren't down, but unfortunately the wires can stay up and the track be damaged or vice versa, so proving one doesn't prove the other. CCTV is probably a better bet, but that's still a truly enormous number of cameras, plus running power supplies all along the railway and ensuring a data link, plus monitoring.
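The fail-safe logic of the track circuit described above can be sketched in a few lines. This is a toy illustration, not real signalling code: the key property is that the relay must be actively energized to report "clear", so a broken rail, a power failure, and a train on the section all collapse to the same safe-side "occupied" indication.

```python
# Toy model of a DC track circuit's fail-safe logic (illustrative only).
# The relay shows "clear" only while current actively flows through it;
# any failure mode removes that current and reads as "occupied".

def track_circuit_state(power_on: bool, rail_intact: bool,
                        wheelset_present: bool) -> str:
    # Current reaches the relay only if the supply is live, the rail path
    # is unbroken, and no axle is shunting the rails ahead of the relay.
    relay_energized = power_on and rail_intact and not wheelset_present
    return "clear" if relay_energized else "occupied"

# A broken rail and a train on the section are indistinguishable by design:
print(track_circuit_state(True, True, False))   # healthy, empty section -> clear
print(track_circuit_state(True, False, False))  # broken rail -> occupied
print(track_circuit_state(True, True, True))    # train present -> occupied
print(track_circuit_state(False, True, False))  # power failure -> occupied
```

This is why a track circuit can prove a rail is broken but cannot prove a bridge is intact: the safe-side indication is deliberately ambiguous about its cause.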


This is the part that I find insane. What if the bridge had collapsed, and no one had bothered to post a picture of it to social media?

I mean, you're supposed to call the police or Network Rail: there are placards on the (remains of the) bridge with the telephone number. But yes, it's not uncommon to have to send a train to examine the line (at slow speed, able to stop within line-of-sight) after extreme weather.

Contrary to popular belief, not every single square inch of the UK is covered by state operated CCTV.

Great comment and very true in this AI world. In 2030 it will be even easier to make even more realistic images much quicker...

Reminds me of the attacker vs defender dilemma in cybersecurity - attackers just need one attack to succeed while a defender must spend resources considering and defending against all the different possibilities.


I also think there's not a hope; see Brandolini's law[1]: the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

[1] https://en.wikipedia.org/wiki/Brandolini%27s_law


There should be a countervailing law that the more bullshit is produced, the more skeptical the populace becomes. The number of conspiracy theorists has remained roughly constant; even with the advent of the Internet this hasn't changed.

It is cheap to have live monitoring of key infrastructure these days, and in the case of rail infrastructure it would also save time and money in general. Perhaps this hoax will push this higher up the todo list.

It may be cheap to monitor a single spot. It is extremely expensive to monitor everything.

There is a balance like always. It seems odd that they have zero cameras on bridges and other main infrastructure, although I believe that level crossings tend to have them (perhaps more to avoid liability in case of accidents, though...)

Main point is that there aren't technical difficulties in verifying the state of main infrastructure in real time (contrary to the claim of the commenter I was initially replying to); it's more a question of priority and will than of doability or cost.

It will happen, but the usual way is that "it's not possible", "it's too expensive", etc., until something bad enough happens, and then suddenly it is doable and done.


Not a hope.

Most economic value arises from distinguishing signal from noise. All of science is distinguishing signal from noise.

It's valuable because it is hard. It is also slow: often the only way to verify something is to have reports from someone who IS there.

The conflict arises not from verifying the easy things (searching under the illumination of street lights). It's verifying whether you have a weird disease, or if people are alive in a disaster, or what is actually going on in a distant zone.

Verification is laborious. In essence, the universe is not going to open up its secrets to us, unless the effort is put in.

Content generation on the other hand, is story telling. It serves other utility functions to consumers - fulfilling emotional needs for example.

As the ratio of content to information keeps growing, or the ratio of content to verification capacity grows - we will grow increasingly overwhelmed by the situation.


You don't need AI for this kind of disruption. People have been making fake bomb threats for years. You just have to say it, either directly to the railway/etc. or publicly enough that somebody else will believe it and forward it to them. The difference might be of intent - if you say you planted a bomb on the bridge, you're probably committing a crime, but if you just post a piece of art without context, it's more plausibly deniable.

It's also pretty common in the UK for trains to be delayed just because some passenger accidentally left their bag on the platform. Not even any malicious intent. I was on a train that stopped in a tunnel for that reason once. They're just very vulnerable to any hint of danger.


AI definitely makes it easier and it will happen more often.

You don't need anything for anything. You can do war with long sticks. Turns out guns, planes, and firebombs work better.


Exactly. More is different.

An AI sees the image on social media, deploys a drone to quickly go there, looks at the live video feed, and declares all is good.

Sir, this is AI prose. Wendy's doesn't allow AI prose.

Thanks for the heads-up! I actually wrote this based on my own thoughts about the incident, but I understand the concern. I'll make sure to keep my posts in line with the community guidelines.

> Network Rail said the railway line was fully reopened at around 02:00 GMT and it has urged people to "think about the serious impact it could have" before creating or sharing hoax images.

Perhaps Network Rail should have a system for ascertaining rail integrity that is independent of social media (?!!?)

for real, pick up the phone and ask someone (??)


1 - it's the middle of the night. It takes a while longer to find someone to go and look at the bridge.

2 - integrity checks can tell you that the bridge has definitely failed, but not that it definitely hasn't.


If I were on a train and there was even a chance that we were careering towards a collapsed bridge... I would appreciate that train stopping before we find out.

Right, they probably have an employee on site 24/7 for every piece of track they use. It’s a mystery why they didn’t think of calling them.

> for real, pick up the phone and ask someone

I mean, they did do that eventually. But if the image was convincing, then stopping the train immediately is the rational choice. Erring on the side of a small delay rather than a train disaster is the right thing to do in this situation.


I'm perfectly happy for Network Rail to prioritise customer safety. They get an unsubstantiated report from social media, so they stop services over the affected area until they can get someone to go and check. Picking up the phone wouldn't be much use as there's not teams of safety inspectors just waiting by rail bridges.

To my mind, Network Rail is blameless for this.


I’m not sure what you wanted them to do, that they didn’t do.

From 1950 - 2005(ish) there were a small number of sources due to the enormous moat required to become a broadcaster. From 2005 to 2021, you could mostly trust video as the costs of casual fakery were prohibitive. Now that the cost to produce fake videos are near zero, I suspect we will return to a much smaller number of sources (though not as small as in the pre YouTube era).

Some of the “smaller sources” also distorted facts.

We might even have fewer than before - between Internet commentators and loss of confidence from AI, real journalism may not be as highly valued as it was before the Internet…


It will entirely be about trust. I don't think fakery is worth it for any company with > $1B market cap as trust is such a valuable commodity. It isn't like we are just going to have a single state broadcaster or something like that (at least, I hope not). However it is going to favour larger / more established sources which is unfortunate as well.

We're also seeing a barrage of commercials featuring AI generated animals talking like people. It's getting old.

You’re seeing commercials?

There’s your problem.


OTOH product placement is your "friend".

There will be people who care about trusted and reliably accurate news sources, and at least some of them are willing to pay for it. Think 404 Media.

But there are people who don't want their news to be "reliably accurate", but who watch/read news to have their own opinions and prejudices validated no matter how misinformed they are. Think Fox News.

But there are way way more people who only consume "news" on algorithmically tweaked social media platforms, where driving "engagement" is the only metric that matters, and "truth" or "accuracy" is not just lower priorities but are completely irrelevant to the platform owners and hence their algorithms. Fake ragebait drives engagement which drives advertising profits.


Suppose that I care about trustworthy and reliably accurate news sources and am willing to pay. How can I distinguish which ones are trustworthy and reliable? No offense to the folks at 404 Media, but I've never met a single one of them, and I have no reason to believe that they wouldn't lie to me for money. You clearly have your own prejudices and biases about which media organizations are honorable and which are not, which you're wrapping up as if it's about a "truthfulness" that you couldn't possibly actually verify.

To be clear, you don't need AI for this.

You can also just call the railroad and report the bridge as damaged.

Hoaxes and pranks and fake threats have been around forever.


I love hoaxes. But this also neglects the social and viral aspect. If, let's say, an aged local member of parliament sees an image of the bridge after coming back from the pub, he will call the authorities responsible. Now think of the many other people who, upon seeing this and having an initial reaction to it, have the power to force an action.

Calling directly into the railroad bypasses an authority chain. It negates the virality of it. These viral images are viral because they get shared and spread on their own just like a virus.

Telephone calls to the authorities were never viral; they could never spread, although they may well have caused the desired reaction without spreading first. Many hoaxes back in the day were somewhat viral and did spread, but the hoax went to the newspapers or the community first: a well-crafted press release, some additional letters to the traditional media, etc. A believable image makes for more believability. The hoax got spread because it was hard to debunk, as it was distributed before the debunking. Bypassing the effort to spread the hoax removes its chance of taking effect.

Edit: my initial thought was "no trains run after midnight anyhow", as except on a few main lines it's hard to find trains in the UK at night - so the cost of the bridge closure may have been very small. Add to that the number and quality of the staff operating at that time of night. Taken together this means a lower cost of reacting, more chance of a knee-jerk reaction from staff, and less ability to consult awake engineers nearby or survey the damage IRL. So while the hoaxers cannot plan an earthquake(!), it probably wouldn't have succeeded if the earthquake had happened at 11am.


Again, I see this argument.

“Bad X has happened before and remains unsolved. Why worry about bad X^2?”

Personally I’d prefer if it remained at X so solutions can catch up. But that’s just me.


I think the implication is we already handle these events well enough pre-ai, and that the events are not necessarily more disruptive just because an ai was used to trigger them.

Implicit in this though is the assumption that the increase in awareness of these events has more to do with an ai being involved rather than the event actually being exceptional.


Yep, why give people computers? It just increases the number of bad X; before writing, these types of hoaxes were much less common.

I see room for a platform that only does auth, reviews and perhaps indexing.

Since you didn't ask, let me needlessly elaborate.

You can have YouTube or X or Facebook "design" a web page for you, but those are always extremely lame. Just have websites instead? Their moderation looks more like a zombie shooter. Wikipedia has some kind of internet trial, but it's so unsophisticated that it might even be worse.

It could be a simple editorial board with a number of seats that can be emptied when users request it, through a random selection of jurors.

The board makes suggestions and eventually removes your website.

The site can still be publicly available before and after, it just doesn't live in the index.


That leaves much more of a paper trail. People routinely are fined and jailed for pulling off such "pranks", partially because "fake threats"/"abuse of emergency response resources" are an exception to many freedom-of-speech laws.

A fake photo of a collapsed bridge however won't cross that criminal threshold.


If you create a fake photo/video with intent to cause disruption it absolutely crosses the threshold.

If you create one to prank your friend, and he ends up falling for it and sharing it in another group, and it gets to someone who alerts the authorities, without including the context of "this was sent to me by a guy who's a bit of a joker", and railway management's policy is to take all reports seriously rather than verifying their provenance...I find it hard to think anyone in that chain should really be held liable.

The person who alerts the authorities should be held liable - they had the option to verify before doing so, but chose not to.

Intent is a valid legal concept. Certainly there's no way to try "swatting" without crossing that line of intent, but (for example) less-threatening prank phone calls can be in the grey area.

I presume there is established legal practice for handling these kinds of things, but for generative images the legal limits won't achieve wide awareness until some teenagers and assorted morons get hauled into court.


intent is very difficult to prove.

"I was just memeing, sir"


You also don't need gunpowder to kill someone with projectiles, but gunpowder changed things in important ways. All I ever see are the most specious knee-jerk defenses of AI that immediately fall apart.

What you're probably failing to grasp is that all technology is good, and AI is technology, therefore AI is good. Notable examples are the printing press and the automobile. Would you prefer a world without those things? How ridiculous!

Please ignore "technology" such as leaded gasoline and CFCs. No one could have known those were harmful, anyway.


It's not clear to me that it "changed things in important ways" in this case if a call alleging serious damage to the rail would've similarly triggered a pause for inspection.

A phone call to railway management claiming stone fall on a track, a dead cow, a stalled car, etc will trigger a slowdown on that line, a call to the driver, and an inspection.

If that's not happening then management is playing fast and loose with legal responsibility and the risks of mass and inertia.


It's not about the possibility, it's about probability. People are posting more fake images and videos than ever before. More is categorically different.

Any hoax can be easily fought, if the punishment of getting caught is severe enough.

The problem is the justice system, which is optimized to protect criminals and to offload the costs onto society, which is happy to be distracted with identity and moral-supremacy arguments.


Amazingly, no one seems to have actually checked that this picture was really "circulating on social media". I've been investigating for the past hour or so and can't locate a single public post or reference anywhere other than reposts of the BBC article.

Typically, postings that gain traction have many many reposts and though some may be deleted, there's a long tail of reverberation left behind. I can't find that at all here.

I wonder if the hoaxer just emailed it to Network Rail directly?


Reminds me of https://en.wikipedia.org/wiki/Fall;_or,_Dodge_in_Hell with the Moab plot point.

I really liked the first half to 3/4ths of that book. The last part was less interesting to me, but I enjoyed the Moab plot line and all the parts around anonymity/online presence.

> A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

The image is likely AI generated in this case, but this does not seem like the best strategy for finding out if an image is AI generated.


Under the other photos it says "A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged" and "A BBC North West reporter visited the bridge today and confirmed it was undamaged".

They may have first run the photo through an AI, but they also went out to verify. Or ran it after verification to understand it better, maybe.


So.. is this where the AI hype train starts to lose steam? One AI hallucinated and caused the incident, and another AI program just wasted everyone's time after it was unable to verify the issue. Sounds like AI was utterly useless to everyone involved.

> One AI hallucinated and caused the incident

I suspect that AI was prompted to create the image, not that this was an incidental "hallucination".

Cynical-me suspects this may have been a trial run by malicious actors experimenting with disrupting critical infrastructure.


There is precedent for state actors putting a lot of effort into a hoax like this: https://en.wikipedia.org/wiki/Columbian_Chemicals_Plant_expl...

> Sounds like AI was utterly useless to everyone involved

Maybe.

Imo, I think the advances in AI and the hype toward generating everything will actually be our digitally-obsessed society's course-correction back to a greater emphasis on things like theater, live music, and conversing with people in person, even strangers (the horror, I know), simply to connect/consume more meaningfully. It'll level out, integrating both instead of being so digitally lopsided, as humans adapt to enjoy both.*

To me, this shows a need for more local journalism, which has been decimated by the digital world. By journalism, I mean it in a more traditional sense, not bloggers and podcasts (no shade, some follow principled journalistic integrity -- just as some national "traditional" ones don't). Local journalism is usually held to account by the community, and even though the worldwide BBC site has this story, it was the local reporters they had who were able to verify. If these AI stories/events accelerate a return to local reporting with a worldwide audience, then all the better.

* I try to be a realist, but when I err, it tends to be on the optimist side


The tech giants sucking up all the ad revenue is what killed local journalism. Unless you can find a solution to that problem (or an alternative funding model), it's not coming back.

But just think of all the people that didn’t have to receive a paycheck because of all this efficiency!

It’s really incredible how the supposedly unassailable judgement of mass consumer preference consistently leads our society to produce worse shit so we can have more of it, and rewards the chief enshittifiers with mega yachts.


They have powerful untaxed monopolies in excess of the economic value tech companies themselves generate.

At some point, the value of their services comes from the people who use their sites.


> Sounds like AI was utterly useless to everyone involved.

Not the hoaxer!


Someone I know is a high school English teacher (being vague because I don’t want to cause them trouble or embarrassment). They told me they were asking ChatGPT to tell them whether their students’ creative writing assignments were AI-generated or not. I pointed out that LLMs such as ChatGPT have poor reliability at this; classifier models trained specifically for this task perform somewhat better, yet also have their limitations. In any event, if the student has access to whatever model the teacher is using to test for AI generation (or even comparable models), they can always respond adversarially by tinkering with an AI-generated story until it is no longer classified as AI-generated.

A New York lawyer used ChatGPT to write a filing with references to fake cases. After a human told him they were hallucinated, he asked ChatGPT if that was true (which said they were real cases). He then screenshotted that answer and submitted it to the judge with the explanation "ChatGPT ... assured the reliability of its content." https://www.courtlistener.com/docket/63107798/54/mata-v-avia... (pages 19, 41-43)

I hope he was disbarred.

He was probably offered a role at some ai obsessed firm because of his “ai-native workflow”.

Or sent to court-ordered LLM Awareness classes.

Reminds me of a Reddit story that made the rounds about a professor asking ChatGPT if it wrote papers, to which it frequently responded affirmatively. He sent an angry email about it, and a student responded by showing a response from ChatGPT claiming it wrote his email.

> student responded by showing a response from ChatGPT claiming it wrote his email

Which is actually fine. Students need to do their own homework. A teacher can delegate writing emails.


But if he didn't delegate, and it said he did, that would suggest that the methodology doesn't really work.

I believe you just got whooshed.

Yes, I missed the student using the teacher's trust in those tools to make them even more angry and neuter their angry email that they (probably) actually wrote themselves. Well-played.

A person arguing in favor of LLM use failed to comprehend the context or argument? Unpossible!

I realize you might have failed to comprehend the level of my argument. It wasn't even about LLMs in particular, rather having someone/something else do your work for you. I read it as the student criticizing the teacher for not writing his own emails, since the teacher criticizes the students for not writing their own classwork. Whether it's an LLM or them hiring someone else to do the writing, this is what my rebuttal applied to. I saw what I thought was flawed reasoning and wanted to correct it. I hope it's clear why a student using an LLM (or another person) to write classwork is far more than a quality issue, whereas someone not being tested/graded using an LLM to prepare written material is "merely" a quality issue (and the personal choice to atrophy their mental fitness).

I don't think I was arguing for LLMs. I wish nobody used them. But the argument against a student using it for assignments is significantly different than that against people in general using them. It's similar to using a calculator or asking someone else for the answer: fine normally but not if the goal is to demonstrate that you learned/know something.

I admit I missed the joke. I read it as the usual "you hypocrite teacher, you don't want us using tools but you use them" argument I see. There's no need to be condescending towards me for that. I see now that the "joke" was about the unreliability of AI checkers and making the teacher really angry by suggesting that their impassioned email wasn't even their writing, bolstered by their insistence that checkers are reliable.


Apologies to everyone I upset by this comment. It was just an innocent mis-reading of the joke. Lesson learned.

You missed the entire point lol

Yeah, I'm really sorry. I didn't realize it would upset so many people.

Students (and some of my coworkers) are now learning new content by reading AI generated text. Of course when tested on this, they are going to respond in the style of AI.

ChatGPT: This looks like AI. I can tell from some of the pixels and from seeing quite a bit of training data in my time.

This is the fast thing they can try, but it shouldn't be treated as the most trustworthy method and shouldn't be in the report.

If it's nano banana you can give it to Gemini bc it has artifacts

All these tool integrations are making it increasingly difficult to explain to non-tech people what these chatbots are capable of. Even more so as multi-modality improves (at some point image generation went from a distinct tool to arguably an inherent part of the models).

Yeah, talk about begging the question. Yikes.

It's not, but when you have 30 minutes to ship a story...

Yeah, it is frankly just plain bad epistemology to expect an AI chatbot to have answers on a matter such as this. Like trying to get this week's lotto numbers by seeking a reading in Bible passages and verses. There is no way that the information was encoded in there, as that would violate causality. At best you'd have coincidental collisions only.

Yeah, that hardly speaks well of the "journalist" being good at their job. They probably asked a leading question like "has this photo been AI generated, and if so, how?", or worse.

People tend to think that AI is like a specific kind of human which knows other AI things better. But we should expect better from people that do writing as their job.


Do you not think even BBC "journalists" are suffering from immense pressures to use AI for efficiency? It's everywhere

> "It's more the fact that Network Rail will have had to mobilise a team to go and check the bridge which could impact their work for days."

I'm way more concerned by this statement than by whatever is reported in the title.

How fragile is a society that is unable to make a simple visual confirmation of a statement without having a multiday multi-££ impact?


I think it's reasonable to expect that the staff involved in an emergency callout of this sort will be entitled to Time Off In Lieu, and that that TOIL will cause knock-on effects to staff rostering.

Delaying the inspection until working hours would have caused much greater disruption. Having a track inspection team on hand 24x7 to cover all potential routes would incur much higher staffing costs.

An on-call system backed by TOIL and accepting the risk of dealing with occasional re-rostering seems like a reasonable compromise to me.


You don't need specialized teams of people to testify that the photo on the right[0] is fabricated. Any police officer on duty could do that. You don't need people to say "the bridge is sound", just that the rumours are false.

It's like certain societies enjoy the rigidity they are in.

But I guess in a country where a "retweet" of the wrong opinion can get you in legal trouble, it's just easier to say that fabricating and propagating AI slop is also illegal.

[0] https://ichef.bbci.co.uk/news/1024/cpsprodpb/5e92/live/bc1e9...


Network Rail deal with dozens of reports of earthworks failures, landslips, vehicle strikes, and other problems affecting bridges and viaducts every day. They have well-tested procedures in place to investigate them. In some locations, and at some times of day, those procedures involve on-call staff.

Sure, "just follow the process" is a lot less exciting than coming up with an ad-hoc response - but when you're dealing with safety-critical infrastructure at scale, it makes a lot more sense than cowboying it and hoping for the best.


> Network Rail said the railway line was fully reopened at around 02:00 GMT and it has urged people to "think about the serious impact it could have" before creating or sharing hoax images.

> "The disruption caused by the creation and sharing of hoax images and videos like this creates a completely unnecessary delay to passengers at a cost to the taxpayer," a spokesperson said.

I don't think this will work the way they think it will work. In fact, I think they just proved they're vulnerable to a type of attack that causes disruption and completely unnecessary delay to passengers at a cost to the taxpayer


Anyone to whom that information is relevant already knew that this vulnerability has existed for a long time.

How hard could it be — genuine question — for, say, Apple (Nikon, Sony,…) to embed a QR code (optionally) into an image.

QR leads you to a page, you upload image to page, hashes are compared, image-from-sensor confirmed.

Surely at this point we need provable ‘photography’ for the mass market.
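The hash-comparison step described above can be sketched in a few lines of Python (the camera-side embedding and the upload endpoint are hypothetical; this shows only the verification logic):

```python
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    # Hash of the raw sensor output; this is what the camera would
    # embed (e.g. encoded in the QR code alongside a URL).
    return hashlib.sha256(image_bytes).hexdigest()

def verify_upload(uploaded_bytes: bytes, embedded_hash: str) -> bool:
    # Server-side check: does the uploaded file match the fingerprint
    # the camera claims to have produced?
    return image_fingerprint(uploaded_bytes) == embedded_hash

original = b"...raw image bytes..."  # stand-in for a real file
fp = image_fingerprint(original)
assert verify_upload(original, fp)             # untouched image passes
assert not verify_upload(original + b"x", fp)  # any edit breaks the match
```

Of course, this only proves the file hasn't changed since it was hashed, not that the scene in front of the lens was real.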



What system would you create that prevents a camera from being pointed at a screen? Because if you can't block the analog hole, any verification scheme is trivial to bypass.

None, but it makes it more difficult. People could "photoshop" before digital but most didn’t. Perfect is the enemy of good enough, etc.

you still need to check the bridge.

I could take a real image of the collapsed bridge, modify it somehow with AI, post it, and you then say "hey, it's not real, look, the QR doesn't match, the bridge is safe"


Can't imagine how often this will happen once we are buried under fake content from AI.

Just realize that people and institutions are adaptable and their processes are not set in stone. We'll find a way through even if you or I can't imagine exactly how right now.

A lot of institutions, even crucial ones that we all depend on to manage important aspects of society, have barely started adapting to this newfangled fad called the internet. Maybe they’ll figure out what to do about generative AI somewhere around 2060.

The issue is provenance. We need cameras and phones to digitally sign photos so we can easily verify an unadulterated image.

You also want to be able to chain signatures so that, for example, a news reporter could take a photo and the news outlet could then attest its authenticity by adding their signature on top.

Same principle could be applied to video and text.
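A toy sketch of that chained attestation, using HMAC as a stand-in for real asymmetric signatures (actual provenance schemes, such as the C2PA content-credentials standard, use device/organisation private keys and signed manifests; the keys and names here are made up):

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    # HMAC stands in for a real asymmetric signature; a camera or news
    # outlet would actually sign with a device/organisation private key.
    return hmac.new(key, payload, hashlib.sha256).digest()

CAMERA_KEY = b"camera-device-key"  # hypothetical
OUTLET_KEY = b"news-outlet-key"    # hypothetical

photo = b"...raw image bytes..."

# Layer 1: the camera attests to the image it captured.
camera_sig = sign(CAMERA_KEY, photo)

# Layer 2: the outlet attests to the photo *plus* the camera's attestation.
outlet_sig = sign(OUTLET_KEY, photo + camera_sig)

# Verification walks the chain outside-in.
assert hmac.compare_digest(outlet_sig, sign(OUTLET_KEY, photo + camera_sig))
assert hmac.compare_digest(camera_sig, sign(CAMERA_KEY, photo))
```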


Signing something doesn't verify that it's real, it just verifies that you claimed that it was real, which everyone was already aware of. You can either hack a camera, or use an unhacked camera to take a picture of a fake picture.

Freight transport is cost-effective in terms of delays, approx. 2mn per minute in the three Pacific Railroad Surveys for a transcontinental railroad by the War Department circa 1850.

[1]:https://en.wikipedia.org/wiki/Pacific_Railroad_Surveys


I mean what does anyone expect from a future where images like this can be generated by any moron for any place, in thousands of variations, with just a few clicks? And then video?

I am surprised headlines like this are only coming out now. I've been saying it for a long time, but people said I was crazy. The web as we know it will become unusable. And a new one will not solve all the issues, as we have already made ourselves too dependent on the current web and tech. So the impact on the real world is going to turn a lot of things upside down. It's gonna be a lot of fun. But sure, let's keep pretending AI can either be nothing but bullshit OR that we should only fear losing jobs to robots... I don't get why no one ever thinks about the societal impact... it's so obvious, still... I am baffled...


>"It's more the fact that Network Rail will have had to mobilise a team to go and check the bridge which could impact their work for days."

It is no surprise to me that Network Rail is so understaffed that any special event disrupts their work schedules for days. That is what they call 'efficiency' these days.

Edit: Aside. During a set of fire service strikes it was a relatively common opinion to say something like, 'of course they have an easy job, they get paid to just sit/lie down at the station'. I used to ask, 'what would you like them to do while waiting in case you need rescuing?'. No answer. I spoke to a fireman and he told me that in response to this kind of nonsense a bunch of pointless busywork was invented for them. When rail was privatised in the UK they fired a lot of these 'inefficient' workers. After a string of rail crashes, the government had to renationalise Network Rail (the bit that maintains the infrastructure). Another case where 'efficiency' means harming people for profit.


It's a bit of a non story, even with the fake image.

From the article:

  Trains were halted after a suspected AI-generated picture that seemed to show major damage to a bridge appeared on social media following an earthquake.
...

  Railway expert Tony Miles said due to the timing of the incident, very few passengers will have been impacted by the hoax as the services passing through at that time were primarily freight and sleeper trains.

  "They generally go slow so as not to disturb the passengers trying to sleep - this means they have a bit of leeway to go faster and make up time if they encounter a delay," he said.

  "It's more the fact that Network Rail will have had to mobilise a team to go and check the bridge which could impact their work for days."

Standard responsible rail maintenance is to investigate rail integrity following heavy rains, earthquakes, etc.

A fake image of a stone bridge with fallen parapets prompts the same response as a phone call about a fallen stone from a bridge or (ideally !!) just the earthquake itself - send out a hi-railer for a track inspection.

The larger story here (be it the UK, the US, or AU) is track inspections .. manned or unmanned?

Currently on HN: Railroads will be allowed to reduce inspections and rely more on technology (US) https://news.ycombinator.com/item?id=46177550

https://apnews.com/article/automated-railroad-track-inspecti...

on the decision to veer toward unmanned inspections that rely upon lidar, gauge measures, crack vibration sensing etc.

Personally I veer toward manned patrols with state of the art instrumentation - for the rail I'm familiar with there are things that can happen with ballast that are best picked up by a human, for now.


They should already be able to detect line breaks using old technology. They send current pulses down the line to detect stuck switches, since stuck switches can cause collisions. Also, the pulses are conducted through the wheels and axles of any trains, so they can use resistance and/or timing to figure out where the trains are.

Having said that, if it was 2020 and you told me that making photorealistic pictures of broken bridges was harder than spoofing the signals I just described, I’d say you were crazy.

The idea that a kid could do this would have seemed even less plausible (that’s not to say a kid did it, just that they could have).

Anyway, since recently-intractable things are now trivial, runbooks for hoax responses need to be updated, apparently.
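The fail-safe track-circuit behaviour described above can be modelled in a few lines (a deliberately simplified sketch; real installations also deal with leakage, coded pulses, and so on):

```python
def track_circuit(rail_intact: bool, axle_on_section: bool) -> str:
    # Toy model of a DC track circuit: current from a feed end flows
    # through both rails to energize a relay. A train axle shunts
    # (short-circuits) the rails, starving the relay; a broken rail
    # opens the loop entirely. Either way the relay drops, so the
    # fail-safe indication is "not clear".
    relay_energized = rail_intact and not axle_on_section
    return "clear" if relay_energized else "occupied/failed"

assert track_circuit(rail_intact=True,  axle_on_section=False) == "clear"
assert track_circuit(rail_intact=True,  axle_on_section=True)  == "occupied/failed"
assert track_circuit(rail_intact=False, axle_on_section=False) == "occupied/failed"
```

Note the failure direction: a broken rail and a present train are indistinguishable to the relay, which is exactly why the system fails safe rather than fails silent.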


> They should already be able to detect line breaks using old technology.

Yes. That doesn't do much to detect a stone from a parapet rolling onto the line though.

Hence the need for inspection.

> runbooks for hoax responses need to be updated, apparently.

I'd argue not - whether it's an image of a damaged bridge, a phone call from a concerned person about an obstruction on the line, or just heavy rains or an earthquake .. the line should be inspected.

If anything urban rail is in a better position today as ideally camera networks should hopefully rapidly resolve whether a bridge is really damaged as per a fake image or not.


> I'd argue not - whether it's an image of a damaged bridge, a phone call from a concerned person about an obstruction on the line, or just heavy rains or an earthquake .. the line should be inspected.

Ideally? Sure.

But when someone can generate plausible disaster photos of every inch of every line of a country's rail network in mere minutes? And as soon as your inspection finishes, they do it again?


Yeah; it’s completely a matter of frequencies and probabilities. Also, technology keeps improving.

If I were working for the train line, and bridges kept “blowing up” like this, I’d probably install a bunch of cameras and try to arrange the shots to be aesthetically pleasing, then open the network to the public.

The runbook would involve checking continuity sensors in the rail, and issuing random pan/tilt commands to the camera.


plausibly correlated with what?

This correlated with an earthquake - this is the event that should have triggered an inspection regardless.

> But when someone can generate plausible disaster photos of every inch of every line of a country's rail network in mere minutes?

In the UK (and elsewhere) a large percentage of track is covered by cameras - inspection of over the top claims can be rapidly dismissed.

> And as soon as your inspection finishes, they do it again?

Sounds like a case for cyber crimes and public nuisance.

It's also no different to endless prank calls via phone, not a new thing.


> This correlated with an earthquake…

Plenty of disasters don't. "No earthquake, no incident" obviously can't be the logic tree.

> In the UK (and elsewhere) a large percentage of track is covered by cameras - inspection of over the top claims can be rapidly dismissed.

"Yes. That doesn't do much to detect a stone from a parapet rolling onto the line though. Hence the need for inspection."

Sounds like you now agree it's less of a need?

> Sounds like a case for cyber crimes and public nuisance.

"Sorry, not much we can do." As is the case when elderly folks get their accounts drained over the phone today.


> It's also no different to endless prank calls via phone, not a new thing.

Of course it's different. If I do 5 prank calls, that takes, say, 15 minutes.

In 15 minutes how many hoaxes can I generate with AI? Hundreds, maybe thousands?

This is like saying nukes are basically swords because they both kill people. We've always been able to kill people, who cares about nuclear weapons?


If whatever technology they installed said everything was fine, I would still want them to do what they did because the costs of being wrong are so much higher than the costs of what they did.

The point of that technology needs to be to alert you when something is wrong not to assure you that everything is fine whenever some other telemetry indicates otherwise.


Any idea how the road barriers in the USA detect a train to lower themselves? I assume it's something to do with current passed from one rail to the other through the axle?

When I stuck train wheels on my DeLorean and rode it down the tracks it lowered the barriers automatically which caused a bit of a traffic incident in Oxnard.


There are sensor sections on both sides. If you short the tracks together with a large enough wire, it triggers the signal box. Actually learned this at the MIT Swapfest when manning the back gate a decade ago. Got some cheap alligator clips and strung them together, no luck... Larger-gauge copper did trigger it, and confused a ton of people when no train came by lol

> They send current pulses down the line to detect stuck switches, since stuck switches can cause collisions.

That's not done in any European rail network I am aware of. The switches have, well, switches that confirm if the mechanical end positions have been reached, but there is no confirmation by current pulses on the actual rails themselves.

> Also, the pulses are conducted through the wheels and axles of any trains, so they can use resistance and/or timing to figure out where the trains are.

That technology is, at least in Germany, being phased out in favor of axle counters at the beginning and end of each section, partially because axle counters allow speed and direction feedback, partially because track circuits can be unsafe - a single locomotive braking with sand may yield a false "track free" indication when sand or leaves prevent the current from passing from one rail to the other.
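The axle-counter principle mentioned here reduces to a simple invariant (a toy sketch; real axle counters also handle direction sensing, resets, and miscount faults):

```python
def section_occupied(axles_in: int, axles_out: int) -> bool:
    # Counting heads at each end of a block section: the section is
    # treated as clear only when every axle counted in has also been
    # counted out. Any mismatch means a train (or part of one) is
    # still inside, so signalling treats the section as occupied.
    return axles_in != axles_out

assert section_occupied(axles_in=8, axles_out=4)      # train partly inside
assert not section_occupied(axles_in=8, axles_out=8)  # train has cleared
```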


Regardless of how many people it disrupted or not, it’s not a non story.

It’s highlighted a weakness. It’s easy to disrupt national infrastructure by generating realistic hoax photos/videos with very little effort from anywhere in the world.


It's not a new story, nor has it highlighted a new weakness - people have had the ability to claim tracks are covered in stone or by a dead cow for a good many years now.

Tracks have cameras to rapidly discount big claims; in this specific case there was an actual earthquake which should (and likely did, the story doesn't drill down very deep) have triggered a manual track inspection for blockages and ballast shifts in and of itself.


If I do a prank call, it's easy to see the intent to disrupt.

If I post AI generated images to twitter, and those get amplified by my followers (that might or might not be real people) enough to surface on some rail engineers feed, well, that's just me showcasing my art, no harm intended, right?


If I stack enough hypothetical ifs, that's just a giant empty if, right?

It'd be useful if commenters viewed this from the pragmatic real-world track maintenance PoV.

Verifiable calls from the public about blocked lines made to official numbers with traceback etc. carry more weight than social media buzz.

In urban rail the bulk of AI generated images can be discounted via camera feeds and sensors (eg: there's no indication of a line break so that image is BS).

There are already procedures to sift prank calls from things that need checking, to catch serial offenders and numbnuts that push bricks from overpasses.

In the specific instance of you hypothetically "just me showcasing my art, no harm intended" .. in a UK jurisdiction that would fall to the estimation of the opinion held by a man on the Clapham omnibus as channeled by a world weary judge with an arse sore from decades of having such stories paraded before them by indolent smirking cocksures.

YMMV.


90% manned. A lot of money and time goes into getting track access.

And collecting unmanned data is still such a pain. At the moment, you stick calibration gear to a train and hope it gets as much noise free data as it can. All whilst going at least 40mph over the area you want - you’re fighting vibrations, vehicle grease, too much sunlight, not enough sunlight, rain, ballast covering things, equipment not calibrated before going out etc etc.


Remember Moab

In an extreme safety-first organisation — and the GB railway is exactly that — it's easy to exploit that weakness.

It actually had very minimal impact. An hour or two wasn't bad for an organisation which stripped staff to a bare minimum, and for the area.

And it's very much the customer's job to work for the railway these days: it's our job to report police matters we are told incessantly with announcements. It's our job to buy the right ticket as there are very few ticket staff and staff with any knowledge these days. It's our job to use third party websites during disruption and to Tweet the railway company for assistance because again there is not enough staff.

So Network Rail is not going to come out and say "it's absolutely our job to be aware of all our infrastructure at all times and our defence to this new threat is to bolster staff and CCTV and reduce our reliance on third party reports"


You don't need AI to make these hoaxes; pranks have been around forever, etc. But as with a lot of the areas AI touches, the problem isn't exactly the tools or the use of them, it's the scale. In this case the low barrier to creating the fake media, coupled with the pervasiveness and reach of social media networks (which also aren't new), affords rapid deployment and significant impact by bad actors.

The problem is the scale. The scale of impact is immense and we're not ready to handle it.


Don’t trust, only verify.

What would that have looked like in this instance?

Someone inspecting the bridge in person?

Exactly

Much of the world relies on general well-behavedness. The whole Andon principle doesn’t work if you’ve got asshole employees, and with the public you don’t have a choice. You have to stop the trains because otherwise, if it turned out to be true, everyone would murder you. So better to be defensive.

When the west devolves into a low trust society because of things like this and the relentless importing of people from such, it will lose the advantage of being a high trust society. Equality for all!

I'd consider places with no school shootings higher trust than those with 300 school shootings every year :)

Sure, go for the cheap shot (heh) at the Americans - but the news article is from the UK.

I think we can reliably say that the global West's current lack of trust is nothing other than home-grown.

"think about the serious impact it could have"

They do ... that's why sociopaths do such things.


LLM AI has led to job losses, either indirectly (by moving investments into AI instead of people) or directly. Generative imagery has led, and will lead, to more bullshit election outcomes, people getting blackmailed and scammed, things like this train stoppage, etc. The list is endless. That's not even getting into how the AI bubble bursting will make most of us poor when the huge stock market crash comes, but hey, whatever...

What good has it brought us (as opposed to the billionaire owners of AI)? It made us 'more effective': instead of googling something and actually going to a link and reading the result in detail, we can now not bother with any of that and just believe whatever the LLM outputs (hallucinations be damned).

So I guess that's an upside.

(before the AI god bros come: I am talking purely about LLMs and generative imagery and videos, not ML or AI used for research et al)


> LLM AI has led to job losses

I can confirm: the trend now in enterprise CMS deployments is to push for AI-based translations and image asset generation, only looping humans back in for final touches, thus reducing the respective team sizes.

Another area is marketing and SEO improvements, where the deal is to get AI-based suggestions for those improvements instead of hiring a domain expert.

Any commercial CMS will have these AI capabilities front and centre on its website as a reason to choose it.


I believe you are right, and in due time this will lead to people dying needlessly, as demonstrated in this article in The Guardian: https://www.theguardian.com/society/2025/dec/05/ai-deepfakes... In that case it was a scam for a "harmless" medication, but serious disruptors could easily up the game. What stuck with me most from that article is that we currently have no means to enforce taking such things down, which prolongs the potential for real damage. And all of that for a little more "efficiency".

Yet another attack vector for the Russians.

https://en.wikipedia.org/wiki/Russian_sabotage_operations_in...

See e.g. https://www.polskieradio.pl/395/7785/artykul/2508878,russian... (2020)

> Almost 700 schools throughout Poland were in May last year targeted by hoax bomb threats during key exams, private Polish radio broadcaster RMF FM reported.

> It cited Polish investigators it did not name as saying that a detailed analysis of internet connections and a thorough examination of the content of emails with false bomb threats turned up ties to servers in the Russian city of St. Petersburg.


UK is really good at self-sabotaging and giving itself away to corporate interests (wealth is gonna trickle down any minute now, I'm sure of it!), Russians can happily just grab popcorn and enjoy the comedy show, no active participation necessary.

> Yet another attack vector

AI-Generated disinfo has been a known attack vector for the Russian regime (and their allied regimes) for years now [0][1].

[0] - https://cyberscoop.com/russia-ukraine-china-iran-information...

[1] - https://cloud.google.com/blog/topics/threat-intelligence/esp...


[flagged]


> If your cat has worms, do you blame it on Russia or Best Korea?

Best Korea of course. The Worst Korea could never do this kind of thing.


Am I bovvered?

The BBC says the hoaxer should consider the effect on other people. Should Sir Keir, who wants to "turbocharge" "AI", perhaps consider the effect on other people?

So far we have almost no positive applications for the IP laundering machines.


Earlier this year:

> Vance told world leaders that AI was "an opportunity that the Trump administration will not squander" and said "pro-growth AI policies" should be prioritised over safety.

Our PM definitely won't be adjusting his position. He's been told:

https://www.bbc.co.uk/news/articles/c8edn0n58gwo


Law of Buddha: older tech has fewer side effects and more benefits, while modern tech has mostly side effects, because the older tech came out of need, while modern tech comes out of greed.

Modern tech annoys older tech, like birds poking at dinosaurs. Trains enabled economic progress, which gave rise to computers and AI.



