AI Is Breaking the Moral Foundation of Modern Society (eyeofthesquid.com)
111 points by TinyBig 9 days ago | 191 comments




An idea that has been living rent free in my head is that "AI is ultimately nothing but a pure destruction of value". It's promise is unlimited value to everyone on demand; but if everyone can do everything without any effort, it is no longer valuable. Value and scarcity go hand in hand.

I realize the hyperbolic framing of the idea, but nonetheless I haven't been able to get it out of my head. This article feels like it's another piece of the same puzzle.


When something becomes abundant, we focus on something else which is still scarce. That's human nature. Salt used to be scarce and very valuable, but nowadays who thinks about it?

Yes but what happens to society when art, labor, "intelligence" and productivity all become abundant? These are not comparable to salt. You are comparing an apple to every orange tree that has ever existed.

We focus on colonisation of the Universe and becoming a Kardashev Type III civilisation. Literally that.

More likely we devolve into a Kardashian Type society of shallow attention seekers. Or worse, we move towards a permanent divide between the insanely rich who own the machines and everyone else, who struggle to make it through the day.

There are some who would say that both of these have already happened.


krap! maybe earth is ark B

We're just missing around 9001 steps between the devaluation of labor and space exploration, which according to the current understanding of physics is pure fantasy, because all concepts of faster-than-light travel are purely "what-if" daydreams.

Yes, many will suffer, many will die. The streets shall be washed in blood. All of that will happen. Immense, unbelievable suffering.

And after all that we still reach Type III and it was all worth it.


The latency problem proves to be insurmountable and we drift apart in woe at the fruitless sacrifices we made, then splinter in forgetting.

No, we don't. Billions die, including you and me, but whatever is left over populates the entire galaxy.

Worth it.


Yeah, btw, AI was fantasy as well and yet here we are.

we stole the promethean fire

humans won't have to work again

the spark of intelligence is now living in machines

by the way, we still have to make small annoying interruptions to digital content to display ads, everything falls apart if we don't do that

--

Something feels off in this whole idea. I think you guys are overestimating the importance of AI. It's just another commodity.


Thoughts of chronically sedate individuals.

we currently technically do not need to work much (as a species).

it's a societal choice (we're not the ones making it, but it is still a choice).


That specific phrase is not load-bearing to my argument. It's just a sample of things AI enthusiasts say.

I think LLMs are just another commodity. I think AI would be something different.

Emphasis on the conditional tense.


It wasn't me who muddled the terminology, and I ain't going to fix it. They wanted to make those terms synonyms; that's what they'll get.

> but if everyone can do everything without any effort, it is no longer valuable.

And how is this a bad thing? Would it be good for oxygen to be valuable, or water?

What's with this fetishistic obsession with "value"? What's bad about living in a world where nothing holds any value whatsoever?


Nestle lawyers argue that yes, water should be valuable.

Roko's Basilisk will take care of them.

Not everyone is a nihilist

How is not caring about things having a "value" being a nihilist? If everything is absolutely free for everyone, you see that as a bad thing?

“Nothing holds any value” and “nothing costs any money” are two entirely different ideas. The former is nihilism. The latter is communism.

Neither paints a pretty picture in my book.


Define "pretty" picture. Everything free for everyone aka communism sounds amazing and tell me how it's better than what we have now with people starving and child amputees having to beg for pagpag in dumps of Manila.

Ah yes on this glorious day let us convene the committee for the meting out of free items to everyone. Ah shit someone stole them all.

Your snarkiness and cynicism don't debunk extrapolated long-term trend predictions.

Neither does yours.

> Value and scarcity go hand in hand

Not really. The value of water in the desert to a thirsty soul is as high as they value their own life (to some, that is little), provided they have a currency of value to the seller. Still, once thirst is quenched, the value to that soul drops to near zero.

For an optional good, the value rises only to the point that there are excess assets in the inventories of those who would like to add the option.

I would suggest what you are looking for is that some scarcities are shifted by each new technology. Things like the sincere attention of others or more exclusive emotional attachments become relatively more scarce in a goods-abundant existence. Sooner still, insights on where to apply the tool and on what one should attend to become more scarce.

Something you would have to accept if you believe your statement is that you would never value (i.e. need) water again if we could produce far more than we ever could use. Your body's need and use would not cease even if the economics might collapse.

Financializing everything can lead one to foolish conclusions.


I do not think value and scarcity are identical, merely that they go hand in hand. In the desert, I would pay a lot of money for water. In the West, I would never pay for a glass of water, even if it's blisteringly hot and I'm parched. The requirement is the same; the value only changes due to scarcity.

What you are talking about is needs, which is an entirely different discussion. Value can also be used to discuss needs (how valuable is something to survive), but I think I'm quite clearly using it to describe something different.


Price and scarcity go hand in hand, not value and scarcity.

Diamonds are pretty worthless but expensive because they're scarce (putting aside industrial applications), water is extremely valuable but cheap.

No doubt there are some goods where the value is related to price, but these are probably mostly status related goods. e.g. to many buyers, the whole point in a Rolex is that it's expensive.


This conflates use-value and exchange-value. Water to someone dying of thirst has extremely high use-value, while a diamond would in that same moment have nearly no use-value, except for the fact that, as a commodity, the diamond has an exchange-value, and so can be sold to help acquire a source of water.

In a sane world we would just give the poor guy some water and let him keep his precious diamond. And in the sane world, the guy would donate the precious diamond to a museum so that everyone could enjoy its beauty.

What you are describing happens if you follow the mathematical rules of your models too closely and ignore the real world.


I prefer the `price = value = relative wealth != wealth = resources` paradigm. Thus, wars destroy wealth and tech advances create wealth, but that's just me

Price is just a proxy for value. Diamonds do not have inherent utility (to the layman) but they are expensive because we societally ascribe value to them.

You’re arguing semantics, but not really tying it back to OP’s point regarding AI and price and/or value.

I thought diamonds are not scarce

The way you articulated it connected with a thought I've had (for over a year now):

AI is like oil, in that it's "burning" a resource that took geological timescales to accrue. Its value derives from the energy-dense and instantaneous act of combusting a fossil fuel, and in this particular part of the "terrain", it will be a local maximum for A Long Time.

Just like how it's taken absurdly long (still very much WIP) for human societies to prioritize weaning ourselves off of fossil fuels, I fear we are going to latch onto GenAI/LLMs pretty hard, and not let go.


I like the comparison with fossil fuels, that feels very accurate. I hope we do not develop the same dependency.

AI is to many professions what knitting machines were to textile workers (which is where the word Luddite originated[0]: people who opposed automation).

Some "value" will be lost, but other will be generated. People WILL lose their jobs as what they did can either be done by AI directly or an AI Agent can write a script/program to do some or all what they did.

[0] https://en.wikipedia.org/wiki/Luddite


They were not opposing automation. They were opposing the loss of jobs, wages, dignity in work and all that. And they turned out to be completely right. AI is just the next attack from capitalism in a long line of attacks aimed at workers. And this time it's coming for every single worker, given that the explicit purpose is to create AI that can replace every single person.

You think anything they did could've put the automation genie back in the bottle?

Barring massive violence against anyone even thinking about automating something, is there anything they could've done realistically?

We're at the same point with AI now, a bit worse really. People are using AI for _everything_ since it's practically free to shove it at any problem and get "meh" results.

Then they do the math on whether "meh" basically for free is better than "decent to good" for a liveable wage.

With knitting machines there was an actual monetary and time cost to getting them running so the adoption was slower.

AI adoption is moving at crazy speeds with no regard for anything. Some of the uses will stick and people will lose jobs, some will be scaled back because "meh" quality isn't sustainable practice.

Boycotting products and companies that use AI in a stupid "meh" way will work eventually, but for some fields it's here to stay because it's just better. Programming is one of them, there's no going back to "stupid" Intellisense or plain tab complete when even a local AI model powered system can pre-fill whole functions with 80-100% accuracy in seconds.


The Luddites weren't against automation, they were retaliating against the capital class. Their demands were to have dignified work, not for automation to go away. They attacked the machines because it was the tool the capital class used to deny them their livelihood.

> AI is just the next attack from capitalism ...

Technology, not capitalism.


Not at all. This technology could be deployed in many ways. We have been writing for 80 years about beneficial ways it could be deployed to help society. It is Capitalism that has decided to deploy it in the worst ways yet thought up.

'Because something happened in the past when we were fairly undeveloped, when almost nothing in society had been approached systematically, that same thing will happen in the future where we have systematically optimized everything we can. And we should bet civilization on that. Somehow, magically, everything will work out. And if you don't agree with magic, don't want to bet society on hopes of some magic solution appearing, you are the one denying reality and fighting progress'.

I think it's a fundamental misunderstanding of value. Things are useful regardless of their price; the price is speculative, but if there is a cost to producing something (from the earth, the mind, or AI), the price will not be zero. Scarcity is the basis of value only for things that have no other utility and cost, for example some crypto made just for pump and dump.

What about land which has high value because of its scarcity?

> It's [sic] promise is unlimited value to everyone on demand

No, it’s not. This is where your concept fails. AI is a tool, like any other tool. It doesn’t provide unlimited anything, and, furthermore, it needs human inputs and direction to provide anything. “Go make me a profitable startup from scratch” is not a useful prompt.


But that is exactly the promise that the heads of the AI labs are making. Sam Altman repeatedly says he wants to be able to ask it to "go discover a new branch of physics".

Perhaps it's not how you use LLMs, but it is the promise of AI.

For the record, I make a distinction between LLMs (a current tool that we have today) and AI (a promise of some mystical all-powerful science-fiction entity).

There is nothing intelligent about what we have today, artificial or otherwise.


> but if everyone can do everything without any effort, it is no longer valuable

It's called utopia.

But my issue with AI hype is that it's not clear how it will lead to "everyone can do anything." Like how is it going to free up land so everyone can afford a house with a yard if they want one?


Everyone productive and working can afford a house with a yard now. You’ll be a few dozen miles from others.

If you want everyone to be able to afford a house with a yard within walking distance of downtown Palo Alto, there aren’t enough of them for everyone that wants to do that, and AI (and utopia) can’t change that. Proximity to others creates scarcity because of basic physical laws. This is why California is expensive.

This is something I always wondered about in Banks’ post-scarcity utopian Culture novels. How do they decide who gets to live next door to the coolest/best restaurant or bar or social gathering place? Does Hub (the AI that runs the habitat, and notionally is the city) simply decide and adjudicate like a dictator or king?


In Ada Palmer's Terra Ignota, part of what transforms society into a utopia is the development of some kind of flying car that can take you anywhere in the world in under two hours, making borders irrelevant. This transit system is coordinated by a special group.

I would suspect The Culture would have some means to travel very fast. But you are right that it's never explained. In "The Player of Games" I think the main character lives in a beautiful house with an incredible view, and I always wondered, how did he get that house?

If you think about it, the problem could be solved even now: you could use fast trains to connect small cities and replace cars completely.


I think in that same book there was a scene about going far, fast. Doing so was still a huge energy expenditure and done rarely. The fact that it was done for the main character at all signalled that the Minds considered him important.

The Culture is a bit of a weird post-scarcity utopia. The space habitats are big and housing on them is plentiful, and basically just given away as toys to placate the fleshbags.


You reminded me that in the Culture series a Mind can teleport people through "Displacement". It's very fast, but at the same time I don't think it serves what OP wants, as it's very risky. It's like planes: we don't use them for everything in spite of their being very fast haha

It’s “very risky” in that it kills or fucks people up one in a million times. This is roughly 10x more dangerous than the average car trip in the USA, so, quite dangerous.

As TPOG is basically a scathing social commentary of the global west, the fact that something as low risk as driving to and from work for five days in a row was regarded as so dangerous as to only be undertaken in life or death necessity was not lost on me. Cars are insanity.
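For what it's worth, the "roughly 10x" claim roughly checks out, using public US figures that are my own assumptions rather than anything stated in the thread: about 1.3 fatalities per 100 million vehicle-miles and an average trip of around 9 miles give

    P(\text{death per car trip}) \approx \frac{1.3}{10^8} \times 9 \approx 1.2 \times 10^{-7}

so a one-in-a-million (10^{-6}) per-use risk is indeed on the order of 8-10x the average car trip.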


I think base case yes, Hub just assigns you something (appropriate, and accommodating your requests as much as possible), but highly desirable locations would be bartered/traded for favors/given to a friend.

I also think that in such a utopia the maximum density would just be a lot lower - no pressure to live near a job or amenities, better transportation, and it'd be easy to move.


In "look to windward", on an orbital, limited tickets for a concert for a famous composer are assigned randomly. A black market forms for those tickets, mostly based on barter as of course they don't have money.

Unless they’d outlawed money, some unit of account and medium of exchange must emerge naturally, even in a post-scarcity society.

Yeah they can, if they want isolation, no internet or water, and no friends around them.

> It's called utopia.

It's not. We already see this on social media: creating pictures and clips has become basically effortless, and the result isn't utopia, it's a massive steaming pile of worthless shit.


Sorry, but it's quite the opposite. It's dystopia, considering only the already rich really benefit from it.

Unironically, shouldn't the purpose of human life be turning dystopias into utopias?

Yes. The exact opposite is happening though. We turned from capitalism into late-stage capitalism. We feed the AI hype that's built upon the thievery of creative work while promising corporate leaders they can let go of their employees at some point. Not being able to purchase food for your kids is not utopia to me.

Especially combined with the AI companies focusing on the destruction of value of human creative output.

Yea, that's a shame really... Creativity was one of the only things that made me enjoy my everyday life. I just do everything offline now. Sucks not being able to discover new music as easily anymore, though.

Is water less valuable because it comes out of a tap almost for no money?

The answer is wholly dependent on where you live and how scarce water is.

Most in the west don't understand water scarcity.


My question can be asked anywhere in the world, as the availability in question is built into it.


Mmm, kind of. Scarcity is definitely fundamental under capitalism. But what do we do in a theoretical, post-scarcity society?

The digitization of information and media combined with the Internet and widespread use of electronic devices practically means that in some important ways, we are already grappling with post-scarcity in certain fields. 600 years ago, "books" and other texts were rare and valuable; then there was an explosive transformation with the invention of the printing press. But while much easier, there was still a laborious printing process, and a copy of a book was still a valuable thing. Now, a "book" can exist as an .epub and be copied perfectly a million times practically for free. It is similarly true for movies, photos, recorded music, news articles, etc.

As a capitalist society, we've really struggled with how to deal with this post-scarcity arrangement. We understand in the abstract that this stuff is important, and that creating it is a laborious process, but we do not really know how to assign value to copies of those works (because, once created, they immediately become infinitely abundant). The best idea we seem to have settled on is artificially creating scarcity by locking the digital works behind paywalls and subscription services that require an account, or maybe DRM paired with a EULA. But I think people generally, and the HN crowd specifically, understand that is a lousy arrangement.

Could energy become so abundant that it is also post-scarcity? Between fusion energy and advancements in solar, wind, and geothermal energy, maybe! It is a tantalizing vision to dream of, but what does that look like under capitalism?


I know what you're getting at, but for the Socratic sake of things, I have bad news! :D

Electricity that is too cheap to meter is possible today. I'm pretty sure that we are technologically capable of producing enough solar panels to supply reasonable energy needs (ignoring AI data center nonsense, for now). I think this is happening already in certain countries, but the economics of it get weird, because even as a public utility, you have to charge something. A market that drives prices down to almost nothing will then cease to exist, and powerful people don't want that to happen.

The real solution is that governments should just build out power capacity and provide electricity as a service to its citizens, like healthcare and education. The solution we'll probably get is some Dickensian torment nexus where orphans are pushed into a meat grinder and our electric bills go up.


I think it is a step towards a money-free world. If only we could invent a food printer.

There is almost zero credible evidence that this even vaguely resembles the path we are actually on. Sometimes theoretical models don't match reality, and this sure seems to be a good example of that.

Without money what system would be used? Barter? Communism? Warlords?

Only if you assume that the only kind of value is the ability to be sold for a price. Marx would have a word about use value vs exchange value.

> Marx would have a word about use value vs exchange value.

Sounds like a semantics trick. Value is value. Sure, something can have a different value if you exchange it versus if you use it. It can also have a different value if you eat it, or drink it, or smash it, or wear it, or gift it to a family member, or gift it to a friend, or gift it to a lover. "Exchange" is simply one way of use.


Interestingly, OP's idea that "AI destroys value" seems to come at least partly from the labor theory of value, which Marx accepted (as did most classical economists).

Unfortunately, the labor theory of value is self-contradictory. If you invent a new machine that replaces human labor, it will clearly produce more value, yet human labor is reduced. It follows that not all value can be attributed to human labor.

What this really breaks down is meritocracy. If you cannot unambiguously attribute the "effort" of each individual (her labor) to the "value" produced, then such attribution cannot be used as moral guidance anymore.

So this breaks the right-wing idea that the different incomes are somehow deserved. But this is not new, it's just more pronounced with AI, because the last bastion of meritocracy, human intelligence ("I make more because I'm smarter"), is now falling.

Addendum: although accounts differ on this, Marx seemed to struggle with LTV; IIRC Steve Keen's Debunking Economics shows Marx contradicting himself on it.


I disagree about innovation in automation creating a contradiction in LTV. LTV states that the exchange-value of goods is determined by the socially necessary amount of labor needed to produce them. Automation only means that the socially necessary amount of labor changes, so the exchange-value changes too.

Also, in Marx's theory, exchange-value is something different from use-value, the latter being unaffected by automation.
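To make the disagreement concrete, here is a toy illustration of the LTV accounting (the numbers are invented for the example): suppose a coat initially embodies 10 hours of socially necessary labor, and a new machine halves that.

    v_{\text{coat}} \propto \ell, \qquad \ell = 10\,\text{h} \;\to\; \ell' = 5\,\text{h} \;\Rightarrow\; v'_{\text{coat}} = \tfrac{1}{2}\, v_{\text{coat}}

A 10-hour working day now yields two coats instead of one: use-values double, exchange-value per coat halves, and the total value produced per day is unchanged. On LTV's own terms, automation lowers unit values rather than creating value out of nothing.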


Well, but under free market conditions, prices based on use and exchange value should equalize. So the paradox will appear, unless you have a planned economy.

Maybe Marx resolved the tension by converting the contradiction into a (wider) capitalism contradiction, and was happy with that solution. Whether it makes OP happy in the age of AI ("everything is capital and you're screwed if you don't own it"), not sure.


All value is derived from labor.

Marx agrees on this, Adam Smith agrees on this, John Locke agrees on this, heck even Keynes agrees on this; all (sensible) economists agree on this. If you do not have labor somewhere in the process, you do not have value.

“Equal quantities of labor, at all times and places, may be said to be of equal value to the laborer. In his ordinary state of health, strength and spirits; in the ordinary degree of his skill and dexterity, he must always lay down the same portion of his ease, his liberty, and his happiness. The price which he pays must always be the same, whatever may be the quantity of goods which he receives in return for it. Of these, indeed, it may sometimes purchase a greater and sometimes a smaller quantity; but it is their value which varies, not that of the labor which purchases them.” says Smith.

From this Smith concludes that “labor alone, therefore, never varying in its own value, is alone the ultimate and real standard by which the value of all commodities can at all times and places be estimated and compared. It is their real price; money is their nominal price only.”

It's a bit long winded (brief by his standards, though), but the essence is that labor ultimately determines if there is value. But, critically, not the amount of value. Marx would say that the amount of value is the amount of effort that goes into the commodity, something we now know not to be true.

Still, the point here with AI:

If the labor in the products and services that AI produces goes to 0, then the value of those goods and services must also go to 0.

As a brief example, look at chess or F1 racing. There are chessbots that can beat any human, there are F1 robots that can outrace and win against any human. Yet still, we find no value in watching a robot beat up on a human or on another robot. No-one watches or cares to watch those kinds of competitions. We only care to watch other humans compete against each other. There are many reasons for this, but one is that there is labor involved in the competition.


I basically agree all value is derived from labor, but a lot of modern economists do not.

There's an interesting book called "This Life: Secular Faith and Spiritual Freedom" by Martin Hägglund. Part 2 of the book is really concerned with the Labor Theory of Value, and it articulated it in a way I'd never really understood before. It's hard to summarize in a short post, but here's an essay that engages with the ideas in the span of a few pages: https://www.radicalphilosophy.com/article/the-revival-of-heg...

Really, I encourage people to check out the book. It was at times challenging, but always thought-provoking. Even when I found myself disagreeing (I have some fundamental disagreements with part 1), it helped me articulate my own worldview in a way that few books have before. It's something special. Anyway, the book really cemented and clarified my views on the labor theory of value.


> All value is derived from labor... all (sensible) economists agree on this

The labor theory of value is one of multiple theories of values [1]. And it is still widely debated.

> Marx would say that the amount of value is the amount of effort that goes into the commodity, something we now know not to be true.

Marx would say the _exchange_ value of a commodity is proportional to the amount of _socially necessary labor time_ required to produce it. Again, something that is debatable.

[1] https://en.wikipedia.org/wiki/Value_(economics)#Theories [2] https://en.wikipedia.org/wiki/Criticisms_of_the_labour_theor...


> An idea that has been living rent free in my head is that "AI is ultimately nothing but a pure destruction of value". It's promise is unlimited value to everyone on demand; but if everyone can do everything without any effort, it is no longer valuable. Value and scarcity go hand in hand.

1) I think it's the destruction of our value, as workers. Without an unthinkable change in society, we'll be discarded.

2) I think it will also destroy the unrealized value of not-yet-created work, first by overwhelming everything with a firehose of mediocre slop, then by disincentivizing the development of human talent and skill (because it will be an easy button that removes the incentives to do that). AI will exceed humans primarily by making humans dumber, not by exceeding humans' present-day capabilities. Eventually creative output will settle at some crappy, derivative level without any peaks that rise above it.


There is a very strong argument that if your work output can be discarded effectively in favor of a firehose of mediocre slop, then it is a moral imperative that we stop employing human beings in those roles as it’s a terrible waste of a human life.

The only people I see handwringing over AI slop replacing their jobs are people who produce things of quality on the level of AI slop. Nobody truly creative seems to feel this way.


Video scoring people are feeling it. I think a world with Hans Zimmer soundtracks, with Tron 2 with a Daft Punk soundtrack, is a richer world than one where soundtracks are machine generated.

Creative people actually do feel this way. There are huge discussions about it going on among actual creative people. Why are you hand-waving that away and saying that if they are discussing it they must not be adding any value, and therefore their discussion can be discarded? It's definitely a convenient position for you to take, but it doesn't seem like a real position when objectively great talents are taking the position you say only poor talent would take.


No movie studio would choose AI slop when people like John Williams or Hans Zimmer exist. That’s a ridiculous argument. It’s such a simple way to differentiate and compete. Whatever Williams cost, Lucas made it back 100x.

If AI gets good enough to replace them, then we can have a different discussion - but I don’t think you get truly great art without the full spectrum of human emotion and experience - that is, full AGI. In that case, all jobs are toast and we don’t need to have this discussion.


> No movie studio would choose AI slop when people like John Williams or Hans Zimmer exist.

I wouldn't be so sure. During the writers strike I heard the producers were hoping to replace a lot of their work with AI.

> but I don’t think you get truly great art without the full spectrum of human emotion and experience

The movie industry is in the business of selling tickets, and the TV industry is in the business of getting people to look at ads. Creating "truly great art" is not the priority, but sometimes happens because people are still involved.

Our choices as consumers are constrained. If they all get compromised at the same time, because the producers are following similar incentives, the market won't punish them.


But of course. You would only worry about AI if it will replace your job.

Factory workers didn't worry about cars, but buggy drivers did. Office workers didn't worry about factory automation, but factory workers did.


But don't you know, 60 years later, after WW2, the labor market worked out. After a few minor details happened in between. So things turned out fine for the buggy drivers, and you are freaking out just because you are the new buggy driver (things did not in fact work out for buggy drivers, but that's just a small detail glossed over, because things worked out for people after WW2; we have no idea how the buggy drivers' lives turned out in our example of everything working out for buggy drivers).

Things will be fine (it might take another 60 years and a few minor details, but it will all magically work out, like it did then. Not for the buggy drivers though. They were fucked; the ranks of broken people living on skid row grew so large it became a popular trope in children's cartoons).


You’re doing that thing that people sometimes do where they say something incredibly naive but do so in such a confident manner that they imply they are really this enlightened individual and it’s everyone else who’s dumb.

But this idea that you’ve put forth really doesn’t hold up to even the lightest bit of scrutiny when you actually start thinking about what this would look like in reality.


But this is just the full-on race to the bottom. Stated simply, this philosophy would be "there is only power, and those too weak to be effective at wielding it".

I think exactly the opposite of you because to me consuming from a firehose of slop is the most terrible way you could waste a human life.


Counting on:

- AI never advancing past a "firehose of mediocre slop"

- Consumers as an aggregate "choosing" quality over cost and availability

is a good way to never worry about AI, yes. But those aren't the assumptions this article or thread is written on.


> There is a very strong argument that if your work output can be discarded effectively in favor of a firehose of mediocre slop...

> The only people I see handwringing over AI slop replacing their jobs are people who produce things of quality on the level of AI slop. Nobody truly creative seems to feel this way.

Have you ever worked for an American company? They almost always choose slop over quality. Why should an executive employ a skilled American software engineer when he can fire him and hire three offshore engineers who don't really know what they're doing for half the price? Things won't blow up immediately; there's a chance they'll just limp along in a degraded state, and by then the executive will be off somewhere else with a bonus in his pocket.

Also, how many people are "truly creative" and how does that compare to the number of people who have to eat?

> then it is a moral imperative that we stop employing human beings in those roles as it’s a terrible waste of a human life.

And what should they do then? Sit around jerking off under a bridge?

There's no "moral imperative" to cast people off into poverty. And that's what will happen: there will be no retraining, no effort to find roles for the displaced people. They'll just be discarded. That's a "terrible waste of a human life."


The business sociopaths need to be able to shit out code and logos and voices and ads for their SaaS crypto casino social media feed fake therapy dating gig worker app, and AI makes it much cheaper to do so. The social value is there, AI lets us do more with less but it's always captured by the few people on top.

I feel a lot less pride in my creative work knowing it can be done much too easily with modern AI. It makes me less eager to create which is quite unfortunate.

I haven't felt too bad about my creative works being fed into training models. Taken by themselves, my creations are minuscule. But it's very apparent when I look at AI as a whole, having taken from everyone in aggregate.

I feel that.


> It makes me less eager to create which is quite unfortunate.

It's been the exact opposite for me. Coding assistance is a great boon towards productivity simply because otherwise I wouldn't work on any of my old ideas stashed in numerous note taking apps. It's way easier today to go from 0 to something like an MVP, and see if there's something there. If there isn't, not much is lost. But without these tools it would be 0 all across the board.


We watch a ton of stupid Hallmark Christmas movies during December; it's our thing. Very few of them are available here, and they don't have subtitles in our language, though English subs do exist.

So, how hard can it be to translate them automatically? Tried a few ready-made tools and they either just plain didn't work or made stupid mistakes due to bad prompting.

I fired up Claude and, within a few hours, had an MVP that could 1) rip English subtitles out of an mkv file and 2) shove them to the OpenAI API for translation in batches.

v2 took two evenings of me watching TV and bouncing around between Claude+Codex+Crush(GLM-4.6)

1) Get subtitles from original

2) Feed the full subtitle file to GPT-4o for a first-pass analysis. It finds names of people, locations, etc., decides on a single translation for each so they don't vary, and gives general context on the tone/genre of the movie for translation

3) Give gpt-4o-mini the first pass context file and a section of the subtitles in a loop

4) Save them as .srt

The results are SO MUCH better than most of the "real" translation we've encountered. I think a bunch of them were done by a zero-shot AI or a human who didn't give a fuck about quality and had to meet a quota.

And none of this would've been done without AI; there's so much crappy boilerplate on top of the few actually interesting bits that I would've just tolerated the bad ready-made solutions instead.
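For readers curious what that two-pass flow looks like in practice, here is a minimal sketch. It assumes the `pysrt` and `openai` packages; the model names match the comment above, but the prompts, batch size, helper names, and target language are invented for illustration, not the commenter's actual code.

    # Two-pass subtitle translation sketch (illustrative, not the original code).
    import pysrt
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    TARGET_LANG = "<your language>"  # placeholder

    def first_pass_context(subs: pysrt.SubRipFile) -> str:
        """Pass 1: ask the stronger model for names, places, and tone,
        with one fixed translation per name, to keep batches consistent."""
        text = "\n".join(item.text for item in subs)
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content":
                "List the character names, locations, and tone/genre of this "
                "movie, with one fixed translation for each name, to be used "
                "as context when translating the subtitles:\n" + text}],
        )
        return resp.choices[0].message.content

    def translate(subs: pysrt.SubRipFile, context: str, batch: int = 20) -> None:
        """Pass 2: translate sections in a loop with the cheaper model."""
        for i in range(0, len(subs), batch):
            chunk = subs[i:i + batch]
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content":
                        f"Translate these subtitle lines to {TARGET_LANG}, "
                        "one output line per input line, using this context:\n"
                        + context},
                    {"role": "user", "content": "\n".join(s.text for s in chunk)},
                ],
            )
            lines = resp.choices[0].message.content.splitlines()
            for sub, line in zip(chunk, lines):
                sub.text = line

    subs = pysrt.open("movie.en.srt")      # 1) subs already ripped from the mkv
    ctx = first_pass_context(subs)         # 2) first-pass analysis with GPT-4o
    translate(subs, ctx)                   # 3) batched translation via gpt-4o-mini
    subs.save("movie.translated.srt")      # 4) save as .srt

In a real run you would want retry logic and a check that the model returned exactly one line per input, since LLM output lengths can drift.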


I've been creating more as well since there's much less friction. This includes things I simply would not have gotten around to. But what I create feels a lot less unique ... and I guess that's important in my motivation to create.

I don't know how it plays out in the long run. Will the ease of creating result in increased creativity or will the lack of uniqueness result in a decrease?


I feel the same way. Things used to take so long just to do all the boilerplate. I would frequently get overwhelmed by the fact that I had to write each of the uninteresting parts to get to the interesting parts.

I've recently started using a Chrome extension of my own making[0] that allows me to block and highlight users on Hacker News. I remember trying to do this once a long time ago and it was so much work. I had to learn about the Chrome manifest's possible permissions and I had to format my options page nicely with CSS and I had to learn how to make the extension connect to a web page.

Same with these tools I've built for our family. My wife wanted to be able to give an AI some rough notes and have it polish things up using the notes she already had. She wanted to use ChatGPT. I knew the theory behind the thing:

* ChatGPT has custom GPTs

* Custom GPTs have Actions that call APIs

* If you have an OpenAPI spec, custom GPTs can understand how to call them

* If you have the HTTPS server, custom GPTs can access the endpoints

Previously, I'd slog through each step one by one. This time, on a day when I was watching my infant daughter, I managed to finish the whole thing fully functioning during a short period that she slept! Eufy Baby iOS app up in the corner of the screen, claude code on the left, ChatGPT in Chrome on the right. Knocked it out in an hour and my wife uses it every day.
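The pattern in the bullets above is simple enough to sketch. Here, as a hypothetical illustration (the endpoint, fields, and title are invented, not the commenter's actual app), is the kind of tiny service a custom GPT Action can consume; FastAPI auto-generates the OpenAPI spec, which is the piece ChatGPT needs.

    # Minimal HTTPS-exposable API for a custom GPT Action (hypothetical example).
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="Notes Polisher")

    class Note(BaseModel):
        title: str
        body: str

    @app.get("/notes", summary="Fetch the stored rough notes")
    def list_notes() -> list[Note]:
        # A real version would read from a datastore; stubbed for the sketch.
        return [Note(title="groceries", body="milk, eggs, flour")]

Run it with `uvicorn app:app` and FastAPI serves the spec at /openapi.json; point the custom GPT's Action configuration at that spec (served over HTTPS), and ChatGPT can then call GET /notes when the user asks for their notes.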

Astounding tool. Then yesterday I wanted to print an alternative mount for our baby monitor so I can place it in a specific place. I couldn't find the camera mount STL files anywhere. 15 minutes for my wife to find my calipers, 5 minutes measuring, then untold GPU hours but zero of my time as codex built me a test mount[2] while verifying it against a mesh-contiguity script.

And my wife is a graphic designer, so she iterated on our wedding clothes by using Dall-E to design the ideas she had, polished them up in Adobe CS, and then we got an embroiderer we know to put the design on my sherwani[3]. At first I thought that perhaps it was just us engineers who get a lot of value out of these tools, but my wife uses them to design things to make on the Cricut, or to help with stuff to 3d-print on our Bambu, and both of us have used them to come up with modifications to recipes, to surprisingly decent effect!

Dude, my life is like 10x better, maybe 100x better. Everything I dreamed of is no longer gated by my lack of specific skill. I am only gated out by my taste.

0: https://overmod.org/

1: https://wiki.roshangeorge.dev/w/Blog/2025-10-17/Custom_GPTs

2: https://wiki.roshangeorge.dev/w/Blog/2025-12-01/Grounding_Yo...

3: https://x.com/arjie/status/1855328068883353665?s=20


Perhaps I am a bit odd, but I don't understand this take. Why do you do what you do? Why do you create anything?

Since before ai all my tiny little works have been public domain and it tickles me pink when i see something of mine out in the wild.

Journey before destination.

With that said though, the people who press the button and fashion themselves creatives piss me off. Heck, anyone who has more than a passing interest in gen AI art disappoints me. After all, what is interesting about printing the Mona Lisa compared to creating your own shitty version by hand?


Journey and destination go hand in hand.

Radio amateurs used to be a thing. Because playing with radios is fun, but also because this provided a way to hear things that otherwise could not be heard.


They still are, it's just a full-on boys club where members are getting older and older.

New people are finding the hobby due to all of the stuff going on in the world, an amateur radio license gives you the ability to communicate massive distances without any existing infrastructure - which is super cool.

Digital mode radios are also bringing in people from programming backgrounds because they can move binary stuff over the air with software.


Perhaps I'm not the right person to reply to this, because I am just a beginner pixel artist, but I feel so much better when I finish my art projects when compared to the time I spent fiddling with stable diffusion. Maybe because I struggled so much to make it because of my mediocre skills?

I got the same feeling as when you defeat a boss in some soulslike game, it's frustrating but you feel so good when you're done with it.

AI art didn't feel special to me because you can generate as many as you want, I got bored very fast. I guess this is because I'm a process oriented person.


I'm the opposite. I am more prideful and I appreciate art more exactly because I know that it could've been done by AI but someone somewhere chose not to.

Do you feel the same way about automatic looms displacing crafters, eg, the Luddites?

Someone needs to maintain and set up those efficient brand-new looms. All I hear from AI is the promise that managers and owners will no longer need creative and managerial workers.

As many people as were employed as artisans? — are the new jobs on average as good as artisanship?

Or do we displace 9/10 workers to worse or no jobs while 1/10 gets a valuable one?

My understanding of AI is that it’s likely to represent the same 90:10 split — where some people operate those new AI systems, but most people are displaced to intellectual assembly lines. (Or unneeded, entirely.)


> As many people as were employed as artisans?

Counterintuitively, yes! Automation unlocked the birth of a worker class that could now afford the products made by automation, producing a virtuous spiral of growth. AI is breaking everybody's social compact. The workers' labour is used as training with nothing given back and the owners' consumers are unemployed giving you no market for your goods.


If the moral foundation is that the majority of people must perform economically valuable work to exchange for their survival, then perhaps it should be broken.

I'm cautiously optimistic, because perhaps AGI will find a way to implement that: if it is sentient at that level, then it would be rather difficult for it to be owned by some person or corporation. Obviously LLMs are very far from there and remain just tools for now.


Why would anyone ever cede their decision making over to a machine? I think people are way overestimating the impact AGI will have on a certain type of person. Nothing will ever be implemented easily and without force, even if some Meta AGI (TM) says it's a good idea.

It seems the public is getting tired of all this AI slop. No one wants "AI powered" products. How long will the bubble last?

AI is getting way too much credit in this article.

There are much much bigger forces that impact society in the way the author describes.


Go on.

AI could be a boon for mankind. It can be a useful tool. We could employ it in a manner which provides more dignity for workers. That is, let them work fewer hours, have more leisure time, etc. That necessitates something which will keep the powers of capital in check, and people don't seem to think that this is possible.

Corporations are just so large and powerful that people feel hopeless. But we could still get together and enact legislation which will override them. Nothing is impossible; it just takes some imagination and organisation.

Like Chomsky once said, if the peasants of Haiti could organise and overthrow their government and create a functioning democracy, then surely we can too, with far more advantages.


I see such positivity in your comment, but also every technology has promised to make everything easier and more convenient and so to give us more leisure time. What the evidence has shown is that the people who end up living the life of leisure are the ones who amass wealth and power, and everyone else is going to be stuck in the rat race because, well, we're living things. The rules we live by are: you can't win, you can't break even, and you can't even stop playing the game.

People who amass wealth are the ones working more hours on average.

It's up to us to create the future we want to see.

Haiti's democracy was never, in its storied 200 years of existence, functioning. France saw to it that it never was, until the damage was irreparable.

I'm referring to the ascendance of Jean-Bertrand Aristide. He was quickly removed, but it was a remarkable triumph of grassroots democracy.

Why not go to Haiti if it is so good there? No doubt the US (or basically 90% of other governments) is shit, but Haiti must be one of the worst.

The only way to solve inequality created by concentration of AI superpowers will be extreme violence, and I'm tired of pretending that's not the case.

You say that like inequality is a problem. If so, there's a really easy way to solve it: nuke the planet back to the stone age.

Personally, I'd rather have inequality if it means everyone can live a peaceful life. Let the rich have their yachts or whatever.

I don't see why the increased productivity provided by AI won't make things better, given that all of the ills of the world are caused by scarcity: that is, insufficient productivity.


> I'd rather have inequality if it means everyone can live a peaceful life

Thank God a large majority of people with ironclad convictions don't think like you.


They actually do, if you ask them better questions. "Inequality" is a boogeyman that has a lot of baggage attached to it by society, but if you were to ask people whether it was a good thing for millions of people to be lifted out of poverty if the cost was that one person became obscenely wealthy, most people would come down on the side of inequality.

"Inequality" is just an academic word for "Keeping up with the Jones'". Each generation has more material good, both necessities and otherwise, than previous generations. It's only through comparison that people are made to feel poor. Rather than look at trends of poverty and flourishing, people are made to feel cheated by not getting a slice of someone else's pie.


I don't think so, in fact it might be counterproductive. I think it could and should be done within existing structures. But it will require mass mobilisation and counters to mass propaganda.

This argument depends on the idea that someone’s creative work output being used for AI training somehow deprives them of benefit from that creative work output - the basic idea behind “copyright infringement is stealing”. This is not to agree that AI training is copyright infringement, just that it depends on the same concept of intellectual property.

I don’t subscribe to this basic idea. Copyrights are a legal fiction designed to prop up an industry. Somehow from that we went to the idea that creative work output is property. It isn’t. It’s a service. This is why “works made for hire” is a thing.

This is the same reason that reasonable people don’t believe that fanfic authors should be jailed.


I’m going to disagree with the premise. The value in AI won’t come from providing AI but from using it.

The “knowledge cutoff date” is 12 to 18 months ago for models, which essentially means that copyright has, in some ways, shrunk to that period, since designing around it is now very easy.

Given most people live on what they produced recently and not 20 years ago, there's an argument this makes access to knowledge and techniques fairer. Constant new creation is required to obtain a markup, and that drives forward productivity.

In other words it’s the copyright/patent argument all over again. And it’s perhaps a debate we need to have again as a service society.


Perhaps sovereign compute is the answer? We have open weights models, as a sort of ‘public commons’ that democratises that layer, but compute is still the bottleneck for big companies..

> AI renders their disagreement moot by violating the premise they shared. When your talents become training data harvested without consent, when your creative work becomes parameters in a model, you’re being used as a pure instrument.

I wonder if the "vibe coders" can fix their own code. Or find a very subtle logic error in it. Hell, I wonder if someone with access to AI can fix a bicycle if they haven't even touched a spanner.


Feels like AI is speed-running us straight into last man (antithesis of Übermensch), where the algorithms make the values and we’re just the training data.

What do you mean by last man?


> “I own the compute infrastructure” or “I owned the company that bought the AI systems early”, on the other hand, is just being in the right place at the right time. It’s not about your talents, effort, or contribution. It’s pure capital ownership divorced from any human quality.

The article is not about AI, it's about this stage of capitalism. Unlike the author, I would argue that it is very much in line with the consequences of Robert Nozick's thinking. On the other hand, the way China is doing its AI development and rollout seems more aligned with a Rawlsian notion of distributing the benefits.

I can perfectly understand why everyone in the West is so jaded and worried about how AI benefits will be distributed. That doesn't change the fact that, like any technology, it can also be used to make everyone wealthier, like industrialization also did.


My take on this: ultimately the industrial revolution has been beneficial to the majority, but when it occurred it triggered violent changes that pushed many communities toward hardship. From artisans owning or co-owning their means of production, with significant creative license over their work and the possibility of taking their own initiatives… to the efficiency of atomized labor, which strips away the creativity of most and devalues the work. The hardship thus created provided more people ready to join the ranks of devalued workers… The industrial revolution was, first, a great means of consolidating wealth; it immensely benefited a few and only later "trickled down".

I see the AI revolution the same way: it will violently break a significant portion of knowledge work, remove creative licenses, and devalue that work. Similarly to the industrial revolution, we can now atomize knowledge work as much as we want. Through this we can make the workers as easy to replace as we want them to be. This shifts power dynamics and allows the consolidation of wealth. There are strong market incentives for this and no regulations at the moment. I don't see how we can avoid this. I don't see those pushing this revolution as great humanists, sadly.

Yet ultimately, I think this will damage the middle class too much, creating instabilities. From there, there is a chance we can reclaim the benefits of this revolution for all. But it will be a fight, and I believe the transition will be ugly.

Yeah, I mostly agree with your opinion. It's true that in the short-term there will be tons of disruption and likely more bad than good will come out of it. I guess my point was that ultimately how the upsides and downsides of a new technology will be distributed depend a lot on how societies organize themselves. The industrial revolution allowed the concentration of wealth and robber barons, but also unions and social democracy as a reaction to that.

I agree, it really depends on how society organizes. For this reason I tend to be a bit frustrated by how AI narratives are often framed. They are often described as only technological, as if a tool could wield itself without anyone behind it to operate it. We talk about the impact of AI as if it were inevitable progress. These issues are in fact deeply political, but the valley mentality doesn't like to see it that way. Probably because it is uncomfortable to see our work as contributing to a complex system with winners and losers; it's much nicer to see ourselves as stewards of progress. The question "the progress of what?" is often pushed into the background.

No it doesn't, because social media already did that. There's nothing left to be broken.

I like the one upping of cynicism here

I think that a lot of people are fine with intellectual property theft because most people don't have much valuable intellectual property.

No one steals from them.

So far AI companies have been settling by throwing VC cash at it, so the vocal ones that do have IP will be paid off.


> Right now, even people who reject meritocracy understand its logic. You develop rare skills, you work hard, you create value, and you capture some of that value.

The premise is that AI no longer allows one to do this, which is completely false. It may not allow one to do it in the same way, so it's true that some jobs may disappear, but others will be created.

The article is too alarmist, written by someone who has drunk all of the corporate hype. AI is not AGI. AI is an automation tool, like any other that we have invented before. The cool thing is that now we can use natural language as a programming language, which was not possible before. If you treat AI as something that can think, you will fail again and again. If you treat it as an automation tool that cannot think, you will get all of the benefits.

Here I am talking about work. Of course AI has introduced a new scale of AI slop, and that has other psychological impacts on society.


Yes, but I don't think it's about the present, necessarily.

AI is still shit. There are prompt networks, what some people call agents, but presently models are still primarily trained as singular models, not made to operate as agents in different contexts, with RL on each agent being used to improve the whole indirectly.

Tokens will eventually become cheap enough that it will be possible to actually train proper agents. So we will probably end up with very powerful systems in time, systems that might actually be at least some kind of AGI-lite. I don't think that is far off. At most a decade.


Cannot read. Cloudflare: "Just a moment..."

What does the author suggest is the "moral foundation of modern society"?


In the past few decades, I learned to be skeptical of any piece of "true" media because it could easily be photoshopped by an expert. Yet people still gave credence to a damning photo or soundbite shared around. AI has finally made it so easy to fake things that (I hope) people will re-learn skepticism of all they see/hear.

Likewise, I've felt like the meritocracy story that the author sets up as the "moral foundation" has heavily attenuated in this century. It's still used as the justification in America (I'm rich because I deserve it, you're not rich because you didn't work as hard/smart as me) but it feels like that story is wearing thin. Or that the relative proportion of the luck / ovarian lottery aspect has become so much larger than the skill+hard work aspect.

The trend of the rich getting richer, of them using their power to manipulate the system to further advantage them and theirs at the expense of everyone else, existed before AI burst into the public in '20-21. Maybe, like the fake media, it will finally be the kick people need to let go of the meritocracy trap* and realize we need a change.

* I like the notion of meritocracy; it just seems like America has moved from aiming for it to using the story of it as an excuse, or an opiate, for the masses.


This stuff is all a bit much. Fan fiction sites are all “creative content lifted without your consent”. You think J K Rowling consented to Harry x Hermione slash fiction? Or Harry Potter and the Methods of Rationality? Absolutely non-consensual.

All of this stuff is clearly a highly cherry-picked gymnastic exercise to justify a pre-existing position. Classic Elephant and Rider stuff.

It’s the same as support for the snail darter. Same as the story about how groups shouldn’t go out during COVID but BLM protests are fine. And if by some incredible chance it had been the FSF or Brewster Kahle who had produced GPT then you guys would be talking about how information should be unchained because creative work belongs to all Man.

Couching this blatantly motivated reasoning by quoting past philosophers is just such middle-brow woe-is-me whining. Take one look at yourselves in an honest sense. Do you have any principles or will you slave them all to your outcomes?

And now I must repeat the litany lest one assume that my opposition to this kind of balderdash be construed as some kind of political tribalism:

* I don’t think we should destroy endangered species

* I think COVID wasn’t a hoax and does spread in large groups

* I think people have a right to protest if they are discriminated against and that includes the black people at BLM

* I love the Internet Archive and have donated to them


> Same as the story about how groups shouldn’t go out during COVID but BLM protests are fine.

i don't think you have a good grasp of why it was ok for outdoor protests to happen while people should not have gone into crowded buildings. the chance of you getting sick at a protest is much less than the chance of you getting sick at an indoor gathering at, say, a club. getting sick is not binary, on or off. it's exposure time and magnitude vs. your immune system's defenses.


I don't think you have a good grasp of modeling disease spread and are repeating things you read online without comprehension. Repeated exposure over multi-day protests in large groups at 3 people / sq. m, with multiple tent structures shared between people, is not the same as 10,000 people climbing K2 one day at a time.

You're ignoring the fact that, like buying groceries, going out to protest wasn't something you could simply decide not to do. BLM protests and gathering indoors for fun are not equivalent.

You were literally forced to go protest? Interesting.

I think you're mistaking IP for creative effort. Certainly there must be a reason people read, even search out, this stuff beyond the mention of a well-protected trademark.

I see. What things that I said are no longer coherent when one makes a distinction between IP and creative effort?

Well, take an alternative-history novel: is it just ripping off a history textbook, or is there more going on?

How about an exact duplicate of all the characters, except they have sex with each other?

So, because of certain tech patents, patents and IP are bogus in general? What's the point, then?

(Meaning, if we make generalising from the worst examples a virtue, there will hardly be an argument left. E.g., since there are buggy programs, LLMs are just yet another bug.)


I'm already talking about how information should be free. The idea that a reader is infringing on the copyright of a work by reading it and learning is ridiculous.

At the same time, if someone designs a robot that prints out copyright-infringing material out of the blue, then they are infringing copyright every time it does so.


You forgot to provide a counter-argument to the author's position while you attacked them personally.

AI & social media are only exacerbating the decline of morality in society, spreading it and making it more visible. Morality has been "breaking", objectively speaking, for at least a century, most noticeably with the advent of postmodernism.

This piece is wildly optimistic about the likely outcomes of AI on par with the smartest humans, let alone smarter than that. The author seems to think that widespread disbelief in the legitimacy of the system could make a difference in such a world.

This is a strange article. AI is the best thing to happen to meritocracy. Previously, knowledge was gatekept. Pretty much everyone has access to ChatGPT in India.

You used to need the privilege of knowing the right people or knowing the right sources. But you don’t need that anymore.

What honestly could be better for meritocracy than AI?


An AI that isn't a subscription service hosted in the USofA

I don't get it. What does the USofA or a subscription have to do with anything?

AI is the ultimate "someone else's computer".

If you're doing work in The Cloud and the servers are in America, you can still move all that to physically reside in whatever jurisdiction you live in. The tech is there.

But if you build stuff on top of Anthropic, OpenAI or Google AIs, their data resides, and will always reside, in the US, which has very specific laws that let it grab data from anywhere, and nobody can say shit about it.

There's no realistic way for anyone to run "ChatGPT" at home. You can get pretty close with the 200B++ param models but they're still just a pale shadow of what an acre of industrially cooled compute will do in comparison.

But if your risk modelling doesn't flash a big red light about building a business on top of someone else's hardware and software that is physically in a currently very unstable and legally unreliable country, that's fine.

I've been using my VC subsidised LLM credits to build local-first systems I can still use when either the money runs out or the political craziness crashes down.
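For example, a minimal local-first sketch, assuming an OpenAI-compatible server such as Ollama or llama.cpp running on localhost (the port and model name are placeholders for whatever you actually serve):

    from openai import OpenAI

    # Point the standard client at a locally hosted, OpenAI-compatible
    # endpoint (Ollama and llama.cpp both expose one); nothing leaves the box.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    resp = client.chat.completions.create(
        model="llama3",  # placeholder: whatever model you host locally
        messages=[{"role": "user", "content": "Summarise this design doc: ..."}],
    )
    print(resp.choices[0].message.content)

The same code talks to the big hosted APIs by changing two strings, which is exactly what makes the fallback plan cheap.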


But what does this have to do with meritocracy???

Is it a meritocracy if you need to pay a monthly fee for a chance to access it?

Someone in northern India definitely can't afford a Max tier Claude subscription, no matter how smart they are. Hell, even many so-called middle-class people in "the west" can't justify the cost.

That's not a "meritocracy" where people advance purely on merit.


This argument forgets one very important fact:

"We’re in a narrow window where institutional choices still matter, where there’s enough distributed economic and political power to reshape who controls AI infrastructure and how its benefits flow."

China has AI infrastructure and has made it very clear how its benefits flow: not to the people, "despite" China being communist. In other words, we can't do this unless we stop China first, or China cooperates honestly, which it has never done. And if you don't do that, your institutions can only sabotage our side.

You want to exercise control? Fine, but first you need to have that control.


What philosophical foundations are left? Even at the very top the president is corrupt and morally depraved.

Leaders are rarely shining moral beacons.

Are you suggesting that no meaningful distinction can be made in terms of morality between Trump and, say, Biden, Obama, Blair, the Bushes etc.?

Some people will point to their supposed crimes or immoral actions while in office, having to do with execution of their duties as president. Large countries tend to do many questionable things. But the current US administration is pretty unique in terms of its corruption, avoidance of accountability, authoritarian and fascist tendencies, etc.

It's not a useful contribution to the discussion to essentially claim that "they're all the same" without making some sort of case for it.


I'm suggesting that the morality of our leaders is not a barometer for the morality of the populace.

In a democracy there’s a correlation.

> But the current US administration is pretty unique in terms of its corruption, avoidance of accountability, authoritarian and fascist tendencies, etc.

I've mostly been reading that since the Bush years. Definitely said against Bush, Obama, Biden, and Trump. In fairness, I don't remember it about Clinton.

That you don't agree with a politician doesn't make him or her particularly worse than others.


He’s pretty objectively corrupt with regard to universal values of fairness, honesty, empathy, etc., by an order of magnitude compared to a lot of recent presidents.

I would argue that yes, there’s no meaningful distinction between Biden and Trump. (And perhaps that Trump is more moral than Biden.)

I’d find your argument more persuasive if you outlined what you believe Trump has done worse than the others, rather than arguing by name-calling.


There’s no point in discussing anything with someone that holds your position, against the enormous amount of evidence to the contrary. The best we can do is take action to prevent you from exerting any influence over society.

So it should be easy to show me, then. Go ahead, be specific.

Rather than accuse me of bad faith based on stereotypes.


That would be a waste of my time. There are more effective ways to fight positions like yours, and I'm involved in several of those.

Racist, dishonest, nepotistic, war mongering, anti meritocratic, fraudulent, vindictive, unscrupulous, corrupt, divisive, pedophile etc.

Morality comes from the grassroots, since power corrupts.

As long as we wait for a godlike leader to rescue us, the end result is the same as with Stalin, Hitler, Trump, Thiel, Epstein, Musk, ...

The godlikeness can come in many forms, though: political (Trump), propaganda (Musk/Zuck/Thiel), or via extortion material and money (Epstein).

A good litmus test for a decision maker is the universal ethical principle mentioned in the article, made concrete: run everything through the lens of "what if all eight billion current humans, and future generations too, did this?"

Right now nobody dares to do this, but once we start asking "who's afraid of the narcissist zillionaire?", the world starts to make sense and the solution appears.


Power reveals rather than corrupts: it's easy to act morally when there are consequences (real or imagined) for not doing so, but you see someone's true self when they know they can get away with it.

For example, this is why the way someone treats service workers is a good indicator of someone's character.


Yeah, like a guy at Walmart versus a cop: the leeway to mess with you differs by orders of magnitude.

Yup.

I'm looking at this through the lens of moral development, where:

The pre-conventional level is the narcissistic me-me-me level, which seems to dominate geopolitics and tech.

The conventional level is most of us, the sheep. This level follows the loudest crowd, which right now is the pre-conventional one.

The post-conventional level is the few who can do standalone thinking and morality.

Most conventionals can, though, understand the difference, and the outcome we're headed toward under the pre-conventional human gods; but we need to build normalcy for the post-conventional ones together and make it structural.

My hunch is that a first step could be to start the discussion on what is excessive at the personal level: consumption, wealth, political power.

Something like what Mamdani or Polanski have shown, only more blunt. The majority of people are waking up to the fact that the current trajectory means the end of the world: extinction, after a short period of accelerationist-dystopian hellscape.


Ownership of things by humans has never been a settled question. There is no ideal or correct model of ownership, as ownership itself is unnatural. People have gone through many models over the centuries: family/community ownership, monarchies, socialism, capitalism, etc. Capitalism is the worst of all, allowing extreme exaggeration of the talent differences between people. AI is like going back to monarchies.

AI is exposing the myths of talent and erasing differences between humans, making them a uniform array of subjects. However, this erasure comes at the cost of a return to a monarchy-style economic model, in which wealth moves from the common population to the owners of AI, the neo-monarchs.


This is an interesting take that I don't think I have come across before. Thanks. Would you happen to have any further reading material on this topic either by yourself or others?

James C. Scott’s Seeing Like a State is a useful lens here. Scott argues that modern states flattened human complexity into legible categories so they could govern: surnames, maps, censuses, standardized occupations. You lose a lot of nuance, but in theory the trade-off is that the state can then build institutions that serve the public.

AI feels like a more extreme version of that flattening, but without the civic purpose that justified it. You end up with legibility without legitimacy. That’s the part I think we don’t have a good framework for yet.


Thanks I'll pick it up and finally read it. I've had it on my to-read forever but have never got around to it.

Modern society has no moral foundation.

So much of this article rests on two implicit tenets: 1) information can be "owned", and 2) copying information reduces its inherent value (which, as I'll explain, follows from the first).

So as a preface, I would argue that in the marketplace for information (ideas, content, works, etc., if you will), value has two components: 1) utility value, and 2) scarcity value (which itself has two parts: availability value (the information horizon [0]) and saliency value (preference shaped by that horizon [1])).

The rise of copyright over the last few centuries attempts to limit commerce in ideas, based somewhat loosely on the concept of (real, physical) property law, itself a glorified, codified extension of the animal behaviour of territoriality, a.k.a. survival by means of securing resources. Here, ownership is a moral value autogenous to an idea's sui generis expression on, by, or through a recorded medium. In other words, the mere act of recording an idea in a "creative" fashion brings with it a "moral" ownership of that idea. Of course, the "creative" and "moral" parts, from which the law prescribes and proscribes these limits, are debatable. The legally mandated "monopoly" on an idea (even though the marginal cost of replication approaches zero) provides this scarcity. So in the end, this scarcity, and the resulting "rent seeking" (corporations charging, typically over time, more than the cost to create and distribute information), are in effect value (read: wealth) redistribution.

So, what is more "moral"? Using a constructivist approach, which is more socially acceptable (and hence far more likely to be codified into law): broad access to information distilled to the point that the sui generis expression is "removed", so that concepts (which in general can't be copyrighted) are freely available for transformative use; or extending the "moral right" of copyright, now that information distillation is so cheap, to cover the idea itself rather than the expression of it? Or are we seeing a "breakdown" in the codification of the law, where the granularity of the spectrum becomes so fine, and enforcement so costly, that the law loses its "value" in reducing the transaction costs of daily social interaction in the information sphere, and we revert to pre-copyright behaviours, i.e. information hoarding?

[0] https://www.researchgate.net/publication/261773523_Informati... The concept is quite simple: there is a limit to the amount and quality of information that a) a human can get access to (and, following from that, what they prefer / value), and b) relatedly, the amount that can be synthesised for actual use.

[1] https://www.sciencedirect.com/science/article/abs/pii/S07408...


Or maybe it's just a bubble and once it pops it will be seen as a useful tool for some things, and the world will keep going on.

I opened 10 PRs in 20 minutes today and it felt great. If I extrapolate that to everything else with a straight line then everything looks good /s

I fixed a bunch of nitpicky PR comments just by saying "PR 69 has some comments, fix the issues" to Claude.

It opened the PR automatically, took the comments and applied the fixes (renaming a few things, simple refactoring and added a few unit tests).

Effort from me: maybe 5 minutes going through the PR comments and deciding they were super simple monkey work and 10 seconds typing instructions for Claude.

Then maybe 5 minutes of checking the work, commit and push.

All of this could've been done with a GitHub-integrated Claude, tbh, without me in the loop beyond accepting the changes one by one.
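If you wanted to script even that last loop, here's a rough sketch (assuming the gh and claude CLIs are installed and authenticated; you'd still review the resulting diff yourself):

    import subprocess

    # Pull the review comments for PR 69 with GitHub's CLI...
    comments = subprocess.run(
        ["gh", "pr", "view", "69", "--comments"],
        capture_output=True, text=True, check=True,
    ).stdout

    # ...then hand them to Claude Code in non-interactive print mode.
    subprocess.run(
        ["claude", "-p", f"Fix the issues raised in these PR comments:\n{comments}"],
        check=True,
    )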


> maybe 5 minutes going through the PR comments and deciding they were super simple monkey work

There’s an old joke about a mechanic banging a hammer on a car and charging $1000 for it.

Man, we engineers severely undervalue each other's worth, huh. One thing all this AI hype has taught me is to be very conscious of what my value is. It's a hard habit to break.


I've noticed how much of my daily work is just busywork that can be automated today.

I kinda enjoy it; I can focus more on planning, thinking, networking and solving actual problems rather than typing 1000 lines of unit tests by hand, making a stupid copy-paste mistake somewhere while trying to speed things up, and then spending 2 hours debugging it. (True story.)



