I recently activated my account there and went to the forum for my country. It had already been taken over by moderators. Then I looked at the mod and saw he had claimed all the real estate available on Reddit related to said country. So in a way, he was probably the first account there and became god-king for eternity of the subreddits related to the country. I had no idea who he was, what he stood for, what his plans were for his newfound digital real estate, etc.
I feel like the moderated subforum is a fundamentally broken system for dealing with content. I much prefer the Federated / X / Instagram approach where I can deal with users and have the tools needed to curate my own content, instead of relying on some ideologically captured no-name account that chooses what I can or cannot see based on whims.
Your country wouldn't be Norway by any chance? I remember that on Reddit there was one powermod who was dead-set on owning every Norwegian-language forum, and every name that could potentially be a base for people trying to escape him.
You need both. LLMs can, I think, do the bulk of removing posts that break community guidelines, but you need moderators to define and adjust the guidelines. Most would also like to have a human to escalate a dispute to.
Google is famous for having almost solely automated support, and it absolutely sucks at doing almost anything. AI-only moderation would go the same way.
> but you need moderators to define and adjust the guidelines
The comments above you are suggesting that global guidelines are unnecessary. Instead, they suggest you don't need moderation at all when LLMs now give us the technology to filter out the stuff individual users don't want to see based on their own personal policies. I am sure you can come up with reasons to dispute that, but "you need moderators to do the thing you say is no longer necessary" doesn't add to the discussion.
The absolutely broken moderator system of Reddit made me leave it forever after being a regular user for more than a decade. The “god-king” thing simply doesn’t work.
Same here. The power-tripping of mods ruins reddit. Most don't care about the community as much as they care about exercising their absolute power over users.
And even if it does, the mods don't have real control to moderate communities either, so you get the worst of both worlds. I don't go to most queer reddit communities anymore because a lot of them have bots that downvote trans-positive posts, even if the community is specifically meant to be inclusive. There's nothing to couple active participation to voting weight or anything of that kind and voting is not considered "brigading" by reddit if the coordination happens off-site (at least not in a way that'd lead to any enforcement action).
It makes a great propaganda machine though, given humans have a tendency to measure their own opinions against social cues.
I still haven't been able to figure out how to make an account without it being immediately shadowbanned or normalbanned. Tried again the other day, it was something in between where logged-out users could see it was banned but I couldn't.
You need to ditch and replace all your devices and acquire a new phone number. I'm serious. Virtually all large websites these days employ a lot of fingerprinting and persistence technologies.
And yes, ditch them. Even well over a decade ago, Wikipedia of all places already employed IP address matching to link sockpuppet accounts. You must be extremely careful of never using any device that was associated with your old accounts on the same network as the devices associated with your new account. And that includes devices only seen by association.
It happens to all new accounts. It's known that new accounts are shadowbanned almost everywhere until they are 30 days old and have farmed some karma on a very small set of subreddits that don't shadowban new accounts. It's shocking they ever get any new users, really; as far as a non-technical new user knows, nobody ever reads their comments for some reason.
It's full of bot slop pushing political propaganda; it's possible those bot farms have monetary agreements with Reddit that allow them to create accounts.
My boss uses Reddit some. I'm banned. At the shop, we use the same IP address (and we do not use IPv6 there).
I tried to log in with a ~10-year-old account that I'd never commented with. A perfect Beetlejuicing moment had arrived and I just wanted to play the game with a short, snarky comment.
It logged in fine, and then: Insta-ban, just like that. (Maybe I should have used a new browser on a new network that I've never used before, but whatever -- nothing of value was lost here.)
Meanwhile, the boss man's access continued unimpeded; this suggests that it is a rather targeted contagion.
And it seems to follow the systems, not the networks.
(If anyone wants banned, just let me know. I seem to have a well-poisoned system to play with.)
Your concept is certainly interesting to think about, and I think it is a clever approach with a high cromulence quotient. I like it quite a lot.
But in this world, the result of the approach would have no value to me. At the end of the day, Reddit is terrible. From my perspective, it can never be anything other than terrible. Studying its ways cannot redeem it, nor improve my life in any way. Seriously, fuck those guys.
So while I appreciate the suggestion, I must respectfully decline.
(Unless, of course, the result would be useful for others such as yourself. If that is the case then let me know and I may elect to spend some time on it.)
>They banned the_donald (which, yes, was spammy, but it seemed to be organic
I used to frequent /r/t_d when it was created, before the Republican primaries for the 2016 election. I visited every day because I was absolutely astonished at the gigantic marketing effort behind it. I had never seen anything like that before, and haven't since. It probably had a team of dozens or hundreds of Russians behind it, creating memes and shitposting on a payroll. And it obviously was 100% inorganic.
I'm actually ok with reddit banning it and taking sides in political conflict. I just wish they didn't pretend to be unbiased when it's made it a useless site for discussing current reality.
Edit: to be clear, I'm more concerned about how russia was basically banned from the site but worldnews itself seems like the primary fountain of western astroturfing on the internet. No matter your opinion of putin, that is extremely unhealthy for productive discourse. I don't care about american domestic politics.
Who decides what "hate" is though? Does it switch with every administration?
Free speech, including "hate speech" should be allowed, as long as it doesn't violate the law (calls to violence, etc)
The particular problem is that said speech quite often leads to calls for violence. And when a few people get banned for that, you get dog whistles: sentences that are encoded calls for violence. Eventually the new slang is recognized as violent, and then it looks like the site has allowed calls for violence for months.
A short version of this is, if you let a nazi come to your bar, you have a nazi bar.
Calls for violence are free speech. Calls for "imminent" violence that serve to coordinate it have been decided not to be.
When you claim that calls for violence are not freedom of speech, it's a slippery slope that leads you to absurdities like speech that could "lead" to calls of violence are not freedom of speech, or that secret codes that could be interpreted as speech that would lead to calls to violence are not freedom of speech, or that violent-sounding slang that is eventually recognized as being encoded speech that would lead to calls of violence isn't freedom of speech, or that people who own bars who host people who use violent-sounding slang that is related to secret codes for speech that could lead to calls for violence are nazis.
And since nazis deserve to be violently suppressed...
I agree that free speech is free speech, but the private org that runs the platform has a veto; the assumption that these platforms are the equivalent of stepping into the street to stand on a box is not realistic.
Even HN is only quasi-free speech, there are rules that will get one censored.
If you love freedom, there are mailing lists and other platforms, but they aren't as high on dopamine and the audience gets a little more sketchy.
Even the US never had free speech—there was always stuff you could/can say to get you gagged by the courts or thrown in prison. Your freedoms always stop at impacting other people.
Somehow we just gave business owners more freedoms than we gave everyone else....
I guess I don't have a problem with a social media site blocking speech, we don't have to use them, if they are too draconian, nobody will.
But IRL it gets harder if ISPs get involved. I'm more interested in democratized platforms with privacy baked in, if you want free speech you might have to at least give the orgs you depend on for access plausible deniability
"Free speech" means you have freedom from retribution from the government. It doesn't mean your fellow citizens need to stand there and listen to your shit, nor does it mean you are entitled to any sort of platform or megaphone. It means you can scream on the side of the road into the ether and you won't be arrested for it.
> "Free speech" means you have freedom from retribution from the government.
No, it doesn't. The concept of "free speech" isn't limited to prior restraint; you're mistaking it for the dominant precedent in judicial interpretations of the 1st Amendment of the US constitution.
> It doesn't mean your fellow citizens need to stand there and listen to your shit,
Nobody asked you, or claimed this.
> nor does it mean you are entitled to any sort of platform or megaphone.
You should look up common carrier provisions. If we had to depend on your interpretation of law or morality, they'd be able to shut off your electricity for speech violations.
> It means you can scream on the side of the road into the ether and you won't be arrested for it.
If that's all it meant, it would be dumb and useless. What's more, it doesn't mean that, you can be arrested for screaming on the side of the road.
I agree on all counts. But the_donald was banned for mostly on-topic posts. Reddit is a private business and they can do what they want, but there are consequences to their actions too. Reddit has become an echo chamber now.
It's either some personal unquenched thirst for power, or he thought the new Digg would be as popular as it was ~20 years ago, and that he'd be able to control the content submitted and get paid for "promoting" it.
I've seen something similar over the last ~17 years: the same bunch of terminally online accounts uploading content from our local media outlets to country-related subs and local Digg-like sites, both active ones and ones long defunct for 10 years now. Some of those users even appeared on Mastodon and Bluesky.
Social link aggregators were created for people to share their favorite links and places from the Internet so others could see them, have fun, expand their knowledge, and so on. For me it was the cherry on top of the Web 2.0 period, when everything was fresh, beta, and innocent. That lasted for a while, until other people and entities figured out that such sites could be used to promote their content and insert ads. The next stage, which continues today, is opinion control by "curating" content and/or reactions in discussions: still done by humans, but with an increasingly prevalent presence of convincing bots.
Reddit itself lost its impartial and independent status a while ago. Big subs related to media franchises or big corporations are heavily controlled, to the point that it's impossible to submit critical content. It's all a happy world seen through rose-tinted glasses, or as some say, toxic positivity.
There are still niche places where moderation is limited, but as I said last time, from my own experience: such subs were targeted by bad actors who, by submitting forbidden content, tried to trigger lockouts so they could later take them under their own control.
HN isn't free of some of these issues either. While discussions still remain at a good level (though degradation to Reddit levels already happens), there's no control over content: there are accounts that do nothing but upload links every few minutes or hours.
I'm not sure it's possible to have link aggregators or multi-thematic forums free of such... issues. A similar problem with establishing "real estate" happened on Lemmy when part of the userbase decided to abandon Reddit due to controversial changes.
An outstanding summary of the most important trends on the web. Yes, it's being turned into a one-way propaganda-pushing machine, much like the mass media before it. AI and bot farms made that transformation cheap and ubiquitous; the profit motive, aka bribery, takes care of the rest.
I don't think it's an unsolvable problem although new legislation is continuously being considered in order to make the solution harder. Still, not impossible.
A well moderated forum (like HN) is great. I don't have time for the signal-to-noise ratio of X.
IMHO Reddit would be better if it had AI moderators that strictly follow a sub's policies. Users could read the policies upfront before deciding whether to join. New subs could start with some neutral default policy, and users could then propose changes to the policy and democratically vote on them.
If the policies are public, there's a lot more transparency. E.g., my city of millions of people has a subreddit. The head mod bans people for criticizing a certain dog breed. This "policy" is pretty opaque, but if the AI-enforced subreddit rules say "thou shalt not mention the dog's breed when commenting on articles about someone being mauled to death", more people would be familiar with the rule (and perhaps there would be more organized discussion).
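To make the idea above concrete, here's a minimal sketch of policy-driven auto-moderation. The public policy is a list of named rules; `violates` stands in for an LLM classifier and is implemented here as naive keyword matching purely for illustration. All names (the sub, the rules, the phrases) are made up.

```python
from dataclasses import dataclass, field


@dataclass
class SubPolicy:
    """A sub's public, user-visible moderation policy."""
    name: str
    # rule name -> phrases that trigger it (a real system would ask an LLM
    # whether the comment breaks the rule, rather than match keywords)
    rules: dict = field(default_factory=dict)

    def violates(self, comment: str) -> list:
        """Return the names of every public rule the comment breaks."""
        text = comment.lower()
        return [rule for rule, phrases in self.rules.items()
                if any(p in text for p in phrases)]


# Hypothetical policy for the city sub described above.
policy = SubPolicy(
    name="r/example_city",
    rules={"no-breed-mentions": ["pit bull"]},
)

assert policy.violates("A pit bull attacked someone") == ["no-breed-mentions"]
assert policy.violates("Terrible news about the attack") == []
```

Because the rules are plain data, the same structure could back the democratic part: a proposed change is just a diff against `rules` that members vote on.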
I was on a subreddit for a while that voted on rules and had a rotating dictator to facilitate them. It worked decently well, although it never got to the point where the sub was brigaded. This was also pre-LLM, so moderation was still a big time sink, and the sub eventually fizzled out.
Honestly the most cohesive experience I've found online are forums with strict, sometimes overly strong, moderation where it's allowed to also complain about the mods openly. They can have the power but at least let me bitch when they fuck up.
On Google+, it was possible to individually block specific profiles.
This meant that the blocker wouldn't see the blockee's posts and the blockee wouldn't see the blocker's, which is pretty much expected behaviour.
But on third-party threads, if a blocker/blockee were both commenting, others could see their comments but they'd be mutually invisible. As the platform matured and the number of such blocks increased, this reached a point where that platform behaviour became common enough that it was frequently commented on. If the thread host isn't sufficiently diligent in their own moderation (effectively each post author is moderator of that thread), it's also possible for such discussions to devolve quickly.
I guess Usenet would be another case where individual killfiles were often applied.
This isn't quite the same as your proposal, but it does raise the challenge that if there are multiple moderation regimes in effect, there is no canonical view of a discussion, leading both to potential confusion over what has or hasn't been said and to potential derailment (or similar behaviours) if a sufficiently disruptive participant is not universally blocked. The canonical flamefest, after all, is often just two profiles/participants responding endlessly.
Diaspora* is similar to G+, except that on third-party threads the blocks don't work, so that if A blocks B but C does not block B, then A and B will see one anothers' comments on C's posts / threads. This ... can be frustrating.
Oh, and the post-author-as-moderator model also somewhat resembles what you'd suggested, in that you could choose to participate on a particular profile's posts given that profile's moderation practices. I found that there were several people who did an excellent job of this and who were, in effect, quite effective salon hosts, which was how I saw the G+ moderation model over time. This differs from what you suggest in that every participant on those threads had the same moderation experience, but it was possible to choose moderation practices based on which profiles' threads you chose to participate on. And I'd definitely avoid poorly-moderated hosts.
I think the difference between what I'm suggesting and all of these is that by selecting a mod, you're selecting an auto-updating block list. Behavior would tend toward consistency, as good mods would be popular and there is nothing keeping a bad mod around other than momentum.
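The "pick your mod, get their live blocklist" idea can be sketched in a few lines. Assuming (hypothetically) that each moderator publishes a set of blocked authors, a reader's feed is just the global feed minus whatever their chosen mod currently blocks; swapping mods is swapping one lookup. All names are illustrative.

```python
# Each mod publishes an auto-updating blocklist; readers pick one mod.
moderator_blocklists = {
    "mod_alice": {"spammer42", "troll_99"},
    "mod_bob": {"troll_99"},
}


def filtered_feed(posts, chosen_mod):
    """posts: list of (author, text); drop authors the chosen mod blocks."""
    blocked = moderator_blocklists.get(chosen_mod, set())
    return [(author, text) for author, text in posts if author not in blocked]


feed = [("spammer42", "buy now"), ("carol", "hi"), ("troll_99", "bait")]
assert filtered_feed(feed, "mod_alice") == [("carol", "hi")]
assert filtered_feed(feed, "mod_bob") == [("spammer42", "buy now"), ("carol", "hi")]
```

Because the blocklist lives with the mod rather than the forum, an unpopular mod loses subscribers instead of retaining control of the namespace.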
There have been such blocklists circulating for some time on other platforms, notably Twitter. Those could become problematic where they were adopted without review, and/or those who were listed lashed out all the harder against those they thought had promulgated the lists.
I became aware of this when use of the lists and/or the drama that accompanied them leaked into the Fediverse a few years ago.
The Fediverse also effectively works in ways as a "subscribe to moderation policies" network, in that each individual instance has its own moderation policy and blocklist (individuals and instances), which is probably closer to what you've described than any of the other examples I've noted. This ... has some benefits and frustrations as well, particularly as swapping mods isn't as frictionless as your ideal version would be. There's also the "broken threads" dynamic, similar in ways to that seen on G+, though with the Fediverse (a closer analogue to Twitter) there's no top post, and no original-author-as-moderator dynamic, which means that if a particular thread is interrupted by a blocked profile/instance, the thread as a whole tends to fragment. Devs are aware of this and may be looking at other ways of aggregating threads, e.g., by having multiple "refers-to" type headers (see the Mutt email agent's threading model for more on this).
Sadly, a nice idea that is painfully naive about how computers are used in reality.
One need only remember how easy it was to take over IRC channels with a few hundred bots to see the endgame of this rationale… it cannot be patched out, it’s inherent to the internet.
Whatever would make a vote valid can (and will) be gamed.
It could work depending on how it is set up. Maybe only accounts at least n years old get a single vote, and don't let any random 2-day-old account vote at all.
As long as subforums can be created easily, users may pick their subforum and thus, indirectly, their moderator.
In this setup, having users elect the moderator leads to cases where small groups create their special-interest group and then some trolls challenge the moderator.
There may be some oversight of the large subforums, but not of all of them.
Necessary for this is that subforums can't have unique names. If a bad mod can squat all the words like "computers", "programming", "coding", newcomers aren't going to know the best subforum is called "RealProgNoBadMod"
You see this in city-focused subreddits. But the reality is the name is power. New users type in their city and join the original one. The hostile mods suppress mention of the new one. It never manages to get critical mass.
A democratic election requires that the elected be your employee, where you work with him on a regular basis to direct him in his job. That works (ish) in government where people doing the hiring have heavily invested life interests in it succeeding.
Does a subforum offer the same? Once the mod is elected, are you going to sit down with him each day to make sure he is doing the job to your wishes and expectations? I say (ish) in government because it often doesn't even work there, even where people have heavily invested life interests, with a lot (maybe even the vast majority!) of people never getting involved in democracy. A subforum? Who cares?
If there were to be elections, it is unlikely they could result in anything other than authoritarianism, with the chosen one becoming the ultimate power.
Crucially, SO's election system needs to be bootstrapped: users aren't eligible to vote until they have a history of participation. The level of participation is fairly trivial, but it provides enough signal to allow a reasonable detection (and elimination) of bot / sock puppet networks without resorting to crude measures like blacklists or "bot tests".
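A participation gate like the one described above is simple to express. This is a hedged sketch, not SO's actual criteria: the thresholds and field names here are invented for illustration, and real systems would combine this with other anti-sockpuppet signals.

```python
from dataclasses import dataclass


@dataclass
class Account:
    age_days: int   # account age
    posts: int      # lifetime contributions
    karma: int      # community score

# Illustrative thresholds only; the point is that they're trivial for a
# genuine participant to meet but costly for a bot/sockpuppet network.
def eligible_to_vote(a: Account,
                     min_age_days: int = 30,
                     min_posts: int = 10,
                     min_karma: int = 50) -> bool:
    return (a.age_days >= min_age_days
            and a.posts >= min_posts
            and a.karma >= min_karma)


assert eligible_to_vote(Account(age_days=400, posts=120, karma=900))
assert not eligible_to_vote(Account(age_days=2, posts=1, karma=0))  # fresh sock puppet
```

The low bar matters: a threshold any regular eventually clears keeps elections open while making a 500-account vote farm expensive to age and maintain.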
For new sites, this meant that the bulk of moderation was done by employees, followed by employee-appointed temporary moderators. This dramatically reduced abuse, but also reduced the explosion of new sub-communities that sites like Reddit thrived on.
It was pretty decent in the mid and late 00s. The community started turning toxic in the very early 10s and by about 2015 was quite poisonous. The saddest part is that the problem was known and spoken about frequently, but the response to that from staff and/or high-level mods was to just double down and dig in.
For sure, advanced difficult topics were never really their forte, although it was really common to get great book or blog recommendations via comments. For me, the golden combination was a good book on the language/framework/topic I was studying, supplemented with specific Q&A from Stack Overflow. I have extremely fond memories of learning C++ and Qt that way (although that Qt book was a little rough, but at least there was a Qt book; nowadays every book just seems too outdated to be helpful).
The Internet is way behind on democracy. In general everyone likes democracy until they're in charge; then they realise they're the best person to be in charge, and the idiots who vote don't have a clue and should probably be banned, if not beheaded, for speaking out of turn.
You'd have to weight votes by some kind of participation metric to solve the problem of there being very little authentication of the voters.
I've always thought that on Reddit (or Digg, or Lemmy, or others), common words, brands, names... should be broad "topics" or categories that nobody can claim on a first-come, first-served basis. You should be able to add a sub/community under a topic, but just like everyone else, and then users interested in said topic could add and exclude different subs to taste.
I always thought it would be interesting to separate the post-side and the read-side in such a manner. You'd post to #programming, and then the reader would subscribe to #programming/user_xyz to pick up the moderation feed with xyz as the god-mod. This solves the bootstrapping problem where new subs have nothing to read. Unfortunately it's hard to do persistent standards keeping that way. If xyz has a no-memes policy do you ban all posts from everyone who ever posts one to the global tag, or do you individually inspect every post?
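The write-side/read-side split described above can be sketched as follows. Everyone posts to a global tag; each reader subscribes to that tag *through* a moderator, whose published filter decides what the reader sees. The tag, user names, and the no-memes predicate are all hypothetical, and this takes the "individually inspect every post" answer to the closing question rather than banning authors from the global tag.

```python
# Global write side: anyone can post to any tag.
posts = []  # list of (tag, author, text)


def post(tag, author, text):
    posts.append((tag, author, text))


# Read side: each mod publishes a predicate over posts in a tag.
mod_filters = {
    "user_xyz": lambda author, text: "meme" not in text,  # xyz's no-memes policy
}


def read(tag, mod):
    """View #tag/mod: the global tag as filtered by the chosen moderator."""
    keep = mod_filters[mod]
    return [(a, t) for g, a, t in posts if g == tag and keep(a, t)]


post("programming", "alice", "a post about compilers")
post("programming", "bob", "meme dump")
assert read("programming", "user_xyz") == [("alice", "a post about compilers")]
```

Since a new mod's view draws from the already-populated global tag, #programming/new_mod has content on day one, which is the bootstrapping win; the cost, as noted, is that the mod filters posts rather than setting standards for posters.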
>> I recently activated my account on there and went to the forum for my country. It was already taken over by moderators. Then I looked at the mod and he took all real estate that is already available on Reddit that is related to said country.
Are you sure? My understanding is that accounts were only allowed to create two communities.