I write StayGrounded.online, a newsletter about establishing healthy boundaries with the digital world.
Mastodon: twit.social@JustinH
PixelFed: Pixelfed.social@JustinH
I look at that as proof it wasn’t written by GPT.
Yeah. People should have a right to speak their mind, but on the Fediverse nobody is forced to listen and therein lies the difference, IMO.
The success metric is a vibrant, happy community, not MAUs or engagement numbers, so they make decisions accordingly.
YES, well said. An instance is measured by its quality, not its profitability.
Any civility rule that is enforced with greater priority than (or in the absence of) a “no bigotry” rule serves only to protect bigots from decent people.
There’s a saying I think about a lot that goes “The problem with rules is that good people don’t need 'em, and bad people will find a way around 'em”.
The best thing about human volunteer mods vs automated tools or paid “trust and safety” teams, IMO, is that volunteer humans can better identify when someone is participating in the spirit of a community, because the mods themselves are usually members of the community too.
Yeah, I think it’s important to keep in mind that the Fediverse doesn’t solve any of the problems that come up when a bunch of people talk about stuff they’re passionate about. The problem federation solves is the incentivizing and spotlighting of the sorts of toxic behavior we see on corporate social media.
If a Fediverse instance grew so big that it couldn’t moderate itself and had a lot of spam/Nazis, presumably other instances would just defederate, yeah? Unless an instance is ad-supported, what’s the incentive to grow beyond one’s ability to stay under control?
I fear if these federated systems do grow popular enough
If an instance did grow “too big to moderate”, it would surely be defederated from, yeah? I’m struggling to think of a situation where responsible admins from well-moderated instances would willingly subject their users to spammers from an instance (no matter how big) that can’t control itself.
The key word here is “large”. From the article:
“[Fediverse] instances don’t generally have any unwanted guests because there’s zero incentive to grow beyond an ability to self-moderate. If an instance were to become known for hosting Nazis —either via malice or an incompetent owner— other more responsible instances would simply de-federate (cut themselves off) from the Nazi instance until they got their shit together. Problem solved, no ‘trust and safety’ required.”
Very well said all around (and in many fewer words than it took me); I may actually quote you in the future! Hadn’t seen that 2018(!) Esquire article before today either. Kind of sad “Twitter without Nazis” wasn’t a more compelling selling point. Just speaks to the power of network effects, I suppose.