Maybe this was more of an intro/pitch to something I already support, so I wasn't quite the audience here.
But I feel that talking about the open social web without addressing the reasons current attempts aren't popular or get blocked doesn't lead to much progress. Ultimately, big problems with an open social web include:
- moderation
- spam, which now includes scrapers bringing your site to a crawl
- good faith verification
- posting transparency
These are all hard problems, and they lead me to believe the future of a proper community lies in charging a small premium. Even charging one dollar for life takes out 99% of spam and puts a cost on bad faith actors: should they be banned, they need another dollar to re-enter. That eases moderation needs. But charging money for anything online these days can cause a lot of friction.
In my opinion, both spam and moderation are only really a problem when content is curated (usually algorithmically). I don't need a moderator and don't worry about spam in my RSS reader, for example.
A simple chronological feed of content from feeds I chose to follow is enough. I do have to take on the challenge of finding new content sources, but for me at least that's a worthwhile tradeoff: I'm not inundated with spam, and I don't feel dependent on someone else to moderate what I see.
That just means you're effectively acting as a moderator yourself, only with a whitelist. It's just your own direct curation of sources.
And how did you discover those feeds in the first place? Or find new ones?
I know people have tried to build a relatively closed mesh-of-trust, but you still need people to moderate new applicants, otherwise you'll never get any new ideas or fresh discussion. And if it keeps growing, scale means that group will slowly gather bad actors. Maybe directly, by putting up whatever front they need to get into the mesh. Maybe existing in-mesh accounts get hacked. Maybe previously-'good' account owners have changed, be it in opinion or situation, and take advantage of their in-mesh position. It feels like a speedrun of the growth of the internet itself.
> That just means you're effectively acting as a moderator yourself, only
> with a whitelist. It's just your own direct curation of sources.
That's exactly how a useful social information system works. I choose what I want to follow and see, and there's no gap between what moderation thinks and what I think. Spam gets dealt with the moment I see something spammy (or just about any kind of thing I don't want to see).
This is how Usenet worked: you subscribed to the groups you found interesting and where participants were of sufficient quality. And you further could block individuals whose posts you didn't want to see.
This is how IRC worked: you joined channels that you deemed worth joining. And you could further ignore individuals that you didn't like.
That is how the whole original internet actually worked: you were reading pages and using services that you felt were worth your time.
Ultimately, that's how human relationships work. You hang out with friends you like and who are worth your time, and you ignore people who you don't want to spend your time with, especially assholes.
> That just means you're effectively acting as a moderator yourself, only with a whitelist
Agreed, though when you are your own moderator that really is more about informed consent or free will than moderation. Moderation, at least in my opinion, implies a third party.
> And how did you discover those feeds in the first place? Or find new ones?
The same way I make new friends. Recommendations from those I already trust, or "friend of a friend" type situations. I don't need an outside matchmaker to introduce me to people they think I would be friends with.
> you're effectively acting as a moderator yourself
Honestly, that's how things should work. People should simply avoid, block and hide the things they don't like.
ActivityPub allows one to follow hashtags in addition to accounts. Pick some hashtags of interest, find some people in those posts to follow. Lather, rinse, repeat.
I think it's the act of creating an access point that allows posting that gets you spam, not necessarily whether it's curated. Your email isn't a curated feed, but it will get tons of spam because people can "post" to it once they get your address. Same with your cell phone number and your physical mailbox.
Since a community requires posting and an access point, spam is pretty much inevitable.
Yeah, I'd agree with that. In addition to being a list of content I subscribed to, an RSS feed benefits from being pull-based. Email is push-based, which breaks the self-moderation model.
A simple chronological feed of content is not social media though. That's just reading authors who you like.
Yeah, that's what social media was 10 years ago. It was better, more like a big sprawling group chat than a stream of engagement bait.
I think you are restricting social media by defining it as what it became (at the time, driven by "eyeball" metrics), instead of defining it by what it could or should be.
Well that depends on how we define social media. Facebook started out as a chronological feed, did it only become social media once it began algorithmically curating users' feeds?
I think it became social media when it enabled two-way/multi-way messaging, if that wasn't there from the start. If it was originally just a feed of posts, yeah it wasn't really social media, it was just another form of blogging.
IIRC twitter was originally called a "micro-blogging" platform, and "re-tweeting" and replying to tweets came later. At that point it became social media.
blogs often have a place for comments. twitter was a microblog that elevated comment replies to "first class tweet status" as a continuation of the microblog idea
Having worked on the problem for years, I find decentralized social networking such a tar pit of privacy, security, and social problems that I can't get excited about it anymore. We are now clear on what the problems with mainstream social networking at scale are, and decentralization only seems to make them worse and more intractable.
I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.
It's unfortunate, and I don't necessarily want to say decentralization isn't viable at all. But at best, I see decentralization addressing the issue of scraping. It solves different problems without necessarily addressing the core ones needed to make a new community functional. But I think both kinds of tech can execute on addressing these issues.
I'm not against subscriptions per se, but I do think a one-time entry cost is really all that's needed to achieve many of the desired effects. I'm probably in the minority as someone who'd rather pay $10 once to enter a community than $1-2/month to maintain my participation, though. I'm just personally tired of feeling like I'm paying a tax to construct something that may one day be good, rather than buying into a decently polished product upfront.
For the record, people working on decentralization should not stop working on it. For myself, I have moved on to other approaches with different goals, but it's a worthwhile endeavor and if anyone ever cracks it, it'll change the damn world. And the people working on it understand exactly how difficult it is, so nothing I say is news to them.
But everyone should be clear-eyed about it. It's not a panacea, it's complicated on much more than a technical level and it's already incredibly complicated on a technical level.
And even if it works, there will still be carry-over of many of the problems we've seen with centralized social networks.
How do you decentralize a network that relies on dictionary semantics, the chaos of arbitrary imagery, and the basics of grammatically sequenced signals?
It's oxymoronic. Our communication was developed in highly developed hierarchies for a reason: continual deception, deviance, anarchism, perversion, and subversion always operate in conflict with, and contrary to, hierarchies.
Language is not self-organizing; signaling is not self-learning or self-regulating. The web opened the already existing Pandora's box of Shannon's admittedly non-psychologically-relevant information theory and went bust at scale.
Yeah, kind of agree. Decentralised protocols are forced to expose a lot of data which can normally be kept private, like users' own likes.
Dunno necessarily if they are _forced_ to expose that data.
Something like OAuth means that you can give different levels of private data to different actors, based on what perms they request.
Then you just have whoever is holding your data anyway (it's gotta live somewhere) also handle the OAuth keys. That's how the Bluesky PDS system works, basically.
Now, there is an issue with blanket requesting/granting of perms (which an end user isn't necessarily going to know about), but IMO all that's missing from the Bluesky-style system is to have a way to reject individual OAuth grants (for example, making it so Bluesky doesn't have access to reading my likes, but it does have access to writing to my likes).
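To make the granular-grant idea concrete, here's a minimal sketch of per-scope pruning. Everything here is invented for illustration (the Grant shape and scope names like "likes:read" are not real atproto/Bluesky scopes); it just shows a user approving an app while striking out one permission.

```python
# Hypothetical per-scope OAuth grant pruning; scope names are invented.
from dataclasses import dataclass, field

@dataclass
class Grant:
    client_id: str                                   # the app requesting access
    requested: set[str]                              # scopes the app asked for
    rejected: set[str] = field(default_factory=set)  # scopes the user struck out

    @property
    def effective(self) -> set[str]:
        return self.requested - self.rejected

def authorize(grant: Grant, scope: str) -> bool:
    """The data host checks each request against the pruned grant."""
    return scope in grant.effective

# The user approves the app, but strips its ability to read likes:
grant = Grant("some-client", {"likes:read", "likes:write", "posts:write"})
grant.rejected.add("likes:read")

assert authorize(grant, "likes:write")      # can still write likes
assert not authorize(grant, "likes:read")   # ...but cannot read them back
```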
In a federated system, the best you can do is a soft delete request, and ignoring that request is easier than satisfying it.
If I have 100 followers on 100 different nodes, that means each node has access to (and holds on to) some portion of my data by way of those followers.
In a centralized system, a user having total control over their data (and the ability to delete it) is more feasible. I'm not saying modern systems are great about this (GDPR was necessary to force their hands), but federation makes it more technically difficult.
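As a concrete illustration of why it's only a request: a federated delete is just another message. In ActivityPub terms it's roughly a Delete activity leaving a Tombstone behind (the actor and IDs below are made up); every remote node that cached the post has to choose to honor it.

```python
# Roughly the shape of an ActivityPub Delete with a Tombstone; all IDs
# are invented. Nothing in the protocol forces a remote node to comply.
delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Delete",
    "actor": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Tombstone",
        "id": "https://example.social/users/alice/statuses/1",
    },
}
```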
If I have to pay you to access a service, and I'm not doing so through one of a small number of anonymity-preserving cryptocurrencies such as Bitcoin or Monero, then the legitimate financial system has an ultimate veto on what I can say online.
It does if you don't pay to access the service as well, because the financial system is the underpinning of their ad network.
Even in a federated system, you can be blacklisted, although it does take more coordination and work.
i2p and writing to the blockchain are an attempt to deal with that through permanence, but those are not without their own (serious) problems.
>I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.
"Startup engineer" believes the solution to decentralization is a startup, what a shock. We look forward to your launch.
I'm a consultant that builds for startups. I'm not an entrepreneur myself.
If I were to build something like this, I'd use a services non-profit model.
Ad-supported apps result in way too many perverse economic incentives in social media, as we've seen time and time again.
I worked on open source decentralized social networking for 12 years, starting before Facebook even launched. Decentralization, specifically political decentralization which is what federation is, makes the problems of moderation, third order social effects, privacy and spam exceedingly more difficult.
>Decentralization, specifically political decentralization which is what federation is, makes the problems of moderation, third order social effects, privacy and spam exceedingly more difficult.
I disagree that federation is "specifically political decentralization", but how so?
You claim that decentralization makes all of the problems of mainstream social networking worse and more intractable, but I think most of those problems come from the centralized nature of mainstream social media.
There is only one Facebook, and only one Twitter, and if you don't like the way Zuckerberg and Musk run things, too bad. If you don't like the way moderation works with an instance, you don't have to federate with it, you can create your own instance and moderate however you see fit.
This seems like a better solution than everyone being subject to the whims of a centralized service.
To clarify, I don't mean big-P Politics, I mean political in the sense that each node is owned and operated separately, which means there are competing interests and a need to coordinate between them that extends beyond the technical. Extrapolated to N potential nodes, that creates a lot of conflicting incentives and perspectives that have to be managed. And if the network ever becomes concentrated in a handful of nodes, or even a single one, which is not unlikely, then we're effectively back at square one.
> if you don't like the way Zuckerberg and Musk run things, too bad
It's important to note we're optimizing for different things. When I say third-order social effects, it means the way that engagement algorithms and virality combine with massive scale to create a broadly negative effect on society. This comes in the form of addiction, how constant upward social comparison can lead to depression and burnout, or how in extreme situations, society's worst tendencies can be amplified into terrible results with Myanmar being the worst case scenario.
You assume centralization means total monopolization, which neither Twitter nor Facebook nor Reddit nor anyone else has been able to achieve. You may lose access to a specific audience, but nobody has a right to an audience. You can always put up a website, blog, write for an op-ed position at your local newspaper, hold a sign in a public square, etc. The mere existence of a centralized system with moderation is not a threat to freedom of speech.
Federation is a little bit more resilient, but accounts can be blacklisted, and whole nodes can be blacklisted because of the behavior of a handful of accounts. And unfortunately, that little bit of resilience amplifies the problem of spam and bots, which for the average user is a much bigger concern than losing their account. Not to mention privacy concerns, where it's self-evident why an open system is more difficult than a closed one.
I'll concede that "worse" was poor wording, but intractable certainly wasn't. These problems become much more difficult to solve in a federated system.
However, most advocates of federation aren't interested in solving the same problems as I am, so that's where the dissonance comes from.
> Ultimately, big problems with an open social web include:
These two seem like the same problem:
> moderation
> spam
You need some way of distinguishing high quality from low quality posts. But we kind of already have that. Make likes public (what else are they even for?). Then show people posts from the people they follow or that the people they follow liked. Have a dislike button so that if you follow someone but always dislike the things they like, your client learns you don't want to see the things they like.
Now you don't see trash unless you follow people who like trash, and then whose fault is that?
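As a sketch of that rule (all data structures here are invented): a post is visible only if its author is someone you follow or a followee liked it, and each dislike discounts the offending followee's future likes.

```python
# Toy follow-graph feed: visibility and ranking come only from follows,
# followee likes, and per-liker disagreement learned from dislikes.
from collections import defaultdict

follows = {"me": {"alice", "bob"}}
authors = {"p1": "alice", "p2": "carol"}   # post_id -> author
likes = defaultdict(set, {"p2": {"bob"}})  # post_id -> users who liked it
disagreement = defaultdict(int)            # (viewer, liker) -> dislike count

def score(viewer: str, post: str) -> float:
    s = 1.0 if authors.get(post) in follows[viewer] else 0.0
    for liker in likes[post] & follows[viewer]:
        s += 1.0 / (1 + disagreement[(viewer, liker)])  # decayed endorsement
    return s

def dislike(viewer: str, post: str) -> None:
    for liker in likes[post] & follows[viewer]:
        disagreement[(viewer, liker)] += 1  # learn you don't share their taste

def feed(viewer: str, posts: list[str]) -> list[str]:
    visible = [p for p in posts if score(viewer, p) > 0]  # anon spam never surfaces
    return sorted(visible, key=lambda p: score(viewer, p), reverse=True)

print(feed("me", ["p1", "p2"]))  # p1 via direct follow, p2 via bob's like
```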
> which now includes scrapers bringing your site to a crawl
This is a completely independent problem from spam. It's also something decentralized networks are actually good at. If more devices are requesting some data then there are more sources of it. Let the bots get the data from each other. Track share ratios so high traffic nodes with bad ratios get banned for leeching and it's cheaper for them to get a cloud node somewhere with cheap bandwidth and actually upload than to buy residential proxies to fight bans.
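A sketch of that ratio rule, under made-up thresholds: small nodes are left alone, and big downloaders must re-serve a minimum fraction of what they take or get cut off.

```python
# Toy share-ratio check; the 10 GiB floor and 0.2 ratio are placeholders.
def should_ban(uploaded: int, downloaded: int,
               min_traffic: int = 10 * 2**30, min_ratio: float = 0.2) -> bool:
    if downloaded < min_traffic:
        return False  # ignore small or new nodes
    return uploaded / downloaded < min_ratio

assert not should_ban(uploaded=0, downloaded=2**30)        # small node: fine
assert should_ban(uploaded=2**30, downloaded=100 * 2**30)  # heavy leech: banned
```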
> good faith verification
> posting transparency
It's not clear what these are, but they sound like kind of the same thing again, and in particular they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see instead of a bunch of spam from anons that nobody they follow likes.
> [...] show people posts from the people they follow or that the people they follow liked.
Yes, this is a good system. It'll work particularly well at filtering spam because people largely agree what it is. One thing that will happen with your system is people will separate into cliques. But that's not the end of the world. Has anyone implemented Anthony's idea of using followees' likes to rank posts?
>You need some way of distinguishing high quality from low quality posts.
Yes. But I see curation more as a second-order problem to solve once the bases are taken care of. Moderation focuses on addressing the low quality, while curation makes sure the high quality posts receive focus.
The tools needed for curation (filtering, finding similar posts/comments, popularity, following) are different from those needed to moderate or self-moderate (ignoring, downvoting, reporting). The latter poisons a site before it can really start to curate to its users.
>This is a completely independent problem from spam.
Yeah, thinking more about it, it probably is a distinct category. It simply has a similar result of making a site unable to function.
>It's not clear what these are, but they sound like kind of the same thing again
I can clarify. In short, posting transparency focuses more on the user, and good faith verification focuses more on the content. (I'm also horrible with naming, so I welcome better terms to describe these.)
- Posting transparency at this point has one big goal: ensure you know when a human or a bot is posting. But it extends to ensuring there's no impersonation, no abuse of alt accounts, and no vote manipulation.
It can even extend in some domains to making sure, e.g., that a person who says they worked at Google actually worked at Google. But this is definitely a step that can overstep privacy.
- Good faith verification refers more to a duty to properly vet and fact-check information that is posted. It may include addressing misinformation and hate, or removing non-transparent intimate advice like legal/medical claims without sources or proper licensing. It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.
>they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see
Yes, they are. I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep them up are equally dangerous in an ad-driven platform. Being able to address that naturally requires some more authoritarian approaches.
That's why "good faith" is an important factor here. Any authoritarian act you introduce can only work on trust, and is easily broken by abuse. If we want incentives to change from "maximizing engagement" to "maximizing quality and community", we need to cull out malicious information.
We already accept some authoritarianism by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.
> Moderation focuses on addressing the low quality, while curation makes sure the high quality posts receive focus.
This is effectively the same problem. The feed has a billion posts in it so if you're choosing from even the top half in terms of quality, the bottom decile is nowhere to be seen.
> The latter poisons a site before it can really start to curate to its users.
That's assuming you start off with a fire hose. Suppose you only see someone's posts in your feed if a) you visit their profile or b) someone you follow posted or liked it.
> ensure you know when a human or a bot is posting.
This is not possible and you should not attempt to do things that are known not to be possible.
It doesn't matter what kind of verification you do. Humans can verify an account and then hand it to a bot to post things. Also, alts are good; people should be able to have an account for posting about computers and a different account for posting about cooking or travel or politics.
What you're looking for is a way to rate limit account creation. But on day one you don't need that because your biggest problem is getting more users and by the time it's a problem you have a network effect and can just make them pay a pittance worth of cryptocurrency as a one-time fee if it's still a thing you want to do.
> It can even extend in some domains to making sure, e.g., that a person who says they worked at Google actually worked at Google.
This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does it. If the user wants to know if someone really works for Google they contact the company and ask them, and if the company says no then you tell everybody that and anyone who doesn't believe you can contact the company themselves.
> It may include addressing misinformation and hate, or removing non-transparent intimate advice like legal/medical claims without sources or proper licensing.
If someone does something illegal then you have the government arrest them. If it isn't illegal then it isn't to be censored. There is nothing for a social media thing to be involved in here and the previous attempts to do it were in error.
> It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.
To the extent that social media does such a thing, it does it exactly as above, i.e. as Reddit communities investigate things. If you want a professional organization dedicated to such things as an occupation, the thing you're trying to do is called investigative reporting, not social media.
> I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep them up are equally dangerous in an ad-driven platform.
No, they're much worse in an ad-driven platform, because then you're trying to maximize the amount of time people spend on the site and showing people rage bait and provocative trolling is an effective way to do that.
What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.
> We already accept some authoritarianism by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.
We should take away their ability to actually remove anything, censorship can GTFO, and instead give people a feed that they actually control and can therefore configure to not show that stuff, because it is in reality not what they want to see.
Maybe when you get to the scale of Reddit it becomes the same problem. But a fledgling community is more likely to be dealing with dozens of real posts and hundreds of spam posts. Even then, the solutions differ between the two problem spaces, so I'm not so certain.
You can't easily automate a search for "quality", so most popular platforms use a mix of engagement and similarity to create a faux quality rating. Spam filtering and removal can be fairly automatic and accurate, as long as there are ways to appeal false positives (though these days, they may not even care about that).
>This [ensuring you know when a human or a bot is posting] is not possible and you should not attempt to do things that are known not to be possible.
Like all engineering, I'm not expecting perfection. I'm expecting a good effort at it. Is there anything stopping me from hooking an LLM up to my HN account and having it reply to all my comments? No. But I'm sure that if I took a naive approach to it, moderation would take note and take action on this account.
My proposal is twofold:
1. Have dedicated account types for authorized bots, to identify tools and other supportive functions that a community may want performed. They can even have different privileges, like being unable to be voted on (or to vote). A rough sketch follows below.
2. Take action on very blatant attempts to bot a human account (the threshold being even more blatant than my example above). If account creation isn't free nor easy, a simple suspension or ban can be all that's needed to curb such behavior.
There will still be abuse, but the kinds of abuse that have caused major controversies over the years are not exactly subtle masterminds. There was simply no incentive to take action once people reported them.
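For proposal (1), here's a sketch of what those account types could look like; the names and privilege flags are invented for illustration, not taken from any existing platform.

```python
# Hypothetical account types with distinct privileges; labeled bots are
# excluded from the voting/reputation system entirely.
from dataclasses import dataclass
from enum import Enum, auto

class AccountType(Enum):
    HUMAN = auto()
    AUTHORIZED_BOT = auto()  # disclosed automation: mod tools, digests, etc.

@dataclass(frozen=True)
class Privileges:
    can_vote: bool
    can_be_voted_on: bool

PRIVILEGES = {
    AccountType.HUMAN: Privileges(can_vote=True, can_be_voted_on=True),
    AccountType.AUTHORIZED_BOT: Privileges(can_vote=False, can_be_voted_on=False),
}

def may_vote(kind: AccountType) -> bool:
    return PRIVILEGES[kind].can_vote

assert may_vote(AccountType.HUMAN) and not may_vote(AccountType.AUTHORIZED_BOT)
```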
>This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does
Probably not. That kind of verification is more domain-specific, and that's an extreme example. Something trying to be Blind, focusing on industry professionals, might want to do verification, but probably not some casual tech forum.
It was ultimately an example of what transparency suggests here and how it differs from verification. This is another "good enough" example where I'm not expecting every post to be fact-checked. We simply shouldn't allow blatantly false users or content to go about unmoderated.
>What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.
Yes. This is why I don't expect such a solution to come from corporations. Outside of the brief flirtation with Meta, it's not like any of the biggest players in the game have shown much interest in the topics discussed here or in the article.
But the tools and people needed for such an initiative don't require millions in startup funding. I'm not even certain such a community can be scalable, financially speaking. But communities aren't necessarily formed, run, and maintained for purely financial reasons. Sometimes you just want to open your own bar and enjoy the people who come in, caring only about enough funds to keep the business running, not attempting to franchise it across the country.
>We should take away their ability to actually remove anything, censorship can GTFO, and instead give people a feed that they actually control and can therefore configure to not show that stuff, because it is in reality not what they want to see.
If you want a platform that doesn't remove anything except the outright illegal, I don't think we can really beat 4chan. Nor is anyone trying to beat 4chan (maybe Voat still is, but I haven't looked there in years). I think it has that sector of community on lock.
But that aside: any modern community needs to be very opinionated upfront about what it allows and doesn't, in my eyes. Do you want to allow adult content and accept that over half your community's content will be porn? Do you want to take a hard line between adult and non-adult sub-communities? Do you want to minimize flame wars, or not tend to comments at all (so long as they aren't breaking the site)? Should sub-communities even be a thing, or should all topics of all styles be thrown into a central feed where users opt in/out of certain tags? Is it fine for comments to mix in non-sequiturs in certain topics (e.g. politics in an otherwise non-political post)? These all need to be addressed, not necessarily on day one, but well before critical mass is achieved. See OnlyFans as a modern example of that result.
It's not about capital-C "Censorship" when it comes to being opinionated. It's about establishing norms upfront and fostering a community around those opinions. Those opinions should be shown upfront, before a user makes an account, so that they know what to expect, or whether they shouldn't bother with this community.
A lot of tech folks hate government ID schemes, but I think mDLs (mobile driver's licenses) with some sort of pairwise pseudonyms could help with spam and verification.
It would let you identify users uniquely, but without revealing too much sensitive information. It would let you verify things like "This user has a Michigan driver's license, and they have an ID 1234, which is unique to my system and not linkable to any other place they use that ID."
If you ban that user, they wouldn't be able to use that ID again with you.
The alternative is that we continue to let unelected private operators like Cloudflare "solve" this problem.
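Here's a sketch of the pairwise-pseudonym property. Real mDL/eIDAS schemes use credential-bound cryptography rather than a bare HMAC; this only demonstrates the linkability behavior described above.

```python
# One user-held secret yields a stable ID per relying party, but IDs are
# unlinkable across parties, so a ban sticks without cross-site tracking.
import hashlib
import hmac

def pairwise_id(wallet_secret: bytes, relying_party: str) -> str:
    return hmac.new(wallet_secret, relying_party.encode(), hashlib.sha256).hexdigest()

secret = b"per-user key held by the ID wallet"
assert pairwise_id(secret, "forum.example") == pairwise_id(secret, "forum.example")
assert pairwise_id(secret, "forum.example") != pairwise_id(secret, "ads.example")
```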
Telegram added a feature where, if someone cold-DMs you, it shows their phone number's country and account age. When I see a 2-month-old account with a Nigerian phone number, I know it's a bot and I can ignore it.
The EU’s eIDAS 2.0 specification for their digital wallet identity explicitly supports the use of pseudonyms for this exact purpose of “Anonymous authentication”.
Those are important reasons, but there are other reasons as well, such as concentration of market power in a few companies, which allows those companies to erect barriers to entry and shape law in ways that benefit themselves, as well as simply creating network effects that make it hard for new social-web projects to establish a foothold.
That's an even harder problem to solve. I do agree we should make sure that policy isn't manipulated by vested powers into making competition even harder.
But network effects seem to be a natural phenomenon of people wanting to establish a familiar routine. I look at Steam as an example here: while it has its own shady schemes behind the scenes (which I hope are addressed), it otherwise doesn't engage in the same dark patterns as other monopolies. But it creates a strong network effect nonetheless.
I think the main solace here is that you don't need to be dominant to create a good community. You need to focus instead on getting above a certain critical mass, where you keep a healthy stream of posting and participation that can sustain itself. Social media should ultimately be about establishing a space for a community to flourish, and small communities are just as valid.
It is interesting how it became a norm to just blindly assume the more decentralized something is the better it is. There isn’t any evidence this is true. Reality isn’t so reducible.
It's pure waste-generation, but hashcash is a fairly old strategy for this, and it's one of the foundations of Bitcoin. There's no "proof of payment to any beneficial recipient", sadly, but it does throttle high-volume spammers pretty effectively.
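For reference, the whole mechanism fits in a few lines. Here's a minimal hashcash-style sketch where the poster burns CPU finding a nonce and the server verifies with a single hash; the challenge string and difficulty are arbitrary.

```python
# Minimal hashcash-style proof of work: find a nonce whose SHA-256 falls
# below a target with ~`bits` leading zero bits; verification is one hash.
import hashlib
from itertools import count

def mint(challenge: str, bits: int = 20) -> int:
    target = 1 << (256 - bits)
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int, bits: int = 20) -> bool:
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

nonce = mint("post:abc123", bits=12)  # low difficulty so the demo is instant
assert verify("post:abc123", nonce, bits=12)
```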
Imagine a world where every City Hall has a vending machine you can use to donate a couple bucks to a charity of your choice, and receive an anonymous, one-time-use "some real human physically present donated real money to make this" token.
You could then spend the token with a forum to gain basic trust for an otherwise anonymous account.
"The 19th reports at the intersection of gender, politics, and policy - a much-needed inclusive newsroom..." This isn't a problem with the distribution technology. This is a problem with the message, and its narrow niche.
The site's marketing is geared towards collecting donations in the US$20,000 and up range.
That doesn't scale. They don't have viewer counts big enough to make it on payments in the $10/year range. So that doesn't scale either.
The back-end technology of this thing has zero bearing on those problems.
To check out other FediForum keynotes, many demos showing off innovative open social web software, and notes from the FediForum unconference sessions, go to https://fediforum.org (disclaimer: FediForum co-organizer here)
I believe that the more populist layer of the www became social media apps. Hosted LLMs (Claude, ChatGPT, etc.) are going to become the popular source of information, and therefore narrative. What we must remember is that we should retain control of our thoughts, and be aware of how we can share them without financially interested parties claiming rights to their use or abuse. I am trying to solve some of these problems with NoteSub App - https://apps.apple.com/gb/app/notesub/id6742334239 - but have yet to overcome the real issue of how we can stop the middleman keeping the loop closed with him in between.
I've never really got social media in any of its forms. I use messaging apps to stay in contact with people I like, but that's about it.
I skimmed this article, and I still don't get it. I think group chats cover most of what the author is talking about, both public and private ones. But this might be my lack of imagination. I feel the article, and by extension the talk, could have been a lot shorter.
But you're posting here, on social media, no? So you sought out something here that a group chat wouldn't give.
Most of the article is focused on making sure any social media (be it chats, a public forum, or email) isn't hijacked by vested powers who want to spread propaganda or drown the user in ads. One approach to that, focused on in this article, is decentralization, which gives a user the ability to take their ball and go home.
Of course, it's futile if the user doesn't care about wielding that power.
> But you're posting here, on social media, no? So you sought out something here that a group chat wouldn't give.
This is true, of course. I'm here interacting with strangers. But, for me, HN is about discovery not community like what the article talks about. I'd be just as content not posting if the ability wasn't there. I just don't agree that social media is that important.
I personally think what the article talks about is already available in the form of group chats on platforms like Signal. My impression from the article is that the author is extremely politically motivated and seems to believe social media is somehow a good thing, as long as the people they don't like can't control it, and likely can't use it? That last point might not be true.
Group chats are where real people socialise with their actual friends now. Social media is where people consume infinite slop feeds for entertainment. The days of people posting their weekend on Facebook are long gone.
> By open do you mean not centralised? I don't get the significance of big S social media. Functionally how would big S improve on group chats?
Social media has two functions: chat (within groups/topics/...) and discovery (of groups/topics/...). So unless we rely only on IRL discovery, we need a way to do discovery online.
Discovery is probably the main problem social media creates. Almost all of these problems solve themselves when you remove discovery. If someone in your friends group chat is spamming porn you just remove them. There's no need for the platform to intervene here, small groups of people can moderate their own friend groups.
Once you start algorithmically shoving content to people you have to start worrying about spam, trolling, politics, copyright, and all kinds of issues. The best discovery system is friends sharing chat invite links to other friends who are interested.
Social media is simply an extension from cybernetics to the principles of cog-sci as a "protocol" network where status and control are the primary forces mediated. This is irrefutable - the web was built as an extension of the cog-sci parameters of information as control.
Social media can't be saved, it can only be revolutionary as a development arena for a new form of language.
"The subject of integration was socialization; the subject of coordination was communication. Both were part of the theme of control...Cybernetics dispensed with the need for biological organisms, it as the parent to cognitive science, where the social is theorized strictly in terms of the exchange of information. Receivers, senses of signs need to be known in terms of channels, capacities, error rates, frequencies and so forth." Haraway Primate Visions.
I don't understand how technologists and coders can be this naive about the ramifications of electronically externalizing signals which start as arbitrary in person, and then clearly spiral out of control once accelerated and cut off from the initial conditions.
> What specific pain point are you solving that keeps people on WhatsApp despite the surveillance risk, or on X despite the white supremacy?
Why wouldn't a genuinely open social web allow people to communicate content that Ben Werdmuller thinks constitutes white supremacy, just as one can on X? Ideas and opinions that Ben Werdmuller (and people with similar activist politics to him) think constitute white supremacy are very popular among huge segments of the English-speaking public, and if it's even possible for some moderator with politics like Werdmuller to prevent these messages from being promulgated (as was the case at Twitter until Musk bought it in 2022 and fired all the Trust and Safety people with politics similar to Werdmuller's), then it is not meaningfully open. If this is not possible, then would people with Werdmuller's politics still want to use an open social web, rather than a closed social web that lets moderators attempt to suppress content they deem white supremacist?
> As I was writing this talk, an entire apartment building in Chicago was raided. Adults were separated into trucks based on race, regardless of their citizenship status. Children were zip tied to each other.
> And we are at the foothills of this. Every week, it ratchets up. Every week, there’s something new. Every week, there’s a new restrictive social media policy or a news outlet disappears, removing our ability to accurately learn about what’s happening around us.
The reaction to the raid of that apartment building in Chicago on many social media platforms was the specific meme-phrase "this is what I voted for", and indeed Donald Trump openly ran on doing this, and won the US presidential election. What prevents someone from using open social media tech to call for going harder on deportations, or to spread news stories about violent crimes and fraud committed by immigrants? If anything can prevent this, how can the platform be said to be actually open?
---
We all know about Twitter acquirer Elon Musk, who bent the platform to fit his political worldview. But he’s not alone.
Here’s Microsoft CEO Satya Nadella, owner of LinkedIn, who contributed a million dollars to Trump’s inauguration fund.
Here’s Mark Zuckerberg, who owns Threads, Facebook, Instagram, and WhatsApp, who said that he feels optimistic about the new administration’s agenda.
And here’s Larry Ellison, who will control TikTok in the US, who was a major investor in Trump, and who one advisor called, in a WIRED interview, the shadow President of the United States.
Social media is very quickly becoming aligned with a state that in itself is becoming increasingly authoritarian.
---
This was the real why. When control amasses to the few we end up in a place where there is a dissonance between what we perceive to be true and what is actually true. The voice of the dictator will say one thing but the people's lived experience will say something else. I don't think mastodon or Bluesky or even Jack Dorsey's new project Bitchat solves any of this. It goes much deeper. It is ideological. It is values driven. The outcome is ultimately decided by the motives of the people who start it or run it. I just don't think any western driven values can be the basis of a new platform because a large majority of the world are not from the west. For better or worse, you have the platforms of the west. They are US centric and they will dominate. Anything grassroots and fundamentally opposed to that will not come from the west. It must come authentically from those who need it.
While I tend to support there being open social alternatives, I haven’t really seen the people behind them talk about the most important aspect: how will you attract and retain users? There has to be more to the value proposition than “it’s open”. The vast majority of users simply do not care about this. They want to be where their friends, family, and favorite content creators are. They want innovation in both content and format. Until the well intentioned people behind these various open web platforms and non-platforms internalize and act on these realities, the whole enterprise is doomed to be a niche movement that will eventually go out with a whimper.
The name killed it. If you know what it means, it doesn't bear any relevance to social media. If you don't know what it means, it sounds like a gastric disorder.
Why was this chosen to be a keynote? This talk seems to not care about open social media, but rather that existing social media sites don't follow the author's political agenda. Having a keynote trying to rally people into building sites that support a niche political agenda that the general public doesn't agree with doesn't accomplish the goals of making open social media more viable. This along with equating things with "Nazis" just further alienates people.
I read this comment, went back to the article, and then came back to this comment. I have no idea what niche political agenda you're talking about- the message of the article is basically "solve problems your users are actually facing, not problems you think they have".
You can apply the concepts the author talks about to _literally_ any group that would make use of social media.
>solve problems your users are actually facing, not problems you think they have
>You can apply the concepts the author talks about to _literally_ any group
The presentation could have been modified to avoid alienating people if the author had focused on championing how open social media allows for the ability to solve these problems.
>I have no idea what niche political agenda you're talking about
Search the page for "Why should anyone care?", and you'll see it. In that section of the talk, he complains that the political situation in America doesn't match his views. Then in the next section, "The capitulation of social media", he complains about how other social media sites don't match his politics. Then in the next section, "The decline of journalism", the speaker argues his political opinion that journalists are a good thing. Then in the next section, "The problem is global", he explains that more places than just America don't share his political view.
I'll stop here, but it goes on further, even to the very last sentence. I thought this was supposed to be a technology keynote, but the speaker turned it into a place to complain about the political situation of the world.
Social media relies on our dead, arbitrary signaling system, language, which once accelerated becomes a cybernetic/cog-sci control network, no matter how it's operated. Language is about control, status, and bias before it's an attempt to communicate information. It's doomed as an external system of arbitrary symbols.
Maybe this was more of an intro/pitch to something I already support, so I wasn't quite the audience here.
But I feel that talking about the open social web without addressing the reasons current ones aren't popular/get blocked doesn't lead to much progress. Ultimately, big problems with an open social web include:
- moderation
- spam, which now includes scrapers bringing your site to a crawl
- good faith verification
- posting transparency
These are all hard problems and it seems to make me believe the future of a proper community lies more in charging a small premium. Even charging one dollar for life takes out 99% of spam and gives a cost to bad faith actors should they be banned and need another dollar to re-enter. Thus, easing moderation needs. But charging money for anything online these days can cause a lot of friction.
In my opinion, both spam and moderation are only really a problem when content is curated (usually algorithmically). I don't need a moderator and don't worry about spam in my RSS reader, for example.
A simple chronological feed of content from feeds I chose to follow is enough. I do have to take on the challenge of finding new content sources, but at least fore that's a worthwhile tradeoff to not be inundated with spam and to not feel dependent on someone else to moderate what I see.
That's just means you're effectively acting as a moderator yourself, only with a whitelist. It's just your own direct curation of sources.
And how did you discover those feeds in the first place? Or find new ones?
I know people have tried to have a relatively closed mesh-of-trust, but you still need people to moderate new applicants, otherwise you'll never get any new idea of fresh discussion. And if it keeps growing, scale means that group will slowly gather bad actors. Maybe directly by putting up whatever front they need to get into the mesh or existing in-mesh accounts. Maybe existing accounts get hacked. Maybe previously-'good' account-owning people have changed, be it in opinion or situation, to take advantage of their in-mesh position. It feels like a speedrun of the internet itself growing.
> That's just means you're effectively acting as a moderator yourself, only > with a whitelist. It's just your own direct curation of sources.
That's exactly how a useful social information system works. I choose what I want to follow and see, and there's no gap between what moderation thinks and what I think. Spam gets dealt with the moment I see something spammy (or just about any kind of thing I don't want to see).
This is how Usenet worked: you subscribed to the groups you found interesting and where participants were of sufficient quality. And you further could block individuals whose posts you didn't want to see.
This is how IRC worked: you joined channels that you deemed worth joining. And you could further ignore individuals that you didn't like.
That is how the whole original internet actually worked: you were reading pages and using services that you felt were worth your time.
Ultimately, that's how human relationships work. You hang out with friends you like and who are worth your time, and you ignore people who you don't want to spend your time with, especially assholes.
> That's just means you're effectively acting as a moderator yourself, only with a whitelist
Agreed, though when you are your own moderator that really is more about informed consent or free will than moderation. Moderation, at least in my opinion, implies a third party.
> And how did you discover those feeds in the first place? Or find new ones?
The same way I make new friends. Recommendations from those I already trust, or "friend of a friend" type situations. I don't need an outside matchmaker to introduce me to people they think I would be friends with.
> you're effectively acting as a moderator yourself
Honestly, that's how things should work. People should simply avoid, block and hide the things they don't like.
ActivityPub allows one to follow hashtags in addition to accounts. Pick some hashtags of interest, find some people in those posts to follow. Lather, rinse, repeat.
I think it's the act of creating an access point that allows posting when you get spam, not necessarily if it's curated. Your email isn't a curated feed but it will get tons of spam because people can "post" to it once they get your address. Sane with your cell phone number and your physical mailbox.
Since a community requires posting and an access point, spam is pretty much inevitable.
Yeah I'd agree with that. In addition to being a list of content I subscribed to, an RSS feed benefits from being pull based. Email is push based, that breaks the self-moderation model
A simple chronological feed of content is not social media though. That's just reading authors who you like.
Yeah that’s what social media was 10 years ago. It was better, more like a big sprawling group chat than a stream of engagement bait.
I think you are restricting social media by defining as what it became (at the time driven by "eyeball" metrics), instead of defining it by what it could or should be.
Well that depends on how we define social media. Facebook started out as a chronological feed, did it only become social media once it began algorithmically curating users' feeds?
I think it became social media when it enabled two-way/multi-way messaging, if that wasn't there from the start. If it was originally just a feed of posts, yeah it wasn't really social media, it was just another form of blogging.
IIRC twitter was originally called a "micro-blogging" platform, and "re-tweeting" and replying to tweets came later. At that point it became social media.
blogs often have a place for comments. twitter was a microblog that elevated comment replies to "first class tweet status" as a continuation of the microblog idea
Having worked on the problem for years, decentralized social networking is such as tar pit of privacy and security and social problems that I can't find myself excited by it anymore. We are clear what the problems with mainstream social networking at scale are now, and decentralization only seems to make them worse and more intractable.
I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.
It's unfortunate, and I don't necessarily want to say decentralization isn't viable at all. But I only see decentralization at best address the issue of scraping. It's solving different problems without necessarily addressing the core ones needed to make sure a new community is functional. But I think both kinds of tech can execute on addressing these issues.
I'm not against subscriptions per se, but I do think a one time entry cost is really all that's needed to achieve many of the desired effects. I'm probably in the minority as someone who'd rather pay $10 one time to enter a community once than $1-2/month to maintain my participation, though. I'm just personally tired of feeling like I'm paying a tax to construct something that may one day be good, rather than buying into a decently polished product upfront.
For the record, people working on decentralization should not stop working on it. For myself, I have moved on to other approaches with different goals, but it's a worthwhile endeavor and if anyone ever cracks it, it'll change the damn world. And the people working on it understand exactly how difficult it is, so nothing I say is news to them. But everyone should be clear-eyed about it. It's not a panacea, it's complicated on much more than a technical level and it's already incredibly complicated on a technical level.
And even if it works, there will still be carry-over of many of the problems we've seen with centralized social networks.
How do you decentralize a network that relies on dictionary semantics, the chaos of arbitrary imagery, basics of grammatically sequence signals?
It's oxymoronic. Our communication was developed in highly developed hierarchies for a reason: continual deception, deviance, anarchism, perversion, subversion always operating in conflict and in contrary to hierarchies.
Language is not self-organizing, signaling is not self-learning it self-regulating. The web opened the already existing pandora's box of Shannon's admittedly non-psychologically relevant info theory and went bust at scale.
Yeah kind of agree. Decentralised protocols are forced to expose a lot of data which can normally be kept private like users own likes.
Dunno necessarily if they are _forced_ to expose that data.
Something like OAuth means that you can give different levels of private data to different actors, based on what perms they request.
Then you just have whoever is holding your data anyway (it's gotta live somewhere) also handle the OAuth keys. That's how the Bluesky PDS system works, basically.
Now, there is an issue with blanket requesting/granting of perms (which an end user isn't necessarily going to know about), but IMO all that's missing from the Bluesky-style system is to have a way to reject individual OAuth grants (for example, making it so Bluesky doesn't have access to reading my likes, but it does have access to writing to my likes).
In a federated system, the best you can do is a soft delete request, and ignoring that request is easier than satisfying it.
If I have 100 followers on 100 different nodes, that means each node has access to (and holds on to) some portion of my data by way of those followers.
In a centralized system, a user having total control over their data (and the ability to delete it) is more feasible. I'm not saying modern systems are great about this, GDPR was necessary to force their hands, but federation makes it more technically difficult.
If I have to pay you to access a service, and I'm not doing so through one of a small number of anonymity-preserving cryptocurrencies such as Bitcoin or Monero, then the legitimate financial system has an ultimate veto on what I can say online.
It does if you don't pay to access the service as well, because the financial system is the underpinning of their ad network.
Even in a federated system, you can be blacklisted although it does take more coordination and work.
i2p and writing to the blockchain are an attempt to deal with that through permanence, but those are not without their own (serious) problems.
>I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.
"Startup engineer" believes the solution to decentralization is a startup, what a shock. We look forward to your launch.
I'm a consultant that builds for startups. I'm not an entrepreneur myself.
If I were to build something like this, I'd use a services non-profit model.
Ad-supported apps result in way too many perverse economic incentives in social media, as we've seen time and time again.
I worked on open source decentralized social networking for 12 years, starting before Facebook even launched. Decentralization, specifically political decentralization which is what federation is, makes the problems of moderation, third order social effects, privacy and spam exceedingly more difficult.
>Decentralization, specifically political decentralization which is what federation is, makes the problems of moderation, third order social effects, privacy and spam exceedingly more difficult.
I disagree that federation is "specifically political decentralization" but how so?
You claim that decentralization makes all of the problems of mainstream social networking worse and more intractable, but I think most of those problems come from the centralized nature of mainstream social media.
There is only one Facebook, and only one Twitter, and if you don't like the way Zuckerberg and Musk run things, too bad. If you don't like the way moderation works with an instance, you don't have to federate with it, you can create your own instance and moderate however you see fit.
This seems like a better solution than everyone being subject to the whims of a centralized service.
To clarify, I don't mean big P Politics, I mean political in the sense that each node is owned and operated separately, which means there are competing interests and a need to coordinate between them that extends beyond the technical. Extrapolated to N potential nodes creates a lot of conflicting incentives and perspectives that have to be managed. And if the network ever becomes concentrated in a handful of nodes or even one of them which is not unlikely, then we're effectively back at square one.
| if you don't like the way Zuckerberg and Musk run things, too bad
It's important to note we're optimizing for different things. When I say third-order social effects, it means the way that engagement algorithms and virality combine with massive scale to create a broadly negative effect on society. This comes in the form of addiction, how constant upward social comparison can lead to depression and burnout, or how in extreme situations, society's worst tendencies can be amplified into terrible results with Myanmar being the worst case scenario.
You assume centralization means total monopolization, which neither Twitter or Facebook or Reddit or anyone has been able to do. You may lose access to a specific audience, but nobody has a right to an audience. You can always put up a website, blog, write for an op-ed position at your local newspaper, hold a sign in a public square, etc. The mere existence of a centralized system with moderation is not a threat to freedom of speech.
Federation is a little bit more resilient but accounts can be blacklisted, and whole nodes can be blacklisted because of the behavior of a handful of accounts. And unfortunately, that little bit of resilience amplifies the problem of spam and bots, which for the average user is much bigger of a concern than losing their account. Not to mention privacy concerns, which is self-evident why an open system is more difficult than a closed one.
I'll concede that "worse" was poor wording, but intractable certainly wasn't. These problems become much more difficult to solve in a federated system.
However, most advocates of federation aren't interested in solving the same problems as I am, so that's where the dissonance comes from.
> Ultimately, big problems with an open social web include:
These two seem like the same problem:
> moderation
> spam
You need some way of distinguishing high quality from low quality posts. But we kind of already have that. Make likes public (what else are they even for?). Then show people posts from the people they follow or that the people they follow liked. Have a dislike button so that if you follow someone but always dislike the things they like, your client learns you don't want to see the things they like.
Now you don't see trash unless you follow people who like trash, and then whose fault is that?
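A minimal sketch of that scheme in Python, assuming in-memory sets for follows/likes/dislikes (all the names here are made up for illustration, not any real platform's API):

    from collections import defaultdict

    def build_feed(me, follows, likes, dislikes, posts):
        """follows/likes/dislikes: dict user -> set; posts: dict post_id -> author."""
        scores = defaultdict(int)
        for post_id, author in posts.items():
            if author in follows[me]:
                scores[post_id] += 1                    # posted by a followee
            scores[post_id] += sum(post_id in likes[f]  # liked by a followee
                                   for f in follows[me])
            if post_id in dislikes[me]:
                scores[post_id] = -1                    # learned negative signal
        # Nothing enters the feed without an endorsement from my follow graph.
        return sorted((p for p, s in scores.items() if s > 0),
                      key=lambda p: -scores[p])

The filter at the end is the "whose fault is that" property: a post only reaches you through people you chose to follow.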
> which now includes scrapers bringing your site to a crawl
This is a completely independent problem from spam. It's also something decentralized networks are actually good at. If more devices are requesting some data then there are more sources of it. Let the bots get the data from each other. Track share ratios so high traffic nodes with bad ratios get banned for leeching and it's cheaper for them to get a cloud node somewhere with cheap bandwidth and actually upload than to buy residential proxies to fight bans.
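A sketch of what that ratio tracking could look like (thresholds and field names are invented for illustration):

    def leech_bans(node_stats, min_ratio=0.2, min_bytes=10_000_000):
        """node_stats: dict node_id -> {"up": bytes_uploaded, "down": bytes_downloaded}.
        Flag high-traffic peers that download heavily but barely upload back."""
        banned = set()
        for node_id, s in node_stats.items():
            if s["down"] >= min_bytes and s["up"] / s["down"] < min_ratio:
                banned.add(node_id)  # bulk scraper that never contributes
        return banned

Low-traffic nodes never trip the check, so ordinary readers are unaffected; only bulk scrapers have to either contribute bandwidth or keep paying for proxies.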
> good faith verification
> posting transparency
It's not clear what these are, but they sound like kind of the same thing again. In particular, they sound like elements in the authoritarian censorship toolbox, which you don't actually need or want once you start showing people the posts they actually want to see, instead of a bunch of spam from anons that nobody they follow likes.
> [...] show people posts from the people they follow or that the people they follow liked.
Yes, this is a good system. It'll work particularly well at filtering spam because people largely agree what it is. One thing that will happen with your system is people will separate into cliques. But that's not the end of the world. Has anyone implemented Anthony's idea of using followees' likes to rank posts?
>You need some way of distinguishing high quality from low quality posts.
Yes. But I see curation more as a second-order problem to solve once the basics are taken care of. Moderation focuses on addressing the low quality, while curation makes sure the high-quality posts receive focus.
The tools needed for curation (filtering, finding similar posts/comments, popularity, following) are different from those needed to moderate or self-moderate (ignoring, downvoting, reporting). Neglecting the latter poisons a site before it can really start to curate for its users.
>This is a completely independent problem from spam.
Yeah, thinking more about it, it probably is a distinct category. It simply has a similar result of making a site unable to function.
>It's not clear what these are but they sound like kind of the same thing again
I can clarify. In short, posting transparency focuses more on the user, and good faith verification focuses more on the content. (I'm also horrible with naming, so I welcome better terms to describe these.)
- Posting transparency at this point has one big goal: ensure you know when a human or a bot is posting. But it extends to ensuring there's no impersonation, that there's no abuse of alt accounts, and no voting manipulation.
It can even extend in some domains to making sure, e.g., that a person who says they worked at Google actually worked at Google. But this is definitely a step that can overstep privacy boundaries.
- Good faith verification refers more to a duty to properly vet and fact-check information that is posted. It may include addressing misinformation and hate, or removing non-transparent intimate advice like legal/medical claims without sources or proper licensing. It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.
>they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see
Yes, they are. I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep them up are equally dangerous in an ad-driven platform. Being able to address that naturally requires some more authoritarian approaches.
That's why "good faith" is an important factor here. Any authoritarian act you introduce can only work on trust, and is easily broken by abuse. If we want incentives to change from "maximizing engagement" to "maximizing quality and community", we need to cull out malicious information.
We already grant some authoritarian power by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.
> Moderation focuses on addressing the low quality, while curation makes sure the high-quality posts receive focus.
This is effectively the same problem. The feed has a billion posts in it so if you're choosing from even the top half in terms of quality, the bottom decile is nowhere to be seen.
> Neglecting the latter poisons a site before it can really start to curate for its users.
That's assuming you start off with a fire hose. Suppose you only see someone's posts in your feed if a) you visit their profile, or b) someone you follow posted or liked them.
> ensure you know when a human or a bot is posting.
This is not possible and you should not attempt to do things that are known not to be possible.
It doesn't matter what kind of verification you do. Humans can verify an account and then hand it to a bot to post things. Also, alts are good; people should be able to have an account for posting about computers and a different account for posting about cooking or travel or politics.
What you're looking for is a way to rate limit account creation. But on day one you don't need that because your biggest problem is getting more users and by the time it's a problem you have a network effect and can just make them pay a pittance worth of cryptocurrency as a one-time fee if it's still a thing you want to do.
> It can even extend in some domains to making sure, e.g., that a person who says they worked at Google actually worked at Google.
This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does it. If the user wants to know if someone really works for Google they contact the company and ask them, and if the company says no then you tell everybody that and anyone who doesn't believe you can contact the company themselves.
> It may include addressing misinformation and hate, or removing non-transparent intimate advice like legal/medical claims without sources or proper licensing.
If someone does something illegal then you have the government arrest them. If it isn't illegal then it isn't to be censored. There is nothing for a social media thing to be involved in here and the previous attempts to do it were in error.
> It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.
To the extent that social media does such a thing, it does it exactly as above, i.e. as Reddit communities investigate things. If you want a professional organization dedicated to such things as an occupation, the thing you're trying to do is called investigative reporting, not social media.
> I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep them up are equally dangerous in an ad-driven platform.
No, they're much worse in an ad-driven platform, because then you're trying to maximize the amount of time people spend on the site and showing people rage bait and provocative trolling is an effective way to do that.
What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.
> We already grant some authoritarian power by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.
We should take away their ability to actually remove anything (censorship can GTFO) and instead give people a feed that they actually control, and can therefore configure to not show that stuff, because it is in reality not what they want to see.
>This is effectively the same problem.
Maybe when you get to the scale of Reddit it becomes the same problem. But a fledgling community is more likely to be dealing with dozens of real posts and hundreds of spam posts. Even then, the solutions differ between the two problem spaces, so I'm not so certain.
You can't easily automate a search for "quality", so most popular platforms focus on a mix of engagement and similarity to create a faux quality rating. Spam filtering and removal can be fairly automatic and accurate, as long as there are ways to appeal false positives (though these days, they may not even care about that).
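For a sense of how little machinery "fairly automatic" needs, here's a bare-bones naive Bayes spam score in Python (ignoring class priors and proper vocabulary smoothing; the training data and threshold are placeholders):

    import math
    from collections import Counter

    def train(docs):
        """docs: list of (text, is_spam) pairs; returns word counts per class."""
        spam, ham = Counter(), Counter()
        for text, is_spam in docs:
            (spam if is_spam else ham).update(text.lower().split())
        return spam, ham

    def spam_score(text, spam, ham):
        """Sum of per-word log-likelihood ratios; a score > 0 leans spam."""
        spam_total, ham_total = sum(spam.values()) + 1, sum(ham.values()) + 1
        score = 0.0
        for word in text.lower().split():
            score += math.log((spam[word] + 1) / spam_total)
            score -= math.log((ham[word] + 1) / ham_total)
        return score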
>This [ensure you know when a human or a bot is posting] is not possible and you should not attempt to do things that are known not to be possible.
Like all engineering, I'm not expecting perfection; I'm expecting a good effort at it. Is there anything stopping me from hooking an LLM up to my HN account and having it reply to all my comments? No. But I'm sure that if I took a naive approach to it, moderation would take note and take action on this account.
My proposal is twofold:
1. Have dedicated account types for authorized bots, to identify tools and other supportive functions that a community may want performed. They can even have different privileges, like being unable to be voted on (or to vote); a sketch of this follows after the list.
2. Take action on very blatant attempts to bot a human account (the threshold being even more blatant than my example above). If account creation isn't free or easy, a simple suspension or ban can be all that's needed to curb such behavior.
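A minimal sketch of the first point (hypothetical field names, not any existing platform's schema):

    from dataclasses import dataclass

    @dataclass
    class Account:
        name: str
        is_bot: bool = False        # declared at registration, shown on every post
        can_vote: bool = True
        can_be_voted_on: bool = True

    def register_bot(name: str) -> Account:
        # Bots are labeled and stripped of voting privileges, so supportive
        # tooling stays visible to users and out of ranking signals.
        return Account(name, is_bot=True, can_vote=False, can_be_voted_on=False)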
There will still be abuse, but the kinds of abuse that have caused major controversies over the years were not exactly the work of subtle masterminds. There was simply no incentive to take action once people reported them.
>This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does
Probably not. That kind of verification is more domain-specific, and that was an extreme example. Something trying to be Blind and focus on industry professionals might want to do verification, but probably not a casual tech forum.
It was ultimately an example of what transparency suggests here and how it differs from verification. This is another "good enough" example where I'm not expecting every post to be fact-checked. We simply shouldn't allow blatantly false users or content to go about unmoderated.
>What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.
Yes. This is why I don't expect such a problem to be solved by corporations. Outside of the brief flirtation from Meta, it's not like any of the biggest players in the game have shown much interest in any of the topics discussed here or in the article.
But the tools and people needed for such an initiative don't require millions in startup funding. I'm not even certain such a community can be scalable, financially speaking. But communities aren't necessarily formed, run, and maintained for purely financial reasons. Sometimes you just want to open your own bar and enjoy the people who come in, caring only about enough funds to keep the business running, not attempting to franchise it across the country.
>We should take away their ability to actually remove anything (censorship can GTFO) and instead give people a feed that they actually control, and can therefore configure to not show that stuff, because it is in reality not what they want to see.
If you want a platform that doesn't remove anything except the outright illegal, I don't think we can really beat 4chan. Nor is anyone trying to beat 4chan (maybe Voat still is, but I haven't looked there in years). I think it has that sector of community on lock.
But that aside: any modern community needs to be very opinionated upfront about what it allows and doesn't, in my eyes. Do you want to allow adult content and accept that over half your community's content will be porn? Do you want to take a hard line between adult and non-adult sub-communities? Do you want to minimize flame wars, or not tend to comments at all (beyond those breaking the site)? Should sub-communities even be a thing, or should all topics of all styles be thrown into a central feed where users opt in/out of certain tags? Is it fine for comments to mix in non-sequiturs in certain topics (e.g. politics in an otherwise non-political post)? These all need to be addressed, not necessarily on day one, but well before critical mass is achieved. See OnlyFans as a modern example of that result.
It's not about capital-C "Censorship" when it comes to being opinionated. It's about establishing norms upfront and fostering a community around those opinions. Those opinions should be shown upfront, before a user makes an account, so that they know what to expect, or whether they shouldn't bother with this community.
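As a toy illustration, those opinions could even live as explicit, published configuration (every field here is invented):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CommunityPolicy:
        allow_adult_content: bool
        separate_adult_subcommunities: bool
        moderate_flame_wars: bool
        use_subcommunities: bool       # vs. one central tag-filtered feed
        allow_off_topic_comments: bool

    # Publish the policy on the signup page so users know what they're joining.
    EXAMPLE_POLICY = CommunityPolicy(
        allow_adult_content=False,
        separate_adult_subcommunities=False,
        moderate_flame_wars=True,
        use_subcommunities=True,
        allow_off_topic_comments=False,
    )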
A lot of tech folks hate government ID schemes, but I think mDLs (mobile driver's licenses) with some sort of pairwise pseudonyms could help with spam and verification.
It would let you identify users uniquely, but without revealing too much sensitive information. It would let you verify things like "This user has a Michigan driver's license, and they have an ID 1234, which is unique to my system and not linkable to any other place they use that ID."
If you ban that user, they wouldn't be able to use that ID again with you.
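A minimal sketch of how such a pairwise pseudonym could be derived, assuming a wallet-held per-user secret (this is the general HMAC trick, not the actual mDL protocol):

    import hmac, hashlib

    def pairwise_pseudonym(user_secret: bytes, relying_party_id: str) -> str:
        """Derive a stable per-site ID. The same user always maps to the
        same pseudonym at a given site (so bans stick), but pseudonyms at
        different sites are unlinkable without the wallet's secret."""
        digest = hmac.new(user_secret, relying_party_id.encode(), hashlib.sha256)
        return digest.hexdigest()[:16]

The wallet would present that pseudonym alongside a signed attribute like "holds a Michigan driver's license", without ever revealing the license number itself.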
The alternative is that we continue to let unelected private operators like Cloudflare "solve" this problem.
Telegram added a feature where, if someone cold-DMs you, it shows their phone number's country and account age. When I see a two-month-old account with a Nigerian phone number, I know it's a bot and I can ignore it.
The EU’s eIDAS 2.0 specification for their digital wallet identity explicitly supports the use of pseudonyms for this exact purpose of “Anonymous authentication”.
That's awesome. Hopefully the US can get something similar.
Why are none of these a problem with Mastodon, then? Some instances do charge, but most don't.
Those are important reasons, but there are other reasons as well, such as concentration of market power in a few companies, which allows those companies to erect barriers to entry and shape law in ways that benefit themselves, as well as simply creating network effects that make it hard for new social-web projects to establish a foothold.
That's an even harder problem to solve. I do agree we should make sure that policy isn't manipulated by vested powers to make competition even harder.
But network effects seem to be a natural phenomenon of people wanting to establish a familiar routine. I look at Steam as an example here: while it has its own shady schemes behind the scenes (which I hope are addressed), it otherwise doesn't engage in the same dark patterns as other monopolies. Yet it still creates a strong network effect.
I think the main solace here is that you don't need to be dominant to create a good community. You need to focus instead on getting above a certain critical mass, where you keep a healthy stream of posting and participation that can sustain itself. Social media should ultimately be about establishing a space for a community to flourish, and small communities are just as valid.
"- moderation
- spam, which now includes scrapers bringing your site to a crawl
- good faith verification
- posting transparency"
And we have to think about how to hit these targets while:
- respecting individual sovereignty
- respecting privacy
- meeting any other obligations or responsibilities within reason
and of course, it must be EASY and dead simple to use.
It's doable, we've done far more impossible-seeming things just in the last 30 years, so it's just a matter of willpower now.
The #1 problem is server hosting.
It is interesting how it became a norm to just blindly assume the more decentralized something is the better it is. There isn’t any evidence this is true. Reality isn’t so reducible.
It'd be cool if you had to pay a certain amount of money to publish any message.
And then if you could verify you'd paid it in a completely P2P decentralized fashion.
I'm not a crypto fan, but I'd appreciate a message graph where high-signal messages "burned" or "donated" money to be flagged for attention.
I'd also like it if my attention were paid for by those wishing to have it, but that's a separate problem.
It's pure waste generation, but hashcash is a fairly old strategy for this, and it's one of the foundations of Bitcoin. There's no "proof of payment to any beneficial recipient", sadly, but it does throttle high-volume spammers pretty effectively.
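A toy version of the hashcash trick in Python (not the real hashcash stamp format, just the core mechanism):

    import hashlib, itertools

    def verify_stamp(stamp: str, bits: int = 20) -> bool:
        """Cheap check: are the first `bits` bits of sha256(stamp) all zero?"""
        digest = hashlib.sha256(stamp.encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - bits) == 0

    def mint_stamp(message: str, bits: int = 20) -> str:
        """Grind nonces until the hash condition holds. Expected cost is
        about 2**bits hashes per message, which is what throttles bulk
        senders; verification stays a single hash."""
        for nonce in itertools.count():
            stamp = f"{message}:{nonce}"
            if verify_stamp(stamp, bits):
                return stamp

At 20 bits that's roughly a million hashes per message: irrelevant for a person posting occasionally, ruinous at spam volumes.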
Maybe if you could prove you sent a payment to a charity node, and then signed your message with the receipt for verification...
Imagine a world where every City Hall has a vending machine you can use to donate a couple bucks to a charity of your choice, and receive an anonymous, one-time-use "some real human physically present donated real money to make this" token.
You could then spend the token with a forum to gain basic trust for an otherwise anonymous account.
I like that idea a lot.
The people willing to pay money to post messages are not a desirable demographic. It is one that includes people like spammers.
"The 19th reports at the intersection of gender, politics, and policy - a much-needed inclusive newsroom..." This isn't a problem with the distribution technology. This is a problem with the message, and its narrow niche.
The site's marketing is geared towards collecting donations in the US$20,000-and-up range.[1] That doesn't scale. They don't have viewer counts big enough to make it on payments in the $10/year range. So that doesn't scale either.
The back-end technology of this thing has zero bearing on those problems.
[1] https://19thnews.org/sponsorship/
To check out other FediForum keynotes, many demos showing off innovative open social web software, and notes from the FediForum unconference sessions, go to https://fediforum.org (disclaimer: FediForum co-organizer here)
I believe the more populist layer of the web became social media apps. Hosted LLMs (Claude, ChatGPT, etc.) are going to become the popular source of information, and therefore of narrative. What we must remember is that we should retain control of our thoughts, and be aware of how we can share them without financially interested parties claiming rights to their use or abuse. I am trying to solve some of these problems with NoteSub App - https://apps.apple.com/gb/app/notesub/id6742334239 - but have yet to overcome the real issue of how we can stop the middleman keeping the loop closed with him in between.
I've never really got social media in any of its forms. I use messaging apps to stay in contact with people I like, but that's about it.
I skimmed this article, and I still don't get it. I think group chats cover most of what the author is talking about, public and private ones. But this might be my lack of imagination. I feel the article, and by extension the talk, could have been a lot shorter.
> skimmed this article, I still don't get it.
But you're posting here, in social media, no? So you sought out something here that a group chat wouldn't give.
Most of the article is focused on making sure any social media (be it chats, a public forum, or email) isn't hijacked by vested powers who want to spread propaganda or drown the user in ads. One approach the article focuses on is decentralization, which gives a user the ability to take their ball and go home.
Of course, it's futile if the user doesn't care about wielding that power.
> But you're posting here, in social media, no? So you sought out something here that a group chat wouldn't give.
This is true, of course. I'm here interacting with strangers. But, for me, HN is about discovery not community like what the article talks about. I'd be just as content not posting if the ability wasn't there. I just don't agree that social media is that important.
I personally think what the article talks about is already available in the form of group chats on platforms like Signal. My impression, from the article, is that the author is extremely politically motivated and seems to believe social media is somehow a good thing, as long as the people they don't like can't control it, and likely can't use it? That last point might not be true.
Group chats are where real people socialise with their actual friends now. Social media is where people consume infinite slop feeds for entertainment. The days of people posting their weekend on Facebook are long gone.
> The days of people posting their weekend on Facebook are long gone.
All of my friends do this on instagram or snap.
Group chats are lowercase-s social media, but they still benefit from being open.
By open do you mean not centralised? I don't get the significance of big S social media. Functionally how would big S improve on group chats?
> By open do you mean not centralised? I don't get the significance of big S social media. Functionally how would big S improve on group chats?
Social media has two functions: chat (within groups/topics/...) and discovery (of groups/topics/...). So unless we rely only on IRL discovery, we need a way to do discovery online.
Discovery is probably the main problem social media creates. Almost all of these problems solve themselves when you remove discovery. If someone in your friends group chat is spamming porn you just remove them. There's no need for the platform to intervene here, small groups of people can moderate their own friend groups.
Once you start algorithmically shoving content to people you have to start worrying about spam, trolling, politics, copyright, and all kinds of issues. The best discovery system is friends sharing chat invite links to other friends who are interested.
OK, but what if my friends have terrible taste?
Go to events more your taste and find new people to invite you to things.
Social media is simply an extension from cybernetics to the principles of cog-sci as a "protocol" network where status and control are the primary forces mediated. This is irrefutable - the web was built as an extension of the cog-sci parameters of information as control.
Social media can't be saved, it can only be revolutionary as a development arena for a new form of language.
"The subject of integration was socialization; the subject of coordination was communication. Both were part of the theme of control...Cybernetics dispensed with the need for biological organisms, it as the parent to cognitive science, where the social is theorized strictly in terms of the exchange of information. Receivers, senses of signs need to be known in terms of channels, capacities, error rates, frequencies and so forth." Haraway Primate Visions.
I don't understand how technologists and coders can be this naive to the ramifications of electronically externalizing signals which start as arbitrary in person, and then clearly spiral out of control once accelerated and cut-off from the initial conditions.
This really reads to me like an example of pseudo-profound bullshit, and yet I'm sure you do mean something - could you explain what?
> What specific pain point are you solving that keeps people on WhatsApp despite the surveillance risk, or on X despite the white supremacy?
Why wouldn't a genuinely open social web allow people to communicate content that Ben Werdmuller thinks constitutes white supremacy, just as one can on X? Ideas and opinions that Ben Werdmuller (and people with similar activist politics to him) think constitute white supremacy are very popular among huge segments of the English-speaking public, and if it's even possible for some moderator with politics like Werdmuller to prevent these messages from being promulgated (as was the case at Twitter until Musk bought it in 2022 and fired all the Trust and Safety people with politics similar to Werdmuller's), then it is not meaningfully open. If this is not possible, then would people with Werdmuller's politics still want to use an open social web, rather than a closed social web that lets moderators attempt to suppress content they deem white supremacist?
> As I was writing this talk, an entire apartment building in Chicago was raided. Adults were separated into trucks based on race, regardless of their citizenship status. Children were zip tied to each other.
> And we are at the foothills of this. Every week, it ratchets up. Every week, there’s something new. Every week, there’s a new restrictive social media policy or a news outlet disappears, removing our ability to accurately learn about what’s happening around us.
The reaction to the raid of that apartment building in Chicago on many social media platforms was the specific meme-phrase "this is what I voted for", and indeed Donald Trump openly ran on doing this, and won the US presidential election. What prevents someone from using open social media tech to call for going harder on deportations, or to spread news stories about violent crimes and fraud committed by immigrants? If anything can prevent this, how can the platform be said to be actually open?
> We all know about Twitter acquirer Elon Musk, who bent the platform to fit his political worldview. But he’s not alone.
> Here’s Microsoft CEO Satya Nadella, owner of LinkedIn, who contributed a million dollars to Trump’s inauguration fund.
> Here’s Mark Zuckerberg, who owns Threads, Facebook, Instagram, and WhatsApp, who said that he feels optimistic about the new administration’s agenda.
> And here’s Larry Ellison, who will control TikTok in the US, who was a major investor in Trump, and who one advisor called, in a WIRED interview, the shadow President of the United States.
> Social media is very quickly becoming aligned with a state that in itself is becoming increasingly authoritarian.
This was the real why. When control amasses to the few we end up in a place where there is a dissonance between what we perceive to be true and what is actually true. The voice of the dictator will say one thing but the people's lived experience will say something else. I don't think mastodon or Bluesky or even Jack Dorsey's new project Bitchat solves any of this. It goes much deeper. It is ideological. It is values driven. The outcome is ultimately decided by the motives of the people who start it or run it. I just don't think any western driven values can be the basis of a new platform because a large majority of the world are not from the west. For better or worse, you have the platforms of the west. They are US centric and they will dominate. Anything grassroots and fundamentally opposed to that will not come from the west. It must come authentically from those who need it.
Spritely is the solution. Been baking for a few years now. Just pushed an update last week, in fact: https://spritely.institute/
While I tend to support there being open social alternatives, I haven’t really seen the people behind them talk about the most important aspect: how will you attract and retain users? There has to be more to the value proposition than “it’s open”. The vast majority of users simply do not care about this. They want to be where their friends, family, and favorite content creators are. They want innovation in both content and format. Until the well intentioned people behind these various open web platforms and non-platforms internalize and act on these realities, the whole enterprise is doomed to be a niche movement that will eventually go out with a whimper.
Whatever happened to Diaspora?
The name killed it. If you know what it means, it doesn't bear any relevance to social media. If you don't know what it means, it sounds like a gastric disorder.
Why was this chosen to be a keynote? This talk seems to not care about open social media, but rather that existing social media sites don't follow the author's political agenda. Having a keynote trying to rally people into building sites that support a niche political agenda that the general public doesn't agree with doesn't accomplish the goals of making open social media more viable. This along with equating things with "Nazis" just further alienates people.
I read this comment, went back to the article, and then came back to this comment. I have no idea what niche political agenda you're talking about- the message of the article is basically "solve problems your users are actually facing, not problems you think they have".
You can apply the concepts the author talks about to _literally_ any group that would make use of social media.
>solve problems your users are actually facing, not problems you think they have
>You can apply the concepts the author talks about to _literally_ any group
The presentation could have been modified to avoid alienating people if the author had focused on championing how open social media allows for the ability to solve these problems.
>I have no idea what niche political agenda you're talking about
Search the page for "Why should anyone care?", and you'll see it. In that section of the talk he complains that the political situation in America doesn't match his views. Then in the next section, "The capitulation of social media", he complains about how other social media sites don't match his politics. Then in the next section, "The decline of journalism", the speaker tries to argue his political opinion that journalists are a good thing. Then in the next section, "The problem is global", he explains that more places than just America don't share his political view.
I'll stop here, but it goes on further, even to the very last sentence. I thought this was supposed to be a technology keynote, but the speaker turned it into a place to complain about the political situation of the world.
Being against Nazis is not a niche political agenda.
Social media relies on our dead, arbitrary signaling system, language, which, once accelerated, becomes a cybernetic/cog-sci control network no matter how it's operated. Language is about control, status, and bias before it's an attempt to communicate information. It's doomed as an external system in arbitrary symbols.