Social networks are usually run by single central organizations. When you visit Facebook, for example, your computer talks to servers located in giant datacenters owned by a company called “Facebook, Inc.”. In addition to running all the servers and networking equipment, Facebook, Inc. also does things like control who’s allowed to use the website, pick trending topics, and tweak the algorithm that decides what shows up in your feed.
Lately, though, there’s been some interest in decentralized social networks. These systems let lots of different people and groups run parts of the service. (Often, they’re set up so that anyone can join in just by running a special program on their computer.) I’ve been seeing projects like Mastodon, PeerTube, and Scuttlebutt get released and slowly gain in popularity over the past few years. If you want to stretch the definition of “social network”, there’s also IPFS, Dat, and Urbit.
These kinds of projects are exciting to me because they could solve a lot of the problems with social networks. It’s a lot harder for decentralized networks to go down, or make user-hostile changes, or submit to pressure from authoritarian regimes. They’re also kind of cool from a technical perspective. Despite all this, I haven’t found a project that I personally want to use or recommend to people yet—every decentralized social network I’ve looked at is missing something or other.
I think I’ve managed to boil down the things I want to see in a decentralized network into three key features:
When your computer talks to a website, that website receives your IP address. There’s a very good technical reason for this: if you ask the website to send you some information, the website needs to know where to send it. (A bit like a return address on a letter.) But IP addresses can also reveal information about you. They usually indicate roughly where you live and who your Internet Service Provider is, and they can be used to connect multiple online accounts/posts/views back to the same person. Attackers can also use your IP address to aim things like DDoS attacks your way. On centralized networks, though, this isn’t the biggest problem in the world—Facebook already has lots of information about you, and they probably aren’t going to DDoS their own users.
But in some decentralized social networks, random other users of the network get your IP address and can link it to your activity. They can build websites like iknowwhatyoudownload.com, which lets you look up anyone’s BitTorrent activity if you know their IP address. These random people aren’t even bound by the flimsy privacy policies and PR considerations that Facebook is—they can do whatever they want with the information.
Luckily, there are techniques that decentralized networks can use to avoid this. I’m not an expert on the subject, but systems like I2P or Tor seem to work by sending data through multiple other nodes in the network before it reaches its final destination. You can use encryption to make sure none of the intermediate nodes can see the data, and you can make sure that the destination only knows who the last intermediate node is and not who the original sender is.
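To make that layering idea concrete, here’s a toy Python sketch. Everything in it is made up for illustration: the XOR “cipher” is not secure (real networks use vetted ciphers and real key exchange), and the node names and keys are stand-ins, not how Tor or I2P actually work. The point is just that each relay can peel exactly one layer, learning only the next hop.

```python
import hashlib
import json

def keystream_xor(key, data):
    # Toy symmetric "cipher": XOR with a SHA-256-derived keystream.
    # Encrypting and decrypting are the same operation.
    # NOT secure -- only here to show the layering.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def wrap(message, route):
    # route: list of (node_name, node_key) pairs, first hop to last hop.
    # Build the onion from the inside out, so each hop's layer contains
    # only the name of the *next* hop plus an opaque inner blob.
    blob = message
    next_name = None
    for name, key in reversed(route):
        layer = json.dumps({"next": next_name, "payload": blob.hex()}).encode()
        blob = keystream_xor(key, layer)
        next_name = name
    return blob  # hand this to the first node in the route

def unwrap(blob, key):
    # A node peels its own layer: it learns the next hop (or None if the
    # message is for it) and gets an inner blob it cannot read.
    layer = json.loads(keystream_xor(key, blob))
    return layer["next"], bytes.fromhex(layer["payload"])
```

If you wrap a message over nodes A, B, and C, then A’s `unwrap` reveals only “forward this to B”, B’s reveals only “forward this to C”, and only C recovers the message itself—so no single relay knows both who sent it and what it says.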
This doesn’t mean that the actual posts or users of the service have to be anonymous—you can definitely implement things like user profiles. It just means not leaking users’ IP addresses.
Hosting data on the Internet isn’t free. You have to buy hard drives, and you have to pay for electricity and Internet access. Usually, whoever owns the website pays to host it; I spend a few dollars each month from my salary, and Facebook spends tens of millions of dollars that they get from advertising (not, as far as I know, secret backroom deals with the Illuminati). But the whole point of decentralization is that no single person or company owns the website! So who pays for it?
Some decentralized networks try to solve this problem with cryptocurrency schemes, but this means that anyone who wants to try out the network has to put their credit card information into a sketchy website and transfer money to a sketchy offshore bank account and do other sketchy cryptocurrency stuff. (Also, cryptocurrencies are usually outright scams.) Other decentralized networks have “instances” which one person pays for and a bunch of other people can use. But instances have a single point of failure: if the person who runs the instance decides to shut it down, all of the accounts and posts and stuff on that instance are gone forever.
The solution I like the best is to let anyone host anything they want. If Bob really likes my posts and wants to help keep them online, he can set up his own little server and copy all my posts to it. Then, when someone wants to view my posts, their computer will search for anyone who has them and ask for a copy. Could be me, could be Bob, could be the Internet Archive. Their computer downloads the posts and uses cryptographic hashes/signatures to make sure no one tampered with them.
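The hash-checking part is simple enough to sketch in a few lines of Python. This is just the content-addressing idea—a post’s address is the SHA-256 hash of its bytes, so any host’s copy can be verified—and the “hosts” here are plain dicts standing in for servers. (A real network would also need signatures so you can trust updates from a particular author, which this sketch doesn’t cover.)

```python
import hashlib

def address_of(post):
    # A post's "address" is the hash of its contents, so the address is
    # the same no matter who happens to be hosting a copy.
    return hashlib.sha256(post).hexdigest()

def fetch(address, hosts):
    # Ask every host we know about, and accept the first copy whose hash
    # matches the address -- a tampered copy simply fails the check.
    for host in hosts:
        copy = host.get(address)
        if copy is not None and address_of(copy) == address:
            return copy
    return None
```

With this scheme it doesn’t matter whether the honest copy comes from me, Bob, or the Internet Archive: a host that serves modified bytes produces the wrong hash, and the reader’s computer just moves on to the next host.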
This kind of “grassroots” system has some huge advantages. Posts that are more popular will get hosted more; if lots of people want to see something, there’ll be lots of network bandwidth available to show it to them. And all it takes is one person who cares to keep something alive on the network, even if the person who originally created it isn’t around anymore.
Social networks need some kind of moderation to make sure your feed isn’t filled with spam, or viruses, or other harmful content. Companies like Facebook handle this in two ways: by developing machine-learning algorithms to try and automatically detect bad stuff, and by hiring huge teams of human moderators to try and detect it manually. But both machine-learning algorithms and people make mistakes sometimes. And even if they don’t, they’re still a single point of failure—if Facebook, Inc. goes out of business, or doesn’t speak your language, or decides it’s not worth allowing memes that upset the Chinese government, you’re kind of outta luck.
It might be tempting to try and solve these issues by going in the complete opposite direction: no moderation whatsoever. But then you’re stuck with the original problem of your feed being filled with harmful content! If a totally unmoderated platform isn’t overrun with spam yet, that just means it isn’t popular enough for spammers to notice it.
Ad blockers are faced with a similar problem, and they’ve found a very good middle-ground solution. If you open uBlock Origin’s settings page, there’s a “Filter lists” tab where you can choose which people’s rules to use. (Each rule is the name of one specific website or advertisement to block.) There are some filter lists that almost everyone uses, like EasyList and EasyPrivacy, and there are some that are more niche or regional, like Frellwit’s Swedish Filter and Oficjalne polskie filtry przeciwko alertom o Adblocku. You can also write your own rules, both to block things that EasyList missed and to un-block things that EasyList blocked by mistake.
Content moderation on a social network could work in a very similar way, with different groups of moderators creating different lists of banned users and deleted posts. You could have lists almost everyone uses that block things like spam, and you could also have lists for more controversial stuff like adult content and criticism of the Chinese government. And sort of like how “shared blocklists” work on Twitter, smaller groups could create their own lists of users and posts to ban.
But if someone really needed to see a post that had been deleted by a moderator—like if they were worried it contained their personal information or threats against them—they could manually override that block. (They’d be able to see it, but it wouldn’t affect anyone else.)
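Here’s a small Python sketch of how that could fit together—subscribed block lists combined like uBlock Origin’s filter lists, with a personal override that wins over everything. The function name, the list names, and the post IDs are all made up for illustration; real lists would presumably be signed and fetched over the network.

```python
def visible(post_id, subscribed_lists, my_overrides=frozenset()):
    # subscribed_lists: the block lists this user has opted in to
    # (each one a set of banned post ids), chosen the same way you
    # pick filter lists in an ad blocker's settings.
    if post_id in my_overrides:
        return True  # a personal override beats every subscribed list
    return all(post_id not in blocked for blocked in subscribed_lists)
```

So a post banned by any subscribed list is hidden by default, but a user who needs to see one specific post can add it to their overrides—affecting only their own feed, not anyone else’s.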
Put together, I think these three features would be really powerful for a decentralized social network.
If you know of a project that offers all three of these, or you’re working on one, please let me know by emailing carter