We do not intend to enforce any sort of identity verification or unique identities in distributed[C]. We do not think encouraging people to doxx themselves is a good idea.
This goes beyond concerns about privacy, though. We believe that having multiple personas, which you can use depending on the context you are in, is healthy.
This raises concerns regarding disinformation. If the platform is uncensorable, and we do not plan to enforce identity, how will this not become a cesspool of fake news?
Currently there are two camps that care about uncensorability, but for different reasons.
The first thinks that Facebook shouldn’t be able to decide what you see or don’t see, but they do want to stop misinformation from reaching the masses (they don’t see that as “censorship”).
The second thinks that they, specifically, shouldn’t be censored. At this point this camp consists largely of racist zealots and people who are being canceled elsewhere and still want to be able to reach their audience.
Something truly uncensorable at mass scale would horrify the first, while being attractive to the second.
I think one way around this is system limitations. Misinformation is not new - governments have used propaganda for ages, and your crazy uncle was spouting theories at dinner before Facebook existed. What is new is misinformation at the scale, speed, and reach we have now. We are monkeys, and our brains are incapable of processing the degrees of separation between whoever produced a piece of content and the seeming immediacy with which it appears on our feed. People take it as true because Joe posted it, and they like Joe; but if they saw that Joe is posting it because Jane did, and Jane did because Jack forwarded it, and they consider Jack an idiot, maybe they’d think differently.
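One way to act on this would be to surface the full forwarding chain in the interface, rather than only the last person who shared a post. A minimal sketch of that idea - all names, and the `Post` model itself, are hypothetical, not part of any actual distributed[C] design:

```python
from dataclasses import dataclass, field


@dataclass
class Post:
    """A shared piece of content and its forwarding history (illustrative only)."""
    content: str
    # Chain of user names, oldest first: the original poster, then each forwarder.
    forward_chain: list[str] = field(default_factory=list)


def provenance_label(post: Post) -> str:
    """Render every hop between the reader and the original source,
    newest sharer first, instead of showing only who shared it last."""
    if not post.forward_chain:
        return "original post"
    return " ← ".join(reversed(post.forward_chain))


# Jack originated the post, Jane forwarded it, Joe forwarded it last.
post = Post("Breaking news!", forward_chain=["Jack", "Jane", "Joe"])
print(provenance_label(post))  # Joe ← Jane ← Jack
```

The reader who likes Joe but considers Jack an idiot now sees Jack at the end of the chain, which is exactly the context the feed currently hides.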
What we need to do is artificially enforce a small-world approach. You are uncensorable, but the system limits the circle of people you can follow - let’s say to 100. This has several positive effects:
Published: 2021-09-14