Problem: Network-level moderation of content on federated networks leads to fragmentation and lower total value for users
Federated networks consist of instances that are voluntarily connected to one another. The benefit of being connected is that users across the network can interact with one another. On the other hand, above the protocol level there is no universal set of acceptable interactions. Instance administrators can moderate their own users and the content they generate, and they can moderate other instances on their users' behalf.
Content moderation is complicated and time-consuming. It is a form of arms race pitting those who wish to spread their message against those who wish such messaging would not propagate. An instance can generate high volumes of unwanted content, creating asymmetric load on the moderation capacity of users and administrators across the network. To manage their time, instance operators may terminate connections even though some users on each instance might still obtain value through their interaction.
According to Metcalfe's Law, the value of a network grows with the square of the network's size. This means that a single network, such as the one run by Facebook, will be much more valuable to users than many separate networks, even if the separate networks serve the same number of users in total. This is important because users are unlikely to switch to a federated social network that provides less value to them.
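The fragmentation cost implied by Metcalfe's Law can be made concrete with a toy calculation. This sketch assumes value is exactly proportional to n² (the constant of proportionality is omitted, so the units are arbitrary); the function name and user count are illustrative, not from the original post.

```python
# Toy illustration of Metcalfe's Law: network value taken as n^2
# (proportionality constant omitted; units are arbitrary).

def metcalfe_value(n: int) -> int:
    """Value of a network with n users, modeled as n squared."""
    return n * n

users = 1_000_000
unified = metcalfe_value(2 * users)      # one network of 2n users: (2n)^2 = 4n^2
fragmented = 2 * metcalfe_value(users)   # two networks of n users:  2 * n^2

# Under this model, splitting a network in half halves its total value:
assert unified == 2 * fragmented
```

The point of the sketch is only the ratio: a federation split into two equal, disconnected halves retains half the value of the unified network under the n² model, which is why defederation is framed here as a loss of total user value.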
As a reference point, Facebook's market capitalization recently reached $1048.53B. The value of a federated social network is harder to measure. On one hand, improved federation helps users migrate away from commercial environments, meaning more time can be spent on substance, entertainment, or human connection. In addition, federated networks enable greater freedom of expression without censorship. Furthermore, users can have direct involvement in the development of features that provide value to them. Though hard to quantify, it's hard to imagine the value of solving this problem being lower than hundreds of billions of dollars globally.
Is this problem described accurately? What about the description above is ambiguous or incorrect? How can we determine the number of users that are connected to the largest federation?
Hello. This is the first issue in this problem-focused repository I'm calling The Fediverse. This was inspired by Pieter Hintjens/ZeroMQ's C4 development process wherein all that's logged in an issue tracker is a clear problem description. For more information about this repo and the process we're following see the README.
I'll add my own two cents to this, and some links to bring related fragments together ;)
First of all, the often-heard argument for being on the big traditional social media like FB and Twitter is network effects: "My friends are over there", which creates FOMO. On the other hand, probably a lot of the toxicity of these platforms is caused by the huge scale too, and by algorithms that expose you to content with the sole purpose of driving engagement (read: maximising time online). This does not have the user's interest at heart, and brings you into contact with mostly anonymous people to engage in flame wars, outrage posts, and all kinds of shallow interactions. While addictive, the value it provides is very dubious.
Besides Metcalfe, another important metric is Dunbar's number, which relates to the number of stable social relationships one can maintain. Here I think the Fediverse can truly shine. I feel that the concept of "Community" is crucial, and, relatedly, the interconnection of diverse communities. I call this the Community has no Boundary paradigm. The concept of Community can be federated in a better way than is currently the case, where federated instances are implicit communities.
Now on the topic of Moderation. There's controversy around it, especially on the (slightly anarchistic) Fediverse, where it is easily seen as censorship / a restriction of freedom. Moderation is distrusted. The individual fedizen should be empowered to do as much of it themselves as possible. Yet I think that, on top of that, some level of admin / mod activity is required to keep an instance / community healthy and thriving.
It is also complicated and time-consuming, as you say. I feel that part of the problem is that moderation happens mostly out of sight of the regular fedizen. The efforts of admins working selflessly to provide good service go under the radar. Joining a good instance provided as a free resource is taken as a given, without much thought for the work involved. It leads to admin work being underappreciated.
I started a brainstorm on Lemmy and SocialHub on making Moderation more of a first-class citizen of the Fediverse. There are two ideas in there: Federated Moderation and Delegated Moderation. You can read more on Lemmy (which has a link to the related SocialHub topic): https://lemmy.ml/post/60475
You raise many important issues that concern The Fediverse, but I want to keep each issue's scope as small as possible. From your post we could create five more issues, and I would love nothing more than to see others create tightly scoped issues. It is my hope that by being laser-focused on each issue we can reach new understanding and determine whether each is valuable to solve on its own.
To the issue at hand, these are what I see as directly relevant to the problem statement "Network-level moderation of content on federated networks leads to fragmentation and lower total value for users":
Your comments about network effects and toxicity I would summarize as "larger networks are not positive for users, so net negative in value." This depends on how we measure value. Certainly market cap isn't appropriate for The Fediverse; I only mentioned it as a reference point because I have nothing better. Perhaps value can be measured by surveying people on how they feel as they use The Fediverse and aggregating that in some way. Being able to measure user value is certainly essential to evaluating this problem statement. In some cases, technology has value even for non-users, so "for users" could be dropped.
With The Community has no Boundary, you bring up an interesting point: social network topology matters. Some folks clearly do not want to be connected even to the same network as certain others, and in that sense they get greater value from deliberate fragmentation.
I'm wondering if I should add above that there are alternatives to network-level moderation. For example, relays could pass along information that they don't store or display to their own users. Then moderation would take place at the layer between the network and consumption on a local instance. I hesitated to add that since it seems to be advocating for a solution, when what I want to do here is prove out problems.
Some excellent discussion here, which raises many issues off-topic to this problem statement, but the relevant points made are:
- Against: Fragmentation may provide higher overall value, depending on the accuracy of the scope of fragmentation, if value metrics can be obtained from individual users. source
- Pro: macgirvin observes that "We already have a split and fragmented network due to the rampant site blocks." source
Next, I'll be updating the problem statement and will commit it as a markdown document against which others may create pull requests. This leads us into experimental territory as far as how C4 might work on a text. C4 would suggest merging any pull request so that it becomes part of the record, and "the market" (anyone may fork the repo, or take the text and produce derivatives) may then run with the version of the text that suits them. Maintainers here may also revert merged PRs that break the build, i.e. don't make sense in English.
I'm also going to use the PR hack, promoting pull requesters to maintainers until we get a few.