Therefore, moderation or fact-checking by the public does not mean the result is correct, objective, or fair. If a hundred more users rate my posts positively tomorrow, that makes the posts more popular, not more true.
The main sources of misinformation are not single users but networks of users. Crowd moderation is supposed to be evidence of accuracy, but if that moderation is not done independently, it cannot be relied upon either. And this is exactly what is happening more and more on X. Today's wars are also being waged largely online, and the truth is always the first victim. If you search X for reasons why you are right about a particular topic, you will always find "the evidence". The algorithm will then feed you more of the same, making you ever more convinced that you are right. With crowd moderation done by users who receive their information through that same algorithm, you will not get an accurate picture.
Today, more and more people get their "news" from social media. I tend to get mine from AP News, Newsweek, Reuters, factual… trying to find more neutral sources and then form my own picture as objectively as possible.
The European Commission has conducted an extensive study of disinformation and how it spreads on social media. If you want to check the facts…