Fixing Facebook: Step 1

If you were Mark Zuckerberg and you wanted to get ahead of potential government regulation of your business, what would you fix first? Would you try to curtail social media addiction, protect privacy, reduce mental health risks, reduce weaponized information, eliminate hate speech, flag misinformation, deal with data governance, or address some other seemingly intractable problem? Here’s what I’d fix first.


Identity vs. Anonymity

Facebook (FB) stands apart from Twitter (TWTR) and many other social media platforms because, in theory, you know who everyone is on Facebook. You see a profile picture and a first and last name, and there is some tacit understanding that Facebook has terms and conditions that ensure people are who they say they are. This is in stark contrast to Twitter, where you are as likely to meet a bot as you are to meet a human being.

But in practice, Facebook does not go far enough to guarantee that the people behind Facebook profiles are who they say they are. And when you link to a piece of propaganda, Facebook does not force you to check the veracity of the content you are amplifying.

This is a non-trivial issue. Imagine how much harder it would be to post misinformation on Facebook if you were forced to stand behind and be responsible for the veracity of your claims. Abusers of the terms and conditions would be warned, then removed – never to return – because their identity would be known to Facebook.

Taken a step further, posting misinformation would subject the offending user to libel or defamation claims in a court of law. Lawsuits could easily be filed against someone whose identity is known. A mechanism like this would not stop all misinformation from spreading, nor would it change the minds of the “true believers.” However, it would dramatically reduce the amplification of fake news.

How It Might Work

Imagine having to provide two forms of government-issued photo identification as well as a credit card when signing up for a Facebook account. This is a crazy amount of friction between consumers and the platform, but it would instantly separate people from bots. If Facebook added a $1.99 per month charge, it would practically end large-scale non-human traffic on the platform.

Now imagine a verified user badge, an unverified user badge, and a user preference that would keep unverified users’ posts from showing up in your feed.

Lastly, add a checkbox to every link and upload dialog box requiring the user to agree that they are responsible for the content they are about to post (or share) and that they understand the consequences of violating Facebook’s terms and conditions.
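To make the idea concrete, here is a rough sketch of how the verification badge, the unverified-posts preference, and the responsibility checkbox could fit together. It is purely illustrative: the data model and function names below are hypothetical and are not part of any real Facebook system.

# Illustrative sketch only. User, Post, compose_post, and build_feed are
# hypothetical names; nothing here corresponds to an actual Facebook interface.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    identity_verified: bool               # passed the two-ID plus credit card check
    hide_unverified_posts: bool = False   # the proposed feed preference

@dataclass
class Post:
    author: User
    content: str
    responsibility_acknowledged: bool     # the proposed checkbox

def compose_post(author: User, content: str, acknowledged: bool) -> Post:
    # Refuse to create the post unless the author accepts responsibility for it.
    if not acknowledged:
        raise ValueError("You must accept responsibility for this content before posting.")
    return Post(author=author, content=content, responsibility_acknowledged=True)

def build_feed(viewer: User, posts: list[Post]) -> list[Post]:
    # Apply the viewer's preference: optionally drop posts from unverified accounts.
    if viewer.hide_unverified_posts:
        return [p for p in posts if p.author.identity_verified]
    return posts

The point of the sketch is that the feed filter and the checkbox are trivial to build; the hard part is the identity verification behind the identity_verified flag.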

What Might Happen

First and foremost, Facebook would lose a lot of users. Maybe a billion people would not want to subject themselves to this kind of identity verification and proof of life. So be it. If you won’t take responsibility for your posts, you should not be allowed to post. It’s that simple.

This Does Nothing to Change the Algorithm

Importantly, this quick fix does nothing to solve the greater problem with Facebook and all social media. You would still be teaching Facebook’s AI what you like and what keeps you most engaged. You would still see all the content posted in your personal echo chamber. The difference is that you would know exactly who was responsible for it. My guess is that people would be more careful about posting or sharing content they would be held responsible for.

Shouldn’t Facebook Be Responsible for Content?

Should Facebook be responsible for the content people post? It’s an interesting question. Policing that content automatically may be technologically possible one day, but today’s technology has not reached the point where we could reasonably expect an AI system to filter out all of the harmful content that is posted. It is far easier to shift the responsibility to the people posting than to rely on an algorithm to do the work.

What about All the Other Problems with Social Media?

The problems caused by social media include, but are in no way limited to, social media addiction, invasion of privacy, mental health risks, weaponized information, hate speech, misinformation, and generally poor data governance. My identity fix would only impact weaponized information, misinformation, and hate speech and would tangentially help with some of the other issues. Admittedly, it’s a quick fix. But it would be a bold shift of responsibility from the platform to the user – which is absolutely required if we are going to make meaningful progress toward fixing social media.

Are there fixes for Facebragging and the mental health issues Facebragging causes? (Classic Facebrags: “Wow! The beach here in Ibiza is so crowded today!” or when a professional swimsuit model innocently asks, “Does this suit make me look fat?”) Yes, but they are far more complicated, as Facebragging and the anxiety it causes are more about sociology than technology.

Approximately 75 percent of all internet users are on social media. And while social media has the unique ability to exponentially amplify evil, it is not the root cause of the evil it amplifies – it is simply a reflection of its users.

The good news is that you can start to fix the most complicated problems on every social media platform right now. Be kinder, post positive messages, and comport yourself in a civil way even when you disagree with someone’s point of view. Until we can figure out how to regulate social media, random acts of kindness may be the best fix of all.

Disclosure: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.
