“Solving” Misinformation

Today I’m thinking about online misinformation. Sure, you can try to address it systemically, through “fact checking” on platforms and maybe even regulation. But I think these are only superficial fixes that don’t address the root causes.

Seems to me that a space made up of humans is always going to have (very human) lying and deception, along with the misinformation that spreads from people simply not having their facts straight. It’s a fact of life, and one you can never totally design or regulate out of existence.

I think the closest “solution” to misinformation (incidental) and disinformation (intentional) online is always going to be a widespread understanding that, as a user, you should be inherently skeptical of what you see and hear digitally.

Nothing in decades of the internet’s development has changed this fact, not even the high-trust activities that now happen there, like shopping and banking, where you still need to be careful about who you hand your financial information to.

“On the Internet, nobody knows you’re a dog.” (the classic New Yorker cartoon)

On the internet, nobody knows you're a dog, or if you're talking to one on the other end, or if it's a dog publishing a blog about why you should always give your steaks to them, or if it's a social network run by dogs that lets other members of their pack bark endlessly about how cats are ruining the neighborhood, etc.

As long as human interactions are mediated by a screen (or goggles in the coming “metaverse”), there will be a certain loss of truth, social cues, and context in our interactions, the very cues that otherwise help us determine the “truthiness” of information and the trustworthiness of actors. There will also be a constant chance for middlemen to meddle in the medium, for better or worse, especially as we get farther from controlling the infrastructure ourselves.

None of this absolves platforms of their role in spreading mis- and disinformation, though. Many of the design decisions they’ve made help give information its credibility, e.g. via automated curation and popularity metrics, and they could just as deliberately design their software to make those signals harder to game or weaponize. For example, on social platforms, it could be as simple as removing the number of “upvotes,” “likes,” and “follows” you see: little numbers on a screen that nonetheless signal social legitimacy, both for accounts (whether dog, robot, or human) and their posts, in the low-fidelity social space that is the internet.
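To make that concrete, here’s a minimal sketch of the “just don’t render the numbers” idea. It’s entirely hypothetical: the types and names below are made up for illustration, not any real platform’s API.

```typescript
// Hypothetical sketch: none of these names come from a real platform's API.
interface Post {
  author: string;
  body: string;
  likeCount: number;
  authorFollowerCount: number;
}

// Render a post as plain text. When showEngagement is false, the popularity
// signals (likes, follower count) are never displayed, so they can't lend
// a post or its author unearned social legitimacy.
function renderPost(post: Post, showEngagement: boolean): string {
  const lines = [`@${post.author}`, post.body];
  if (showEngagement) {
    lines.push(`${post.likeCount} likes · ${post.authorFollowerCount} followers`);
  }
  return lines.join("\n");
}

const post: Post = {
  author: "dog_blog",
  body: "You should always give your steaks to dogs.",
  likeCount: 48213,
  authorFollowerCount: 902000,
};

// With engagement hidden, the claim has to stand on its own.
console.log(renderPost(post, false));
```

The point isn’t the code, of course. It’s that these numbers are a rendering choice, and a platform could stop showing them tomorrow.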

Above all, I think it’s important to consider root causes and worldviews not grounded in tech when it comes to “solving” this problem. More technology and more fact-checking won’t solve it, at least not without causing brand new problems. Think about the backlash to warning labels on social media posts, especially throughout the pandemic and the 2020 US election, and the ensuing “censorship-free” platforms promising their users an improved monoculture of people and ideas.

We also have to consider the spreading distrust in our institutions, paired with the current media and political landscape (especially in the US) that promises easy answers to a complex world. The tech industry is yet another demagogue playing old tunes about “human progress” and what is or isn’t “the future.” We should openly question it, and build again from the bottom up, where real progress always starts.

What do you think? Discuss...

#internet #web #misinformation #disinformation #socialMedia