Opinion: Autocorrect works in texting, not fact-checking

By Larry Persily

Close to 20 years ago, when I was working for the Anchorage Daily News, the paper was moving more aggressively into the online world, allowing readers to post comments at the end of news stories, opinion columns and letters to the editor for everyone to see.

I thought that was a bad idea, opening up a free platform for people to spread and promote mistruths, half-truths and full-out falsehoods to tens of thousands of readers every day. Not to mention personal attacks on innocent people, accusations and hostile language.

The problems would overwhelm the benefits of a community forum of ideas.

The editor said not to worry, other readers would respond and post their own comments, tagging any errors and redirecting humanity to a better path. Sort of like an autocorrect feature.

Only it didn’t work out that way. The online comments worked like flypaper and attracted a lot of errors, personal attacks and accusations. The newspaper eventually decided the self-correcting plan was self-defeating and shut down the online comments.

Though some newspapers still allow their websites to provide an unedited, unrestrained bullhorn for reader comments on most anything, far more papers have either dropped the free “service” or now spend extra money for staff to moderate online discussions.

Of course, the more profitable answer is to not spend company money on anything other than depositing subscriber payments. Money talks, and money talks louder when there are no volume controls.

That’s the route Meta took last week. Meta, which owns Facebook, Instagram, Threads and WhatsApp, is the 21st century equivalent of phone lines, telegrams, the mail, street corner handbills, grocery store bulletin boards, highway billboards, bullhorns, political rallies, carrier pigeons and skywriting all rolled into one dominant force in the world.

Meta Chief Executive Officer Mark Zuckerberg announced Jan. 9 that the company will dismantle its fact-checking program, shutting down its obviously imperfect and maybe unworkable system intended to limit the spread of falsehoods on its platforms. Some called the fact-checking censorship, while others called it inadequate and ineffective. No matter what your complaint, it’s going away.

Instead of trying to maintain order in the online playground, Meta has thrown up its hands and will rely on users to add context or refute claims in notes that will appear next to specific posts. It’s the self-correcting approach.

“We’ve reached a point where it’s just too many mistakes and too much censorship,” Zuckerberg said. “The recent elections also feel like a cultural tipping point toward once again prioritizing speech. So we are going to get back to our roots, focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms.”

As part of simplifying its policies, Meta said it will lift restrictions on hot-button topics on its services and focus its enforcement efforts on the worst or illegal postings. Or so it claims, though many police and parents will tell you that Meta hasn’t earned good grades so far in protecting children from exploitation.

Meta is following the lead of the Pied Piper of Everything Goes, Elon Musk, who owns the social media platform X and also seems to own President-elect Donald Trump’s ears, or at least one of them.

Rather than trying to fix the very real problem of spreading misinformation, Musk and Zuckerberg are declaring it’s “not my responsibility.” That is irresponsible.

Larry Persily is the publisher of the Wrangell Sentinel, which first published this article.
