
Big tech should help tackle misinformation on smaller platforms

#ethics

Spotify got into some hot water over Joe Rogan’s podcast spreading misinformation. That got me thinking about misinformation on platforms that aren’t in the spotlight, particularly smaller ones.

Facebook and Twitter have garnered a lot of attention fighting misinformation, particularly about the elections and the pandemic.

But, while the focus was, and to a degree still is, on how big platforms such as Facebook, Twitter, or Reddit handle misinformation, there are likely many smaller platforms that are much worse, just not as impactful in absolute terms.

There are hundreds, if not thousands, of smaller platforms. Individually, they are probably still a couple of orders of magnitude smaller than one of the big platforms. Combined, however, they are not negligible.

However, since they are all individually small, none have attracted any attention for fighting misinformation.

Does that mean that they don’t? Or that they just fly under the radar?

The case of Small Platform A

The Spotify news reminded me of a problem with Small Platform A.

Small Platform A’s reach is nowhere near Facebook’s. I couldn’t find an up-to-date number of users, only one from a few years ago. With a reasonable growth rate, that number is probably around 20-25 million now. That’s roughly 2-2.5% of Facebook’s 1 billion.

I noticed that a relatively well-known alt-right publication kept appearing on Small Platform A. Wikipedia describes it as:

including conspiracy theories and fringe rhetoric associated with the US radical right, the alt-right, and a pro-Russian bias.

The New Republic even suspected it was a Russian “trojan horse”.

Here are some screenshots of articles that prompted me to contact Small Platform A and ask them to do something.

From October 19, the day after Powell died. This is the first article I decided to contact them about. Both are very misleading: first, he sold as it tanked; he wasn’t front-running. Second, Powell had cancer and was therefore in the highest-risk group. The second screenshot is from November 11.

Quarantine camps? Get it, like concentration camps?


It’s typical alt-right propaganda. The site manipulates whatever news is current to push its agenda.

The question now is: “Should they do something?”

This is the only response I received from Small Platform A (paraphrased):

That cannot be modified now but we’ll see if we can do something about that source.

I sent about ten screenshots to Small Platform A and didn’t get any other response.

I highly doubt that it’s impossible to add a blacklist and exclude questionable domains.

What they probably meant is:

Not enough of our users have complained yet, so we dgaf 🤷

From a business perspective, that’s not an unreasonable response, to be honest. The feature might not have enough usage and might not have generated enough outrage to justify investing in a moderation system.

But, at the same time, I think they do have an ethical obligation.

But is it an “undue burden” for small platforms?

It might be.

For Small Platform A, this content is probably generated automatically. In the example above, implementing a blacklist (sketched below) would solve the problem. However, selectively filtering articles from many sites would be much more difficult.
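
As a rough illustration, here is a minimal sketch of such a blacklist in Python. The domain names, data structures, and function names are made up for the example; I have no idea how Small Platform A actually stores its feed.

from urllib.parse import urlparse

# Hypothetical blacklist; these domains are placeholders, not real sources.
BLACKLISTED_DOMAINS = {
    "questionable-source.example",
    "another-questionable-source.example",
}

def is_blacklisted(article_url: str) -> bool:
    """Return True if the article comes from a blacklisted domain or subdomain."""
    host = (urlparse(article_url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLACKLISTED_DOMAINS)

def filter_articles(article_urls: list[str]) -> list[str]:
    """Drop automatically aggregated articles whose source is blacklisted."""
    return [url for url in article_urls if not is_blacklisted(url)]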

But for platforms with user-generated content, it might be entirely impossible.

Many are in that no-man’s-land of having too many users for the staff available.

Big tech to the rescue

One of the big platforms could create an API that returns a list of potential issues. It doesn’t have to be a full-fledged review system.

So, if you were to pass it the URL of one of the articles above, it could return something like:

{
  "potential_issues": [
    "political misinformation",
    "covid misinformation",
    "anti-vax sentiment"
  ]
}

They probably also have language models, so they could allow users to submit text as well.

This API would allow smaller platforms to, at the very least, display a corresponding warning at almost no cost.
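
As a sketch of what that integration could look like on the consumer side, here is some Python. The endpoint, parameter names, and response shape are all assumptions extrapolated from the example response above; no such API actually exists today.

import requests

# Entirely hypothetical endpoint; no big-tech provider currently exposes this.
MISINFO_API = "https://api.bigtech.example/v1/misinformation-check"

def potential_issues(article_url: str) -> list[str]:
    """Ask the (hypothetical) API what issues an article URL might have."""
    response = requests.get(MISINFO_API, params={"url": article_url}, timeout=5)
    response.raise_for_status()
    return response.json().get("potential_issues", [])

def warning_label(article_url: str) -> str | None:
    """Return a warning to display next to the article, or None if it looks clean."""
    issues = potential_issues(article_url)
    if not issues:
        return None
    return "This article may contain: " + ", ".join(issues)

# e.g. warning_label("https://questionable-source.example/some-article")
# might return "This article may contain: political misinformation, covid misinformation"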

It makes sense to me

Firstly, big tech already has the tools to do this. They must have built them for internal use. When they review articles on the web, they must store the results. They already have the language models. They have already spent the bulk of the effort.

Secondly, these tools don’t seem usable as a USP or particularly monetizable. Imagine the furor if Facebook started marketing itself as the “most reliable” or the “safest”.

On the other hand, they could probably use the data that API consumers submit, just like Google does with reCAPTCHA.

And much more importantly, the platform that does this would buy itself a bucket-load of goodwill. It would make it possible for smaller platforms to get ahead of the problem. And everyone’s users would be better off. Win-win-win.

Discussion

From Billy Easley II on Twitter:

Big Tech crafting an API that helps smaller platforms identify misinfo might be helpful but exacerbates another problem: fear that companies are deciding what’s truthful and what’s not

Smaller platforms using, say, FB’s API to identify anti-vaccine sentiment or political misinfo might be good for the companies.

But would the public like that state of affairs when trust in FB and Big Tech is so low?

I don’t mean to sound so skeptical. It’s true that smaller platforms don’t have the capacity to deal with misinfo and this is a solid, interesting attempt to deal with that…

…I think I might be fundamentally hostile to centralized authorities - whether public or private - making too many determinations of what’s truthful?

Still, this could be a useful tool

billyez2 on Twitter

Those are great points, particularly re: mistrust.

Perhaps transferring the tech to an independent non-profit might work.

And from James Czerniawski:

Smaller co’s may choose to employ those tools, but I think there’s more nuance here. At a minimum, it’s rich to see ppl complaining about these cos being the arbiter of truth yet expecting them to dev tools that would make them precisely that.

I think if done correctly these tools would help increase the public’s trust in these companies.