Meta, the owner of Facebook and Instagram, will soon share more data on the targeting choices advertisers make on its platforms.
The data will be part of the Facebook Open Research and Transparency database, which academics use to study how Facebook is used around the world – particularly the use of targeted political and social issue ads.
Because targeted information can heavily influence millions of users one way or another, Facebook has been criticized for its lack of transparency on the matter.
Facebook is by far the largest social network in the world, with an astronomical 2.7 billion active users – more than enough to draw substantial samples of almost any group in almost any country. Keep in mind that Facebook is banned in China, the world’s most populous country (1.4 billion inhabitants).
Naturally, anyone who knows how to manipulate information so that it influences the right groups would hold a great deal of power. Facebook’s CEO, Mark Zuckerberg, went on record saying that he didn’t believe the platform had that sort of power. Since that statement, however, it has become clear that he was being disingenuous.
In 2016, Facebook was accused of meddling in the US presidential election by allowing misinformation to spread under the guise of being “newsworthy” and – as was discovered later – by exposing the personal information of over 87 million Facebook users to Cambridge Analytica, primarily for use in targeted political advertisements during the Trump campaign.
The use of targeted Facebook ads was pivotal to Trump’s presidential campaign, taking him from underdog to eventual winner seemingly against all odds (Hillary Clinton was widely expected to win in the run-up to the election).
Notably, the campaign was rife with misinformation, conspiracy theories, and “fake news” – a term that became so common that Trump himself began using it as a weapon against any attempt at fact-checking.
At the time, Facebook hid behind appeals to “free speech” and “newsworthiness” and did not interfere.
Only after the election did Facebook show more interest in transparency and fighting misinformation. It banned many far-right groups from the platform (such as the conspiracy-focused QAnon) and in 2018 released a public ad library where users and researchers could see how ads were targeted.
Since then, political and social issue articles shared on the platform may carry a warning if they are deemed misleading or false. Articles about mental health issues may include links to relevant institutions or hotlines.
The ad library has left something to be desired in terms of stability and transparency, which is where this new update comes in.
Jeff King, Meta’s vice president of business integrity, says the goal is that “instead of analyzing how an ad was delivered by Facebook, it's really going and looking at an advertiser strategy for what they were trying to do.”
In other words, this will allow researchers to focus on what the advertisers themselves were trying to accomplish with the targeting choices they made, and less so on how Facebook’s algorithm handled those choices.
The new data should be available in July on the Facebook Open Research and Transparency website.