Invisible and Unaccountable: How Governments Communicate
U.K. authorities have been using Facebook and Instagram to target certain communities. Now Meta is releasing the data
Last year, I visited Belgium with my husband, a trip largely revolving around sitting and reading in the sun, eating vast quantities of fries with every meal and taking a series of decreasingly flattering photos of one another in front of fountains. Like most people on holiday, our phones were rarely far from view: messaging our friends and family back home, catching up on U.K. news or looking up local bars. Inevitably, this meant that we saw a lot of adverts jostling for our attention in the corners of our online lives — mostly for local tourist attractions or questionable fast-fashion products. But if I’d had the language on my phone set to Arabic, or had searched for recommendations for Syrian restaurants (which, in Brussels, are some of the best places to eat in the city), I’d have had a rather different experience. Instead of a promotion for half-price entry to René Magritte’s house, I’d have been bombarded with a series of grim messages accusing me of breaking the law and telling me that I’d be deported or thrown in jail, or that I risked an imminent and violent death if I tried to travel to England illegally, all courtesy of the U.K. Home Office.
I know all this because of my research — I’m an academic at the University of Edinburgh studying digital technologies and the roles they play in emerging forms of harm. Recently, I gained access to a huge amount of audience and targeting data held by Meta, the company that runs Facebook and Instagram, known as its “Ad Library” — political adverts, how they were targeted and who saw them. I already knew that authorities in the U.K. were devising highly targeted ads to shape behavior, using the immense amount of personal data Meta stores, from what you read and buy online to your exact physical location. Adopting better communication techniques is no bad thing in itself, but when I began to dig further, ominous campaigns emerged that used intimidating tactics to push policy. To take just one example: The U.K. Home Office has been using Facebook and Instagram to target vulnerable refugees with threatening, fear-based messages.
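Anyone can explore the same data. Meta exposes the Ad Library through a public API, and a minimal sketch of a researcher’s query might look like the following — assuming you have registered for Ad Library access and hold a valid token. The token, search term and version number here are placeholders; the parameter and field names follow Meta’s public documentation, not anything specific to our study.

import requests

# Minimal sketch of querying Meta's Ad Library API for U.K. political ads.
# ACCESS_TOKEN is a placeholder: the API requires a token issued after
# Meta's identity-confirmation process for Ad Library researchers.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
ENDPOINT = "https://graph.facebook.com/v19.0/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",  # only political/issue ads are archived
    "ad_reached_countries": '["GB"]',      # ads delivered in the U.K.
    "search_terms": "Home Office",         # illustrative search term
    "fields": ",".join([
        "page_name",                 # who ran the ad
        "ad_creative_bodies",        # the ad text itself
        "ad_delivery_start_time",
        "delivery_by_region",        # where the ad actually landed
        "demographic_distribution",  # age/gender breakdown of who saw it
        "languages",
    ]),
    "limit": 100,
}

resp = requests.get(ENDPOINT, params=params)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("ad_delivery_start_time"))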
Social media platforms have woven sophisticated surveillance and influence technologies directly into the fabric of our day-to-day lives. This is nothing new, as we all know from the ubiquitous ads that most of us have learned to tune out. But what has previously been hidden from sight is that this same technology is now being used by governments, giving them powers they didn’t have before. The U.K. authorities are using fine-grained profiles of people inside and outside the country to reach directly into people’s lives. This can be justifiable, even innocuous — encouraging the population to vote, go for medical checkups and other behavioral nudges that are in the personal and public good — but without transparency we can have no understanding of the unintended consequences, up to and including potential harms inflicted on those being targeted.
On top of this risk, there are more fundamental questions about privacy to consider, vital to any functioning democracy. Does the government have a right to use these influence techniques on its citizens without their knowledge, in domains that often cross into matters of politics — not public health? Are there any protections in place, or was the U.K. government hoping that this would remain behind closed doors? Meta’s decision to make this information available to researchers is forcing these questions out into the open — but only for those with the skills to navigate the data. Our research is blowing the lid off a series of covertly targeted campaigns that show the government beginning to flex its muscles and use these new capacities for potentially more dangerous kinds of influence: fear-based campaigns targeted at vulnerable groups.
Together with researchers at the universities of Strathclyde, Napier, Edinburgh and Cambridge, I’ve been studying the rise of this new approach to advertising in the U.K. Our team calls this new phenomenon “influence government” — or, in its law enforcement form, “influence policing” — to describe how the state has begun using the marketing tools of the social media platforms in the service of public policy. Until now, we haven’t known how these adverts work “under the hood.” Our wider research shows that this was all being driven by a controversial approach to government policy that has become embedded in the U.K. over the past 15 years, based on a behavioral theory known as nudge.
Nudge is an idea which might have been tailor-made for the U.K. government. Drawing from behavioral psychology, nudge uses a toolkit of subtle tricks to push people toward particular behaviors. Many of these have been used extensively in the private sector to drive purchasing. A trivial example is that people are more likely to buy a product costing $9.98 than one priced at $10.00, as the extra digit in the higher price inflates its perceived cost far beyond the two-cent difference. Similar innovations have been seen in the design of buildings and public spaces: Airports now force you to walk through duty-free before you get to the departure lounge.
In the late 2000s, a pair of academics, Richard Thaler and Cass Sunstein, took these approaches and synthesized them into a framework for shaping society. Often described as “libertarian paternalism,” their work was a major influence on former Prime Minister David Cameron’s coalition government in the U.K. It revolved around the idea that, in a time of deepening crisis, the state had to step in to guide society, but that the public would never accept being directly told what to do from above. Instead of telling the public to change their behavior, policymakers would subtly shape the environment in which they made decisions, making small changes to pricing, the architecture of buildings and the design of public services, in order to make people feel like they were choosing to change their behavior themselves. A classic example draws on the idea that most people don’t deviate from the “default” option unless they have strong views. Thus, making organ donation opt-out rather than opt-in drastically increases the number of people donating their organs, while retaining the element of personal choice.
This was all combined with a change in how the government communicates. Instead of simple awareness campaigns telling the public to stop smoking or pick up their litter, more “strategic” adverts incorporate contextual analysis of the cultures and perceptions of different groups to craft narratives more likely to be noticed. Some of this involves detailed research into the cultural narratives within particular groups that underpin the behaviors the government is trying to change. One real example allowed the National Health Service (NHS) to pinpoint specific misconceptions or worries around blood donation held by minority ethnic communities and counter them directly in the ad content. Others involve a more “hypodermic” model — using the intimacy of social media platforms to deliver the ad “in the moment,” as someone does something undesirable. For example, the police might target someone with a stern warning and a picture of a police officer when they search for illegal content online.
Rather than descend into conspiracy theory about secretive government propaganda, it’s important to realize much of this boils down to the kind of innocuous “stop smoking” messages on cigarette packets, classic marketing techniques and minor improvements to public services that most of us would expect our government to be working on. In the corporate world, much of this would simply be considered good marketing practice. But as these campaigns have endured and evolved in the post-Cameron years, they have become increasingly widespread and high-tech in the U.K., and a number of issues have become dangerously apparent. Nowhere are these issues more prominent than in the realms of borders and national security.
There is a deepening fracture in U.K. politics around refugees and asylum, with policy recently focusing on stopping the “small boats” used by vulnerable refugees who, denied safe routes of passage, attempt a difficult crossing over the English Channel from France to the U.K. The Home Office has tried several approaches to reducing these crossings, drawing on nudge theory combined with a simplistic and much-disproven “deterrence” model. The theory is that if the government makes the crossings more dangerous and difficult, treats the people who make it across worse, increases enforcement of both these aspects, and finally ramps up fear-based communications, then refugees will judge that the risks outweigh the benefits and choose not to attempt it.
It might seem absurd to think that a “nudge” in behavior or a targeted advert would deter someone fleeing war and death, someone spending everything they have to reach Calais, leaving behind their lives and families. What is even more surprising is that the government would be able to reach these people at all. It is here — in the targeting itself — that the nudge aspects take on a more frightening role.
Our research has now revealed that these adverts use deeply invasive forms of digital targeting to deliver their messages. They are a high-tech version of former Home Secretary (and later Prime Minister) Theresa May’s infamous “Go Home” vans, when, to create a “hostile environment” for illegal immigrants, the Home Office hired vehicles with the words “Go Home” on them and drove them around areas with high immigrant populations. Instead of physically driving racially charged adverts around the streets, fine-grained digital profiling tools are now being used to target vulnerable groups.
Digital advertising is nothing new; it is a core part of the business model underpinning the social media platforms that govern so much of our lives. Services like Facebook and Google are free precisely because they collect our data — what we watch, what we buy, who we are and our minute-to-minute location in the world — and turn it into complex profiles that advertisers can use to send us messages. The “built environment” of the social media platforms on which we increasingly live our lives is shot through with mechanisms for studying and influencing our behavior. Although this always felt “creepy” to some people, most accept that, for example, Coca-Cola might want to make different adverts to appeal to different sections of the public, a restaurant might want to advertise promotional offers to people physically nearby, and a sneakers brand might want to target people who had recently searched for “buy sneakers” on Google. Things began to seem more ominous with the Cambridge Analytica scandal, when it transpired that political campaigners were using these tools to try to drive voting behavior and sway elections.
But how do you use marketing tools designed to sell shoes and drinks to stop vulnerable refugees from crossing the Channel in small boats?
As you might expect, Facebook’s ad platform is designed for commercial use and has no category for “refugees in Calais.” Instead, the Home Office and its contractors (who do research in refugee camps and conflict zones) have built up profiles of the different refugee groups they want to target by combining multiple layers of minute behaviors and interests with location data detected by the platform. Sometimes this is simple: One advert, for example, targets all Arabic speakers in Brussels. Another targets Vietnamese speakers in Calais who have recently spent time in other European countries. But these categories — created for commercial purposes — can be combined into what we call “patchwork profiles” to target extremely specific groups.
A snippet from the data gives an insight into one of the “patchwork profiles” assembled by the contractors in an attempt to find and deter refugees:
Age: 18-65+
Gender: All
Language: Arabic
Interests: Afghan Premier League, Afghan Star, Afghan Wireless, Afghanistan, Afghanistan national cricket team, Afghanistan national football team, Aleppo, Baghdad, Cinema of Iran, Damascus, Eritrea, Football in Iraq, Homs, Iran, Iran national football team, Iraq, Iraq Football Association, Iraq national football team, Iraqi Kurdistan, Iraqi Premier League, Iraqi cuisine, Kabul, Kurdistan, Lebanon, MTN Syria, Music of Afghanistan, Music of Iran, South Sudan, South Sudan national football team, Sudan, Syria, Syria (region), Syria TV, Syria national football team, Syrian cuisine, Syrianska FC, The Voice of Vietnam, Vietnam national football team, Vietnamese language, mtn afghanistan
Location: TRAVELLING THROUGH: Blankenberge, Nazareth, Comines, Nord-Pas-de-Calais, Dunkirk, Grande-Synthe, Gravelines, Monchy-Breton, Saint-Martin-Boulogne, Picardie, Bourseville, Fontaine-sur-Somme, Saint-Quentin-en-Tourmont
My first reaction to seeing this profile was shock. The behaviors and interests feel deeply personal. They are partly based on people’s Facebook and Instagram activity — liking particular pages or declaring demographic information, life events or location. But much of this is collected automatically through Meta’s extensive infrastructure of cookies and trackers — which detect the things you show an interest in as you travel around the internet. Visit a fan site for the Afghan Premier League, or look up a recipe for Syrian mutabbal, and those interests are added to your targeting profile, however incongruous the combination.
This is combined with the list of target locations, which casts a tight digital net around real physical spaces. This net starts in Brussels, then draws routes to the sea through a series of tiny towns on the way to the coast: Nazareth, Comines, Monchy-Breton, Bourseville, Fontaine-sur-Somme. When it reaches the sea, it spreads out in a thin band along the coastline: Calais, Dunkirk, Blankenberge, Grande-Synthe, Gravelines, Saint-Martin-Boulogne, Saint-Quentin-en-Tourmont. If you’re in these places, making your way from Brussels to the coast, your phone’s location sensor will allow the platform to target you directly. The ad platform’s surveillance data is so finely detailed that it even lets you distinguish people who live in these areas from people who are visiting, traveling through, or have just left.
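To make the mechanics concrete, here is a rough reconstruction of how a profile like this is expressed on the advertiser’s side. It is a sketch modeled on the JSON “targeting spec” format of Meta’s Marketing API, not the Home Office’s actual configuration: the interest IDs, city keys and locale code are invented or illustrative placeholders. “travel_in” is the documented setting for reaching people traveling through a place rather than living there.

# A rough reconstruction of a "patchwork profile" in the style of a Meta
# Marketing API targeting spec. Illustrative only: the interest IDs and
# city keys below are invented placeholders, not real platform values.
targeting_spec = {
    "age_min": 18,
    "age_max": 65,  # "65+" is the platform's top age bracket
    # "genders" omitted: the campaign targeted all genders
    "locales": [28],  # Arabic; Meta's numeric locale codes are internal, shown for illustration
    "interests": [
        {"id": "6003100000001", "name": "Syria"},           # placeholder ID
        {"id": "6003100000002", "name": "Aleppo"},          # placeholder ID
        {"id": "6003100000003", "name": "Syrian cuisine"},  # placeholder ID
    ],
    "geo_locations": {
        "cities": [
            {"key": "dunkirk_fr", "name": "Dunkirk"},        # placeholder key
            {"key": "gravelines_fr", "name": "Gravelines"},  # placeholder key
        ],
        # "travel_in" reaches people detected as traveling through a place,
        # as distinct from "home" (residents) and "recent" (recently there).
        "location_types": ["travel_in"],
    },
}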
Another set of categories, clearly created to market products to people on holiday or business trips, has been repurposed to target these vulnerable refugees:
Language: Pashto
Life events: Away from Family, Away from Hometown
Location: Brussels
Language: Arabic
Location: Just left Brussels
Although these systems claim not to put our data in the hands of governments, the levers they have created for shaping our behavior are giving authorities power and specific reach that were previously unattainable. When I dug down further and found the content of these campaigns, it was even more alarming. The campaigns are clearly designed to cause fear in the people who see them. The graphics are stark, including military-style drones, roiling waves and terrified people on tiny boats. The text of the adverts relies on frightening and accusatory content to project a kind of “digital border,” declaiming, for example, “If you help drive the boat, we will arrest you as a people smuggler” and “Small boats will be destroyed by big ones in the channel” — in Pashto, Arabic and Vietnamese.
They are clearly based on nudge philosophy, pulling apart every conceivable aspect of the “decision environment” faced by refugees — potential prosecution, the competencies of smugglers, the physical danger of the crossing, the chance of being scammed — and trying to change the perception, and therefore decision, of the viewer.
These are all real concerns facing those who make a Channel crossing, and some might argue that the Home Office is trying to help them. However, these people are facing danger not primarily because the sea is dangerous or because smugglers are untrustworthy (though both are undoubtedly true). They are in danger because the U.K. government has deliberately shut down all of the possible safe legal routes for them to enter the country — leaving only the most dangerous options. The dangers involved are unlikely to be news to them by the time they reach the French coastline — but the adverts themselves will contribute to the stigma, alienation and fear that they face.
Marketers often argue that “at worst, we’ll have no effect,” but communications — especially when underpinned by the lethal force of the state — can themselves cause harm. This phenomenon, known as “blowback,” reflects the enormous complexity of how communications campaigns are actually received by the public. If your targeting is off, the message reaches many unintended recipients, who may interpret it in an entirely different way from the intended audience. Arabic-speaking people who live in, work near or simply visit France and Belgium will receive ads invisible to their friends and neighbors, potentially creating — or increasing — feelings of stigmatization, anxiety or paranoia. And the second-order effects of communications — even when they hit the “right” people — can interfere with messages in all kinds of ways. For example, recipients’ feelings of being targeted might diminish trust in authorities that might be able to help them, or play into the hands of people looking to exploit them. Extremist groups have long sought to drive a wedge between their targets and society; government ads popping up in social media feeds to suggest that people from your background are not welcome are grist to their mill.
The Meta Ad Library shows us who actually saw the adverts. The Home Office’s target audiences varied hugely, from tens of thousands of people to only a few hundred. The bulk of the ads landed where you might expect — mostly in Brussels and the Flemish Region of Belgium, with many served directly in Calais. But the “tail” of the data shows a much wider reach, hitting people as far away as Punjab, Mexico and Jordan. This happened when the Home Office ran ads targeting Arabic speakers who had just been in Brussels; adding “away from family” to the profile slimmed the target audience from 500,000 down to 5,000. The fact remains that thousands of Arabic speakers around the world, including many visiting Brussels on holiday or for business, have been targeted by this campaign.

As I was looking at this, I realized that I was witnessing a sociological experiment: I could see at a glance where Arabic-speaking people who had visited Brussels were traveling for the duration of the ad campaign. This shows how invasive these new forms of advertising can be (and why we should be cautious about the government using them). A researcher looking into the data held by Meta years after it was gathered can track the travel patterns of certain demographic groups without anyone knowing. More broadly, this advertising indicates the power that these infrastructures wield not only to influence us, but to give governments a view into our lives that they never had before — seeing how information spreads online and how we react to it in real time.
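The “tail” described above can be pulled straight out of the data. As a simple illustration — not our actual analysis pipeline — the sketch below flattens the delivery_by_region field returned by the Ad Library API into a table and totals delivery share by region; the figures are made up.

import pandas as pd

# Sketch: flatten the "delivery_by_region" field from Ad Library API results
# to see where a campaign's ads actually landed. The input mirrors the API's
# JSON structure; the numbers below are invented for illustration.
ads = [
    {"id": "ad_1", "delivery_by_region": [
        {"region": "Brussels", "percentage": "0.62"},
        {"region": "Flemish Region", "percentage": "0.30"},
        {"region": "Punjab", "percentage": "0.08"},
    ]},
]

rows = [
    {"ad_id": ad["id"], "region": d["region"], "share": float(d["percentage"])}
    for ad in ads
    for d in ad.get("delivery_by_region", [])
]
df = pd.DataFrame(rows)

# Total delivery share by region across the campaign, largest first:
print(df.groupby("region")["share"].sum().sort_values(ascending=False))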
Looking at these and other campaigns as a sociologist produces a truly strange effect. You can see how the British state sees different groups and how the Meta platform offers up hundreds of tiny aspects of their lives to build bespoke profiles and target them directly. Nor is it only people wanting to build a new life in the U.K. who are being targeted. Large numbers of these kinds of campaigns are aimed within the borders of the U.K. itself, at British citizens. Most of these campaigns are far more innocuous — and less likely to use fear — but the intimate digital targeting still raises real concerns. As government communications become more dependent on nudge theory, they are also becoming far more targeted, based on our online behavior. Although it may be absolutely appropriate for the state to make sure that some parts of the population are getting particular messages, doing so on the basis of extensive surveillance of their online and offline lives is a different matter. There needs to be far more transparency in how government nudge ads are being targeted so that the public can understand what they are seeing and how their data is being used. This transparency itself can be a powerful defense against unintended consequences and blowback — otherwise, the field is left open to conspiracy theories and speculation.
These targeted campaigns showcase the sharp end of the capabilities that the new ad infrastructures have given governments. We miss a trick when we think about the human right to privacy as simply a matter of who has our data — privacy is just as much about who can exert influence on us and how. Although they were created for commercial marketing, the ad systems of social media platforms have handed governments a profound window into our lives: an endlessly configurable menu of characteristics that they can use to target us, and access to the most intimate spaces in which to drop their nudges.
A closer look shows how particular subcultures and minorities have been targeted. Often, these ads were intended to promote supportive resources to, for example, Muslim communities: access to vaccination, public outreach or job support schemes. Some ads are aimed at those showing an interest in Islamic theology, or in particular Muslim celebrities. “Excluded” interests are added as well, so if you like far-right figures and media, for example, you will be deselected from certain campaigns.
Until recently, the Meta platform allowed even more precise ways of doing this: It automatically detected “higher, medium or lower” levels of engagement with online content during Ramadan and offered this to advertisers as part of its targeting system. It is undoubtedly reasonable that the Scottish government should try to increase use of its COVID-19 app among particular groups with lower levels of vaccination, or that London’s Tower Hamlets Council might invite local Muslim residents to a public discussion of hate crime. But both of these campaigns used detected levels of online activity during Ramadan as part of this targeting, which involves a whole raft of assumptions about religion and behavior.
The details suggest that targeting used by the government and its contractors is often simultaneously simplistic and invasive. For the Tower Hamlets campaign, the interests that would trigger being targeted (in addition to levels of content engagement during Ramadan) were:
INTERESTS: Adhan, Al-Aqsa Mosque, Arab television drama, Ayah, BBC Arabic, BBC Arabic Television, Dawah Addict, Dua, E-Quran, Eid al-Fitr, Five Pillars of Islam, Hajjah, Hijab, Hijab Europe, Hijab Fashion, Hijab Mode, Hijab Style, Hijab fashion inspiration, Islam Channel, Islamic banking, Islamic dietary laws, Modest Fashion, Muslim Aid, Muslim Hands, Quran Verses, Quran Weekly, Quran reading, Ramadan recipes, Sadaqah, Salat times, Sura, TV Alhijrah, Umrah & Hajj, World Hijab Day, Zakat
But there were further categories that would exclude you — including drinking, gambling and liking far-right figures and media:
EXCLUDED interests: Ann Coulter, Ben Shapiro, Drinking, Fidesz Figyel, Fox Nation, Gambling, Jyllands-Posten, Katie Hopkins, Lars Larson, Laura Ingraham, National Review, Online gambling, Rush Limbaugh, Rush Limbaugh and the EIB Network, The Rush Limbaugh Show, The Sean Hannity Show
The geographical targeting was extremely precise, as might be expected given that the campaign was targeted at Muslims living only in the London borough of Tower Hamlets. If you studied the Quran or engaged with Ramadan content online, the chances of seeing the ad — a fairly innocuous invitation to a public meeting with the police and the local council — went up.
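In targeting-spec terms, this “deselection” is an exclusion list layered on top of the inclusions. The sketch below shows the shape of such a configuration, again modeled on Meta’s Marketing API format with invented placeholder IDs and keys — not the council’s real settings.

# Hedged reconstruction of inclusion + exclusion targeting in the style of
# a Meta Marketing API targeting spec. All IDs and keys are placeholders.
targeting_spec = {
    "geo_locations": {
        # Illustrative: the real campaign was limited to the borough of
        # Tower Hamlets; the key below is a made-up placeholder.
        "cities": [{"key": "tower_hamlets_gb", "name": "Tower Hamlets"}],
    },
    "interests": [
        {"id": "6003200000001", "name": "Eid al-Fitr"},  # placeholder ID
        {"id": "6003200000002", "name": "Salat times"},  # placeholder ID
    ],
    # "exclusions" removes anyone matching these categories from the
    # audience, even if they match every inclusion criterion above.
    "exclusions": {
        "interests": [
            {"id": "6003200000003", "name": "Gambling"},       # placeholder ID
            {"id": "6003200000004", "name": "Katie Hopkins"},  # placeholder ID
        ],
    },
}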
It’s not just Muslims — all sorts of diaspora communities are being similarly stereotyped through the same invasive online observations. Individuals are targeted once they’ve registered interest in a certain culture, perhaps through music or food. In a U.K. government ad campaign to promote support for small businesses, minority ethnic communities in London were defined through the following interests:
Interests: Afro-textured hair, Bangladesh Cricket, Boonaa Mohammed, Eid al-Fitr, Evangelicalism, Glossary of Islam, God in Islam, Hinduism, India national cricket team, Jumia, Muslimah Sholehah, Pakistan national cricket team, Pentecostalism, Popcaan, Ramadan (calendar month), SB.TV, Safaricom, Sizzla, West Indies cricket team, Wizkid (musician), Yasmin Mogahed
The targets of the Home Office adverts — vulnerable refugees — will not see them on a TV screen, in a newspaper or on a billboard. They will see them on Facebook and Instagram — perhaps while they are messaging their families. This will unquestionably cause them further anxiety and stress, yet it is vanishingly unlikely to affect their decision to cross.
While there are some areas in which communications campaigns might be a useful part of government, on their own they can do very little when massive structural forces and obstacles work against them, whether these be the lack of safe and legal routes to the U.K., pervasive inequality and austerity, the legacies of colonialism and the continuing reality of racism, or deep issues of identity and culture. There are also important questions of democracy here — is it the role of government to shape our behavior from above? Is it appropriate for government and law enforcement to target communities based on highly sensitive characteristics “read off” from invasive surveillance data without them knowing?
The choices available to agencies running this kind of campaign leave few harmless options. Either they target their vulnerable audience in a deeply invasive way, or they go much wider and risk blowback. Yet there is next to no awareness that this is going on, let alone public discussion. When the Home Office placed knife crime ads on boxes of fried chicken in 2019, there was a huge public outcry, because the way the message was targeted — stereotyping those who ate fried chicken as knife-carriers — spoke as loudly as the message itself. Yet these digital forms of targeting are far less open to public scrutiny. You shouldn’t need to be a data scientist to find out how the government is spending your money or the mechanisms through which it is trying to change your behavior. At the very least, there needs to be transparency about who is being targeted by campaigns that are paid for by public money, and why. Only then can our societies have a public discussion about what is and isn’t appropriate.