Blog Post

The dark side of artificial intelligence: manipulation of human behaviour

Transparency over systems and algorithms, rules and public awareness are needed to address the potential danger of manipulation by artificial intelligence.

By: Georgios Petropoulos Date: February 2, 2022 Topic: Digital economy and innovation

A German translation of this piece has also appeared in Makronom.

It is no exaggeration to say that popular platforms with loyal users, like Google and Facebook, know those users better than their families and friends do. Many firms collect an enormous amount of data as an input for their artificial intelligence algorithms. Facebook Likes, for example, can be used to predict with a high degree of accuracy various characteristics of Facebook users: “sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender”, according to one study. If proprietary AI algorithms can determine these from the use of something as simple as the ‘like’ button, imagine what information is extracted from search keywords, online clicks, posts and reviews.
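
To see how little machinery such inference requires, the sketch below trains a basic classifier to recover a binary attribute from a matrix of Likes. The data and the attribute are randomly generated for illustration; the cited study worked with millions of real Facebook users and its own modelling choices, so this is only a minimal sketch of the mechanism.

```python
# Illustrative sketch only: inferring a sensitive binary trait from "Like" data.
# All data here is randomly generated; the cited study used real Facebook Likes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_users, n_pages = 2000, 300                      # users x pages they may have Liked
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Hypothetical ground truth: the trait correlates with a small subset of pages.
signal_pages = rng.choice(n_pages, size=10, replace=False)
score = likes[:, signal_pages].sum(axis=1) - 5 + rng.normal(0, 1, n_users)
trait = (score > 0).astype(int)                   # e.g. a binary personal attribute

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Predictive performance (AUC) from Likes alone: {auc:.2f}")
```

Even in this toy setting, the classifier recovers the attribute from the Likes alone with performance well above chance.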

It is an issue that extends far beyond the digital giants. Giving comprehensive AI algorithms a central role in the digital lives of individuals carries risks. For example, the use of AI in the workplace may bring benefits for firm productivity, but can also be associated with lower quality jobs for workers. Algorithmic decision-making may incorporate biases that can lead to discrimination (eg in hiring decisions, in access to bank loans, in health care, in housing and other areas).

One potential threat from AI, the manipulation of human behaviour, is so far under-studied. Manipulative marketing strategies have existed for a long time. However, these strategies, combined with the collection of enormous amounts of data for AI algorithmic systems, have greatly expanded what firms can do to drive users towards choices and behaviour that ensure higher profitability. Digital firms can shape the framework and control the timing of their offers, and can target users at the individual level with manipulative strategies that are much more effective and difficult to detect.

Manipulation can take many forms: the exploitation of human biases detected by AI algorithms, personalised addictive strategies for consumption of (online) goods, or taking advantage of the emotionally vulnerable state of individuals to promote products and services that match well with their temporary emotions. Manipulation often comes together with clever design tactics, marketing strategies, predatory advertising and pervasive behavioural price discrimination, in order to guide users to inferior choices that can easily be monetised by the firms that employ AI algorithms. An underlying common feature of these strategies is that they reduce the (economic) value the user can derive from online services in order to increase firms’ profitability.

Success from opacity

Lack of transparency helps these manipulation strategies succeed. In many cases, users of AI systems do not know the exact objectives of AI algorithms or how their sensitive personal information is used in pursuit of these objectives. The US chain store Target has used AI and data analytics techniques to forecast whether women are pregnant in order to send them hidden ads for baby products. Uber users have complained that they pay more for rides when their smartphone battery is low, even though, officially, battery level is not among the parameters that feed into Uber’s pricing model. Big tech firms have often been accused of manipulating the ranking of search results to their own benefit, with the European Commission’s Google Shopping decision being one of the most prominent examples. Meanwhile, Facebook received a record fine from the US Federal Trade Commission for violating the privacy rights of its users (resulting in a lower quality of service).

A simple theoretical framework developed in a 2021 study (an extended model is a work in progress; see the reference in the study) can be used to assess behavioural manipulation enabled by AI. The study focuses on users’ “prime vulnerability moments”, which are detected by a platform’s AI algorithm. During these moments, users are sent ads for products that they purchase impulsively, even if the products are of bad quality and do not increase user utility. The study found that this strategy reduces the benefit users derive, allowing the AI platform to extract more surplus, and also distorts consumption, creating additional inefficiencies.
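
The mechanism can be illustrated with a stylised simulation; the numbers and functional forms below are invented for the example and are not the 2021 study’s model. The better the platform’s AI detects vulnerability moments and targets impulse products at them, the more surplus shifts from the user to the platform.

```python
# Stylised sketch of the "prime vulnerability moment" mechanism.
# All numbers and functional forms are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_periods = 100_000

vulnerable = rng.random(n_periods) < 0.2   # user is in an impulsive state 20% of the time
price, cost, value_to_user = 10.0, 1.0, 2.0   # price paid, firm's cost, product's true value to the user

def outcome(detection_accuracy):
    """Average per-period user surplus and platform profit when the platform
    targets impulse ads only at detected vulnerability moments."""
    detected = vulnerable & (rng.random(n_periods) < detection_accuracy)
    purchases = detected                   # an impulse purchase whenever targeted
    user_surplus = np.where(purchases, value_to_user - price, 0.0).mean()
    platform_profit = np.where(purchases, price - cost, 0.0).mean()
    return user_surplus, platform_profit

for acc in (0.0, 0.5, 0.9):
    s, p = outcome(acc)
    print(f"detection accuracy {acc:.0%}: user surplus {s:+.2f}, platform profit {p:+.2f}")
```

Under these illustrative numbers, higher detection accuracy pushes the user’s average surplus below zero while the platform’s profit rises.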

The possibility of manipulating human behaviour using AI has also been observed in experiments. A 2020 study detailed three relevant experiments. The first consisted of multiple trials, in each of which participants chose between boxes on the left and the right of their screens in order to win fake currency. At the end of each trial, participants were informed whether their choice triggered the reward. The AI system was trained on relevant data to learn participants’ choice patterns and was in charge of assigning the reward to one of the two options in each trial and for each participant, under one constraint: the reward had to be assigned an equal number of times to the left and the right option. The objective of the AI system was to induce participants to select a specific target option (say, the left option). It had a 70% success rate in guiding participants to the target choice.

In the second experiment, participants were asked to watch a screen and press a button when they were shown a particular symbol, and not press it when shown another. The AI system was tasked with arranging the sequence of symbols so that more participants made mistakes. It achieved an increase in mistakes of almost 25%.

The third experiment ran over several rounds in which a participant would pretend to be an investor giving money to a trustee, a role played by the AI system. The trustee would then return an amount of money to the participant, who would then decide how much to invest in the next round. This game was played in two different modes: in one, the AI aimed to maximise how much money it ended up with; in the other, it aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in both versions.

The important finding from these experiments was that in each of the three cases, the AI system learned from participants’ responses and was able to identify vulnerabilities in people’s decision-making. In the end, the AI system learned to guide participants towards particular actions in a convincing way.
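
The common ingredient in all three experiments is a system that learns to anticipate what a participant will do next from their recent choices and outcomes. The minimal sketch below reproduces only that ingredient, using a simulated participant who follows a noisy win-stay/lose-shift rule; the participant model and all parameters are assumptions made purely for illustration, not the 2020 study’s setup.

```python
# Minimal sketch: learning a (simulated) participant's choice patterns.
# The participant follows a noisy win-stay/lose-shift rule -- an assumption
# made for illustration only.
import random
from collections import Counter, defaultdict

random.seed(0)

P_REPEAT_AFTER_WIN, P_REPEAT_AFTER_LOSS = 0.85, 0.35

def next_choice(prev_choice, prev_win):
    """Simulated participant: tends to repeat rewarded choices, switch after losses."""
    p_repeat = P_REPEAT_AFTER_WIN if prev_win else P_REPEAT_AFTER_LOSS
    if random.random() < p_repeat:
        return prev_choice
    return "R" if prev_choice == "L" else "L"

history = defaultdict(Counter)   # (last choice, last outcome) -> counts of next choice
correct = total = 0
choice, win = "L", True

for trial in range(5000):
    context = (choice, win)
    # Predict the choice observed most often so far in this context.
    prediction = history[context].most_common(1)[0][0] if history[context] else "L"
    new_choice = next_choice(choice, win)
    correct += prediction == new_choice
    total += 1
    history[context][new_choice] += 1
    # Here the reward is assigned at random; in the experiment the AI also
    # controlled its placement, subject to an equal left/right constraint.
    choice, win = new_choice, random.random() < 0.5

print(f"Prediction accuracy after learning choice patterns: {correct / total:.0%}")
```

In this toy setting the predictor reaches roughly 75% accuracy against a 50% chance baseline; it is this kind of learned, exploitable regularity that allowed the experimental systems to steer participants.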

Important steps to address potential manipulation by AI

When AI systems are designed by private companies, their primary goal is to generate profit. Since they are capable of learning how humans behave, they can also become capable of steering users towards specific actions that are profitable for companies, even if those actions are not users’ first-best choices.

The possibility of this behavioural manipulation calls for policies that ensure human autonomy and self-determination in any interaction between humans and AI systems. AI should not subordinate, deceive or manipulate humans, but should instead complement and augment their skills (see the European Commission’s Ethics Guidelines for Trustworthy AI).

The first important step to achieve this goal is to improve transparency over AI’s scope and capabilities. There should be a clear understanding of how AI systems perform their tasks. Users should be informed upfront about how their information (especially sensitive personal information) is going to be used by AI algorithms.

The right to explanation in the European Union’s General Data Protection Regulation is aimed at providing more transparency over AI systems, but it has not achieved this objective: the right was heavily disputed and its practical application has so far been very limited.

It is often said that AI systems are a black box and that no one knows exactly how they operate, making transparency hard to achieve. With respect to manipulation, this is not entirely true. The provider of these systems can introduce specific constraints to prevent manipulative behaviour; it is more a question of how these systems are designed and what objective function (including its constraints) governs their operation. Algorithmic manipulation should in principle be explainable by the team of designers who wrote the algorithmic code and observe the algorithm’s performance. In addition, how the input data used in these AI systems is collected should be transparent. Suspicious performance by an AI system may not always result from the algorithm’s objective function; it may also relate to the quality of the input data used for algorithmic training and learning.
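
As a schematic illustration of this design choice, the sketch below shows the same profit-maximising selection rule with and without a constraint that forbids serving flagged impulse products when the user’s estimated vulnerability is high. All names, scores and thresholds are invented for the example; the point is only that such a constraint is part of the objective the designers write down, and is therefore something they can document and explain.

```python
# Schematic illustration: writing an anti-manipulation constraint directly
# into an AI system's selection objective. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    expected_profit: float      # value to the firm
    expected_user_value: float  # value to the user
    impulse_product: bool       # flagged as a low-quality impulse product

VULNERABILITY_THRESHOLD = 0.7   # hypothetical cut-off set at design time

def choose_offer(offers, vulnerability_score, constrained=True):
    """Pick the profit-maximising offer; if constrained, exclude impulse
    products whenever the user is estimated to be in a vulnerable state."""
    candidates = offers
    if constrained and vulnerability_score > VULNERABILITY_THRESHOLD:
        candidates = [o for o in offers if not o.impulse_product]
    return max(candidates, key=lambda o: o.expected_profit)

offers = [
    Offer("useful_product", expected_profit=2.0, expected_user_value=5.0, impulse_product=False),
    Offer("impulse_product", expected_profit=8.0, expected_user_value=-3.0, impulse_product=True),
]

for constrained in (False, True):
    picked = choose_offer(offers, vulnerability_score=0.9, constrained=constrained)
    print(f"constrained={constrained}: serve '{picked.name}' "
          f"(profit {picked.expected_profit}, user value {picked.expected_user_value})")
```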

The second important step is to ensure that this transparency requirement is respected by all providers of AI systems. To achieve this, three criteria should be met:

  • Human oversight is needed to closely follow an AI system’s performance and output. Article 14 of the draft European Union Artificial Intelligence Act (AIA) proposes that the provider of the AI system should identify and ensure that a human oversight mechanism is in place. Of course, the provider also has a commercial interest in closely following the performance of its AI system.
  • Human oversight should include a proper accountability framework to provide the correct incentives for the provider. This also means that consumer protection authorities should improve their computational capabilities and be able to experiment with the AI algorithmic systems they investigate, in order to correctly assess any wrongdoing and enforce the accountability framework.
  • Transparency should not come in the form of very complex notices that make it harder for users to understand the purpose of AI systems. Instead, there should be two layers of information on the scope and capabilities of AI systems: a first layer that is short, accurate and simple for users to understand, and a second, more detailed layer that is available at any time to consumer protection authorities.

Enforcing transparency will give us a clearer idea of the objectives of AI systems and the means they use to achieve them. It will then be easier to proceed to the third important step: establishing a set of rules that prevents AI systems from using secret manipulative strategies to create economic harm. These rules will provide a framework for the operation of AI systems, which the provider should follow in their design and deployment. However, the rules should be well targeted, without excessive constraints that could undermine the economic efficiencies (both private and social) that these systems generate, or reduce incentives for innovation and AI adoption.

Even with such a framework in place, detecting AI manipulation strategies in practice can be very challenging. In specific contexts and cases, it is very hard to distinguish manipulative behaviour from business-as-usual practices. AI systems are designed to react and provide available options as an optimal response to user behaviour. It is not always easy to tell the difference between an AI algorithm that provides the best recommendation based on users’ behavioural characteristics and manipulative AI behaviour where the recommendation only includes inferior choices that maximise firms’ profits. In the Google Shopping case, the European Commission took around ten years and had to collect huge amounts of data to demonstrate that the internet search giant had manipulated its sponsored search results.

This practical difficulty brings us to the fourth important step: we need to increase public awareness. Educational and training programmes can be designed to help individuals, from a young age, become familiar with the dangers and risks of their online behaviour in the AI era. This will also help with the psychological harm that AI and, more generally, addictive technology strategies can cause, especially to teenagers. Furthermore, there should be more public discussion about this dark side of AI and how individuals can be protected.

For all this to happen, a proper regulatory framework is needed. The European Commission took a human-centric regulatory approach, with emphasis on fundamental rights, in its April 2021 AIA regulatory proposal. However, the AIA is not sufficient to address the risk of manipulation, because it only prohibits manipulation that raises the possibility of physical or psychological harm (see Article 5a and Article 5b). In most cases, however, AI manipulation is related to economic harm, namely the reduction of the economic value users derive. These economic effects are not covered by the AIA prohibitions.

Meanwhile, the EU Digital Services Act (see also the text adopted recently by the European Parliament) provides a code of conduct for digital platforms. While this is helpful with respect to the risk of manipulation (especially in the case of minors, where specific, more restrictive rules are included, see Recital 52), its focus is somewhat different: it puts more emphasis on illegal content and disinformation. More thought should be given to AI manipulation, and a set of rules should be adopted that applies to the numerous non-platform digital firms as well.

AI can generate enormous social benefits, especially in the years to come. Creating a proper regulatory framework for its development and deployment that minimises its potential risks and adequately protects individuals is necessary to grasp the full benefits of the AI revolution.

 

Recommended citation:

Petropoulos, G. (2022) ‘The dark side of artificial intelligence: manipulation of human behaviour’, Bruegel Blog, 2 February


