Digital marketers have relied on data to deliver targeted advertising for many years, but with the rise of artificial intelligence (AI) systems and increased scrutiny of how businesses use data, machine-learning and algorithmic bias is becoming a significant issue in the digital marketing community.

Earlier this year Twitter faced criticism for “shadow banning” – a practice whereby users’ tweets no longer appear in search results and are effectively hidden from other users. The criticism escalated and led to Twitter’s chief executive, Jack Dorsey, being questioned by US Senators over allegations that Twitter had deliberately censored conservative voices. Mr Dorsey denied that people had been shadow banned based on their political viewpoints, but he did concede that the Twitter algorithm had “unfairly” reduced the visibility of 600,000 accounts, including those of some members of the US Congress.

More recently, it was reported that Amazon had abandoned its online recruitment tool because its algorithms were discriminating against women. In the marketing world, Facebook was required to update its ad tools to prevent marketers from including or excluding people of a certain age, gender or race when targeting ads for housing, employment and credit opportunities.

The question for marketers is how to ensure their adverts and other marketing initiatives are effectively targeted without being unlawfully or unethically discriminatory. In the UK, nine characteristics are protected by the Equality Act 2010, including age, sex, race and religion. Discriminating, whether directly or indirectly, on the basis of any of these characteristics is unlawful.

The General Data Protection Regulation (GDPR) requires companies to inform people how their data is being used. Yet a quick glance at the privacy notices of some of the biggest search engines shows that it is practically impossible for data subjects to appreciate fully the ways in which their data could be used, or the impact that use could have on them and on others.

So far no action has been taken in the UK against a business for discrimination arising from a targeted advertising campaign, but that may well change as people become more aware of how targeted advertising operates and of the characteristics used by targeting algorithms.

One of the most common sources of AI bias is the way in which the machines are trained. If the underlying data they are trained on is biased and the machines are not trained to overcome or filter out that bias, the algorithms reinforce it and compound discrimination. When deploying AI systems, businesses should ensure that the systems are adequately trained, rather than assuming that because machines can’t “think for themselves”, they can’t be just as biased as humans.
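
To make this concrete, the toy sketch below trains a standard classifier on historical hiring decisions that were skewed against one group; the dataset and numbers are invented purely for illustration. The model simply learns to reproduce the skew in its training data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" data: a skill score plus a protected group label.
group = rng.integers(0, 2, n)   # two demographic groups, 0 and 1
skill = rng.normal(0, 1, n)     # skill is distributed identically in both

# Past human decisions were biased: group 1 needed a higher score to be hired.
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

# Train on the biased labels, with the protected attribute as a feature.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# At the same skill level, the model now assigns group 1 a markedly lower
# predicted probability of being hired: the historical bias, reproduced.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```

Note that simply dropping the group column does not necessarily cure this: any remaining feature that correlates with group membership can act as a proxy for it.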

There are certainly benefits to using AI for marketing purposes, but a core challenge is that many AI deep learning models are “black boxes”, meaning it is difficult, if not impossible, to identify how individual items of training data influence the output decisions. Transparent deep learning models which expose the classification process will certainly help to remove bias from AI systems, as they will enable people to point out which parts of the classification process may be causing bias, allowing developers to address this accordingly. However, the development of transparent models is still in its infancy.
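
For a rough sense of what “exposing the classification process” means, consider the sketch below, which inspects a simple, inherently interpretable model trained on the same invented data as the previous example. Unlike a deep network, its learned coefficients can be read directly, so a suspicious weight on a sensitive or proxy feature is visible at a glance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# The same synthetic biased hiring data as in the earlier sketch.
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# A transparent model lets us read off what drives each decision.
for name, coef in zip(["skill", "group"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
# A large negative weight on "group" is an immediate red flag; a black-box
# deep network offers no equally direct readout.
```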

While transparent AI models are not yet commonplace, in the last 12 months some big tech companies have announced tools to help address the problem of AI bias:

IBM: In October 2018, IBM launched its AI OpenScale tools, which provide explanations of how AI models are making decisions, and which automatically detect and mitigate bias with the intention of producing fair, trusted outcomes. The suite also aims to give businesses confidence in adopting AI by addressing the challenges involved.

Google: In September 2018, Google’s People + AI Research (PAIR) initiative announced the launch of its “What-If” tool. Built into Google’s open-source web application TensorBoard, the tool allows users to analyse machine learning models and make fairness assessments without the need for additional code.
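
Outside TensorBoard, the What-If Tool can also be embedded in Jupyter notebooks via the witwidget package, which does take a few lines of setup. The sketch below is a hedged illustration only: the feature names are invented and the placeholder prediction function stands in for a real model.

```python
# pip install witwidget   (run inside a Jupyter notebook)
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def to_example(row):
    """Pack a dict of feature values into a tf.train.Example proto."""
    ex = tf.train.Example()
    for name, value in row.items():
        if isinstance(value, str):
            ex.features.feature[name].bytes_list.value.append(value.encode())
        else:
            ex.features.feature[name].float_list.value.append(float(value))
    return ex

# Hypothetical held-out records to explore in the tool.
examples = [to_example({"age": 34.0, "income": 52000.0, "clicked": 1.0}),
            to_example({"age": 58.0, "income": 31000.0, "clicked": 0.0})]

def predict_fn(examples):
    # Placeholder: call your own model here and return, for each example,
    # a list of class probabilities such as [p_negative, p_positive].
    return [[0.5, 0.5] for _ in examples]

config = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
WitWidget(config, height=600)
```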

Microsoft: In July 2018, Microsoft released a package for Python developers that implements the black-box approach to fair classification described in the Microsoft Research paper “A Reductions Approach to Fair Classification”. However, the package is only of use to Python developers familiar with machine learning code, and Microsoft has not given any indication that it will be implemented as a high-level developer tool.
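
That package has since grown into the open-source fairlearn library. As a hedged sketch against the current fairlearn API (using the same invented hiring data as earlier), the reductions approach wraps an ordinary classifier and retrains it subject to an explicit fairness constraint:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# The same synthetic biased hiring data as in the earlier sketches.
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
y = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)
X = skill.reshape(-1, 1)

# Wrap a plain classifier and retrain it under a demographic-parity
# constraint on the sensitive feature.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=group)

# Selection rates across groups should now be much closer together.
pred = mitigator.predict(X)
for g in (0, 1):
    print(f"group {g} selection rate: {pred[group == g].mean():.2f}")
```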

Facebook: In May 2018, Facebook announced that it is using “Fairness Flow” internally, an anti-bias software tool it has developed to warn automatically if an algorithm is making an unfair judgment about a person based on his or her race, gender or age. However, Facebook has not yet released Fairness Flow publicly.

Whilst input data plays a large part in AI bias, there is more to bias than the data itself. Training datasets need to be constructed carefully to avoid introducing bias, and AI algorithms should be written to optimise for fairness, not simply to complete a task.
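
One simple way to put that into practice is to report a fairness metric alongside accuracy whenever a model is evaluated, so that optimisation never looks at task performance alone. A minimal sketch follows; the metric choice and the tolerance threshold are illustrative assumptions rather than a standard:

```python
import numpy as np

def evaluate(y_true, y_pred, group):
    """Report accuracy together with a demographic-parity gap: the
    difference in positive-prediction rates between groups."""
    accuracy = (y_true == y_pred).mean()
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return accuracy, max(rates) - min(rates)

# Tiny worked example with two groups of four people each.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

acc, gap = evaluate(y_true, y_pred, group)
print(f"accuracy={acc:.2f}, parity gap={gap:.2f}")
if gap > 0.2:  # the tolerance is a policy decision, not a technical one
    print("warning: selection rates differ materially between groups")
```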

Kathryn Rogers is a partner at Cripps, with a particular focus on technology and IP-related matters.
