Nothing can be more central to who we are as people than the choices we make. And we in the marketing and communications industry make our living affecting those choices. Now, we have access to increasingly powerful persuasive technology – new ways to find, analyse, and persuade people.
It is time for a code of ethics for the use of persuasive technology in marketing and communications.
Where does effective marketing end, and unacceptable manipulation begin?
This is a question I’ve been asking myself often lately. I make my living trying to affect the choices people make. Everyone in the marketing and communications industry does. Clients hire us to change the way people choose, act, talk, and think. Often that’s a question of choosing one brand of product over another, but the range of tasks we take on is vast, and we use some pretty advanced technology to do this work.
Concerns about the persuasive power of technology aren’t new. Berdichevsky and Neuenschwander were already writing about the ethics of “persuasive technology” back in 1999. But there are things we can do now that we couldn’t do then.
Let’s have a look at some persuasive technology I have access to, as an ordinary member of the marketing and communications industry.
- Mass surveillance: the ability to monitor the activity and interactions of large numbers of people through aggregated social media and mobile data
- Analytics/predictive technology: the ability to segment large audiences into finely defined persuadable groups, through big-data analytical techniques
- Behavioural segmentation and optimization: the ability to fine-tune messaging to persuade specific types or groups of people through the application of behavioural science techniques such as personality mapping
- Micro-targeting: the ability to make messages reach certain types of people at a critical point in time, such as close to a purchase decision, or in a particular emotional state
None of these techniques is intrinsically good or bad. And for our clients, they represent an incredibly valuable set of capabilities. Taken together, they let us run highly sophisticated campaigns aimed at valuable outcomes. When we deploy these techniques appropriately, we can connect people with opportunities that improve their lives and the lives of those around them, while growing economies. There is no question of simply abandoning them.
But how do we know when we are taking things too far?
When I tell colleagues and clients about everything that we can do, I often get the same question:
Isn’t this all very Cambridge Analytica?
It’s a valid question. Cambridge Analytica famously used advanced persuasive technology in attempts to swing votes, including the UK’s Brexit referendum and the 2016 US presidential election.
So what makes us any different? If Cambridge Analytica had been fighting for the liberal internationalist side – the Remain Campaign or Clinton’s presidential campaign – would what they did have been acceptable? Would Democrats and Remainers be cheering them as liberal heroes?
I certainly hope not. It’s no secret that I’m a liberal internationalist, but even if Cambridge Analytica had been pulling for the side I agreed with, they still did serious wrong. First, they built their segmentation and targeting campaigns on stolen data. Second, they lied in the messaging that they distributed.
And that shows us something else. Regardless of the aim of a given persuasive campaign, there is an ethical way to use persuasive technology – and an unethical way.
It’s time for a code of ethics for the use of persuasive technology in the marketing and communications industry.
In the industry and in society at large, attention is increasingly turning to the ethical use of the technology of persuasion. Trust in institutions, including the media and large tech companies, is eroding.
There are already regulations and codes of ethics for advertising, for PR, and for AI in general. But there is no specific code or set of principles for the technology of persuasion, or for the use of AI in marketing and communications.
This is why I believe it is time for a code of ethics for the use of persuasive technology in marketing and communications. National and international legislation is far behind the curve on this topic. It is time for the industry to come together and agree on a set of principles that define where effective marketing ends and unacceptable manipulation begins.
I’m not here to lay this code out alone – that is far beyond my authority or my expertise. This will take a concerted effort from thought leaders in the marketing and communications industry, but also beyond it: in academia, in ethics, in tech companies, in governments and the military.
Nevertheless, this is something that must be done, and done now. Our capabilities and our power are growing by the day; I see proof of it in my own work.
Nothing can be more central to who we are than the choices we make. Those choices have always been shaped by what we read, see and hear. But until recently what we all read, see and hear couldn’t be fine-tuned with such power and detail. It’s time to take a look at that power and decide how we should – and should not – use it.