AI governance is the new sustainability

Get ready for corporate responsibility for bits, not atoms. AI governance will soon become a critical aspect of responsible corporate citizenship, just as sustainability has. Forward-thinking organisations will have to consider carefully the potential risks and benefits of AI to maintain their licence to operate, and to navigate the evolving landscape of regulations and societal norms.

Sustainability for bits, not atoms

Think about how sustainability has transformed business and communications. From a fringe concern, the discussion about sustainability has rightfully become central to business strategy. The conversation about climate is now commonplace in boardrooms: carbon emissions are treated as corporate risks, and the road to net-zero has become a strategic necessity.

AI safety will soon go through a similar cultural transformation. As a general-purpose technology, AI is likely to affect our entire economy. Few organisations, if any, will be able to continue functioning while ignoring its effects. 

In the words of Marc Andreessen, AI is about to become “the control layer for the world.” Yuval Noah Harari describes AI as affecting “the operating system of our civilisation.” Adopting such power means accepting the attendant responsibility. Even something as mundane as an AI app that suggests recipes can be dangerous. (In one real case, by suggesting recipes that would have created chemical weapons . . . !)

That is why business leaders will soon have to focus on AI governance: it is an equivalent of sustainability for bits, not atoms. A technology this powerful is too dangerous to let develop without some thought and direction. It is time for business leaders everywhere to get serious about how we will govern this technology, both within our organisations and in society at large. 

The false division and the risks of polarisation

To understand this coming cultural transformation, we must understand the dynamics of the discussion around AI.

There are three main perspectives on AI, each with its own values and visions of risk. These are the existential risk perspective, prioritising safety; the AI ethics perspective, prioritising justice; and the accelerationist perspective, prioritising prosperity. 

Often these are presented as rival factions. In the media, AI Doomers seem to dismiss woke social justice commentators; meanwhile both are derided by the venture capitalists desperate to beat China in the race to build superintelligence. 

Dramatic as it is, this is a false trichotomy. Each perspective is valuable. 

Business leaders would do well to learn from all three in the effort to arrive at sensible AI policy.

Existential Risk: AI as a Threat to Humanity

Could AI actually destroy civilisation and end the world?

We can’t rule it out.  

Humans have destroyed many other species, not because we are malevolent, but because we are more intelligent. If we build machines smarter than we are, we had better be sure that they are well governed. The risks of accident or misuse will grow as these machines take on critical roles managing our infrastructure, financial markets, and information systems. It is even possible, in principle, that superintelligent machines, with goals we can barely understand, could destroy us by accident.

There is disagreement here about the likelihood of these risks. But with widespread damage at stake, this isn’t something we can simply ignore. Many leading thinkers in the field have come out in support of this view. We all have a responsibility to promote safe AI.

Justice and Social Impact: Mitigating Harm and Inequality

Another perspective focuses on the immediate social harms caused by AI, such as biased decision-making, perpetuating prejudice, and amplifying inequality. 

There are manifold examples of AI systems already causing this type of harm today. AI’s impact on democracy, manipulation through social media, and the creation of deepfakes raise concerns about social disruption and the need to address injustice. 

Implementing AI systems in your organisation means taking the risks of bias and distortion seriously. It means accepting a responsibility to reduce harms wherever we can find them. 

Prosperity and Rapid Innovation: Balancing Benefits and Risks

These two perspectives bring us to a delicate balance. Each highlights real risks. But we must weigh them against the prospect of realising the value of innovation. Regulation itself carries a further risk: it might slow innovation down.

From this viewpoint the stakes are also very high. Since AI advances tend to build on each other, there is a tremendous potential first-mover advantage. At least one AI lab seems to agree: a leaked pitch deck from Anthropic claims that “companies that train the best 2025/26 models will be too far ahead for anyone to catch up”.

With this in mind, slowing down progress is its own risk. Personally, I believe in liberal, democratic values (as, I imagine, do most of you reading this). But whoever trains the most powerful AI in the next few years could end up having a disproportionate effect on world culture for a very long time. Perhaps centuries. What if the future is built upon AI that is trained to censor public debate and uphold the primacy of a political party? That also seems a risk worth mitigating.

So what are the lessons for business leaders?

So, is AI a future existential risk? Or a present social justice one? Or is it a huge opportunity that we need to rush to apply?

The answer is . . . yes. It’s all three. And adopting responsible AI policy is the responsibility of every leader, just as it has become the responsibility of every leader to adopt effective sustainability policy for their organisation. 

Sustainability policy for atoms, AI policy for bits. 

The time to start getting involved in crafting AI policy is now. International momentum is gathering for regulation, and national legislation will soon appear. More importantly, all of this is part of a sweeping cultural change that is happening incredibly fast.

Artificial intelligence will undoubtedly transform our lives and work over the course of coming decades. We all have a role to play in ensuring it is a transformation that benefits everyone with safe and just AI that drives prosperity. 

This is a condensed version of the full white paper on the politics of AI governance available here.
