Does risk management hold the key to AI Adoption? (Part 1/2)

Mike G Robinson

There is a growing consensus that risk management is slowing AI adoption. A closer look suggests otherwise.

Risk frameworks don’t just make AI safe. They help to anchor it in the process layer – where it can be controlled and scaled.

Firstly, to give credit where it’s due, the premise for this article came from Tony Jones’s article The Board Problem Nobody in AI Is Mapping, in which he explored what actually moves boards from observing AI to committing to it.

My instinctive response was to think ‘risk management’.

But the relationship’s not that simple. Risk doesn’t drive adoption directly, but it does provide levers to influence the flow of innovation, and usually for good reason.

Therefore, risk can be as much a catalyst for innovation as an inhibitor.

What is Risk Management?

Risk is defined as “the probability of an outcome deviating from expectation”. Risk management is the discipline, or systematic process, of:

  • assessing and defining risk
  • modelling, testing and ranking risk
  • monitoring, auditing and reporting risk
  • critiquing, enhancing and adapting risk
  • flexing risk appetite

Risk management is closer to engineering than intuition. Risk appetite is set at the top and pervades operations down to the bottom; individual risks arise at the lowest level of operations and aggregate as they flow back to the top.

What Does Risk Look Like Practically?

Risk exists at the strategic level – but it’s created and managed at the process level.

Take a simple example: a call centre employee updating a customer’s address at their request.

This single action introduces the risks associated with:

  • Unauthorised access
  • Incorrect input
  • Failed validation
  • Incorrect system processing

To arrive at a ‘visible’ risk, these risks are assessed for two things:

  • the likelihood of their occurrence (unlikely, possible, likely, probable)
  • the impact if they were to occur (High, Medium, Low)

When combined, they define the risk profile of the process, usually illustrated using heat maps.
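The likelihood × impact assessment above can be sketched in a few lines of code. This is a minimal illustration, not a standard: the numeric scores, the colour-band thresholds and the risk ratings assigned to the call-centre example are all assumptions made for the sake of the sketch.

```python
# Illustrative likelihood x impact risk matrix.
# Scores and band thresholds are assumptions, not a standard scale.

LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3, "probable": 4}
IMPACT = {"Low": 1, "Medium": 2, "High": 3}


def risk_score(likelihood: str, impact: str) -> int:
    """Combine likelihood and impact into a single heat-map score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]


def risk_band(score: int) -> str:
    """Bucket a score into the colour bands of a typical heat map."""
    if score >= 8:
        return "red"
    if score >= 4:
        return "amber"
    return "green"


# The call-centre example: four risks attached to one address change.
# The ratings below are hypothetical, chosen purely to illustrate.
process_risks = {
    "Unauthorised access": ("possible", "High"),
    "Incorrect input": ("likely", "Medium"),
    "Failed validation": ("possible", "Medium"),
    "Incorrect system processing": ("unlikely", "High"),
}

for name, (likelihood, impact) in process_risks.items():
    score = risk_score(likelihood, impact)
    print(f"{name}: {score} ({risk_band(score)})")
```

Each cell of a heat map is just one (likelihood, impact) pair; the process’s risk profile is the full set of scored risks.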

Processes combine to form ‘mega processes’ (e.g. purchase to pay), where they attract further risks from the general environment, producing an aggregated risk picture at a more consumable level for upper management.
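The roll-up from processes to a mega process can be sketched the same way. The worst-case roll-up rule below is an illustrative assumption; real frameworks vary (some sum exposures, some weight by frequency, some take the worst case as shown here), and the purchase-to-pay scores are hypothetical.

```python
# Hedged sketch: rolling process-level risk scores up to a mega process.
# The max() roll-up rule is an assumption; frameworks differ on this.

def aggregate(process_scores: dict[str, int]) -> int:
    """Report the worst process score as the mega-process risk."""
    return max(process_scores.values())


# Hypothetical scores for the purchase-to-pay mega process.
purchase_to_pay = {
    "raise purchase order": 4,
    "receive goods": 2,
    "approve invoice": 9,
    "make payment": 6,
}

print(aggregate(purchase_to_pay))  # prints 9, the worst-case score
```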

What this means for AI

A lot of companies are failing to see that AI adoption lives at the process layer, where it aligns with risk. It also aligns with the natural flow of business processes and, by extension, with day-to-day business operations.

Adopting AI strategically is necessary, but AI operates, like any technology tool, at the process layer, where it completes operational tasks and can be better controlled, directed and scaled.

The Relationship Between Risk and Trust

While risk is structured, scientific and predictable, trust is not.

Trust is shaped.

Risk flows through an organisation in an orderly fashion and promotes visibility, effective decision making and transparency.

This is where AI adoption is constrained or unlocked.

What This Means for AI Adoption

AI is a tool. It’s not magic, but it does have capabilities that present challenges to safely integrate it into existing systems.

These challenges are only amplified when debated strategically without looking at the lowest level at which AI can be controlled – where it actually lives – the process and task level.

This means that companies that understand how to manoeuvre within risk boundaries, and can flex risk appetite safely, are better placed to adjust the levers of trust.

AI Adoption is shaped by people, relationships and the cadence of change that their collective effort can produce.

AI Adoption is enhanced through training, corporate mission, values and a strong culture.

AI adoption is enabled by alignment with risk. Without it, you’re not scaling innovation – you’re eroding trust.
