Episode 2: AI Risk
This episode sets out the 3 broad areas of Artificial Intelligence (AI) risk that business leaders must be aware of when considering the adoption of AI solutions. The blog has 4 sections: an introduction to the risk landscape generally, a deeper dive into each of the 3 areas (each with an intereach use case to aid understanding), and an AI prompt section for further exploration.
"Remember when 95% of companies investing in AI reported zero return? It's not because AI doesn't work - it's because most organizations don't understand the risks they're taking on. Before you deploy that next AI solution, you need to understand three critical categories of risk..." -- MIT & Claude.ai
3 Areas of AI Risk
For intereach, or any organisation, the risk-managed adoption of an AI solution depends upon 3 factors:
- Model Risk: The appropriate selection of the underlying AI Model that will be used to deliver a specific business outcome.
- Implementation Risk: The manner in which the AI model is deployed within the organisation.
- Usage Risk: The way the broader AI solution is presented for use and the organisational governance structure that accompanies it.
Model Risk
Sources of AI model risk include:
- Model Training Flaws: If intereach were to use an "off the shelf" AI model that was "trained" (see AI Prompt) on a biased or incomplete data set, the model would not be representative in all situations, presenting a range of potential operational and ethical challenges.
- Poor Data Labelling: AI models rely on structured data, and structured data is by definition "labelled". Labels (or tags) help AI models identify the data on which they are trained; if the labels are wrong or incomplete, the AI learns incorrect data associations, again leading to incorrect or misrepresented model outcomes.
- Model Drift: An AI model must dynamically adapt to change; when it doesn't, the model is said to "drift". What was accurate at a previous point in time may become outdated, and in many circumstances this is inevitable without active management of the model.
- Hallucinations & Accuracy Issues: AI models can confidently generate completely false information. This is a well-documented risk with public AI models like ChatGPT, where it is even possible to seed models with false information. AI cannot self-police, as it can only iterate on the data and training it has access to at a point in time.
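For readers who want to make "drift" concrete, the idea can be sketched in a few lines of code. The example below is a minimal, illustrative drift check only, not a production monitoring tool: it compares the distribution of a model's recent output scores against a baseline captured at deployment, using the Population Stability Index (PSI), a commonly used drift metric. All data values and thresholds here are hypothetical.

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two score samples.
    Rule of thumb: < 0.1 stable; 0.1-0.25 moderate shift; > 0.25 significant drift."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # avoid zero width if all scores are equal

    def bucket_fracs(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small floor avoids log(0) for empty buckets
        return [max(c / len(sample), 1e-4) for c in counts]

    b, c = bucket_fracs(baseline), bucket_fracs(current)
    return sum((cf - bf) * math.log(cf / bf) for bf, cf in zip(b, c))

# Hypothetical model scores: captured at deployment vs. collected today
baseline_scores = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6]
current_scores = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]

score = psi(baseline_scores, current_scores)
if score > 0.25:
    print(f"ALERT: significant drift detected (PSI={score:.2f})")
```

The point for a business reader is not the arithmetic but the pattern: drift management means routinely comparing today's model behaviour against a known-good baseline and raising an alert when the gap exceeds an agreed threshold.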
RISK RESPONSIBILITY: As intereach do not train their own AI models, internal organisational responsibility for these types of risk lies with designated vendor managers, who must regularly interrogate third-party suppliers about their AI modelling standards and approach.
USE CASE: For intereach, Model Risk is an important but less well understood source of organisational risk. Almost certainly intereach are using AI functionality within vendor solutions now; some vendors, like Talkdesk (e.g. voice recognition AI), will be able to update their solutions with ever more sophisticated AI models trained on large bodies of public data. Other vendors, who are quickly adding AI models to existing solutions, risk deploying models that become fully private within their solution environment and rely primarily on a much smaller pool of client data for dynamic adaptation.
Implementation Risk
Sources of AI implementation risk include:
- Unstructured Data Access: Proper access controls would be required if intereach were to implement an AI solution across internal data environments; without them, the organisation risks inadvertently exposing private information to the wrong people or leaking sensitive data across departments.
- Poor Data Indexing: As suggested in the previous blog, if intereach's data isn't properly structured and tagged, an AI model will search in the wrong places and return incomplete or irrelevant results.
- Integration Failures: In a scenario where intereach implement an AI model that spans organisational systems, planned integration becomes essential; otherwise there is a risk of creating incomplete or broken workflows (and disaggregation of the underlying data).
- Inadequate Monitoring: Depending upon the business scenario, it can be important to actively track AI model performance ("drift detection" - see AI Prompt) so that bad AI decisions are detected before organisational damage occurs.
- Vendor Lock-In: Over-dependence on a single AI solution risks leaving intereach unable to adapt as technology evolves or business needs change.
RISK RESPONSIBILITY: For intereach, responsibility for managing implementation risks will lie with the project teams tasked with rolling out new systems or system augmentations. The internal technology team would also be expected to have direct responsibility in this area.
USE CASE: The most likely first encounter with Implementation Risk for intereach (as has been the case for many organisations already) is the introduction of more dynamic AI solutions into the back-office Microsoft computing environment. It's likely intereach have a great deal of private data sitting in email inboxes, team chat records and shared folders that would become easily discoverable by AI without the right implementation approach.
Usage Risk
Sources of AI usage risk include:
- Data Leakage into Public Models: intereach employees copying sensitive business information into ChatGPT or other public AI tools (whether inadvertently or deliberately) can lead to the irretrievable sharing of customer data and other sensitive information with the world.
- Black Box Decisions: May occur where an intereach participant or staff member is unable to understand how an AI-driven business process has reached its conclusion, leading to a loss of trust in the organisation (see AI Prompt). This would be especially dangerous if AI were introduced into intereach service lines where participant clinical decision making occurs. (NOTE: This risk should also be considered as part of AI implementation.)
- Over-Reliance & Deskilling Teams: Anticipates a future operating model where intereach staff have become so heavily dependent on AI that they no longer have the capability to apply critical thinking. Should the AI solution fail, this would introduce a business continuity risk for the organisation.
- Misrepresentation & False Credentials: This risk considers the situation where staff (and in some cases customers) take AI-generated content and present it as their own work. At stake here are issues of competence and accountability: who is responsible when AI-generated advice goes wrong?
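The data leakage risk above can be partially mitigated with even very simple guardrails. The sketch below is purely illustrative and no substitute for a proper data loss prevention product: it scans a draft prompt for a few hypothetical sensitive patterns (email addresses, Australian-format phone numbers, and a made-up internal "CLIENT-nnnn" reference format) before the text is pasted into a public AI tool.

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated DLP tool.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b(?:\+?61|0)[23478]\d{8}\b"),
    "client reference": re.compile(r"\bCLIENT-\d{4,}\b"),  # hypothetical internal ID
}

def check_before_sharing(text):
    """Return a list of (label, match) pairs found in text destined for a public AI tool."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((label, match))
    return findings

draft = "Please summarise the case for CLIENT-10234, contact jane@example.com"
issues = check_before_sharing(draft)
if issues:
    print("Blocked: remove sensitive data before using a public AI tool:", issues)
```

Even a lightweight check like this changes the default from "anything can be pasted out" to "sharing must pass a gate", which is the governance posture the Usage Risk discussion argues for.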
RISK RESPONSIBILITY: Despite the hype, the move toward a more AI-centric work environment can genuinely be seen as a paradigm shift akin to moving from a craft-based to a manufacturing economy. As a result, these risks are most appropriately managed by the intereach leadership team as they position the organisation in this transformed working environment.
USE CASE: As noted, many employees at intereach already use public AI models (e.g. ChatGPT) to improve personal efficiency, and this risk needs to be more carefully weighed. A likely emerging Usage Risk use case is the black box decision risk: as more agentic decision making is deployed, it will become important for participant-facing staff to have a much deeper understanding of the intereach service blueprint to better assist participants in making sense of their service journey.
AI Prompts for further research...
"How does bias or incompleteness in training data translate into real-world failures or harm when AI models are deployed in business operations?"
"Can you tell a simple story that illustrates what model drift is and why organizations need to monitor for it, without using technical jargon?"
"What are the most significant real-world examples from service industries where black box AI decision-making caused problems, and what were the specific consequences for customers, employees, and the organizations involved?"