Episode 6: intereach AI Adoption
This episode sets out typical AI adoption strategies, with commentary on risk and effectiveness, and recommends a combined approach for intereach based on its AI maturity. It also provides implementation steps and timelines.
Common AI Adoption Pathways
Eight common AI adoption pathways are identified below; each is described with a risk and benefit annotation:
- Path 1: Staff Shadow Adoption: Adopted informally by staff; often lacks governance and carries high risk with limited benefit unless directed.
- Path 2: Vendor Implementation: AI features activated within existing tools; generally low risk with the potential for organisational returns.
- Path 3: Governance Driven: Controls set before identifying practical AI cases, posing high risk of stifling innovation.
- Path 4: Strategic Experimentation: Pilot programs support targeted, structured AI learning; low risk if managed well and likely to deliver long-term value.
- Path 5: Hype Motivated: Adoption for image purposes, driven by executive need to appear innovative; high risk and often minimal return.
- Path 6: Competitive Response: Reacting to competitors' market announcements; often leads to rushed, high-risk adoption with little measurable benefit.
- Path 7: Customer Demand: Reactive uptake as customers request AI services; unlikely to drive intereach AI adoption in the medium term.
- Path 8: Efficiency/Cost Reduction: Pursuit of cost savings through efficiency; medium risk and moderate returns, depending on alignment with business needs.
intereach Approach
Given the numerous potential approaches to AI adoption, it is understandable that many organisations pursue a combination of strategies. Organisations with limited experience in AI often exhibit recurring patterns, as outlined below. For intereach, an organisation progressing in AI maturity, the recommended approach is also described.
Immature Organisation
- Path 5: Hype Motivated: often the strongest element of AI change.
- Path 1: Shadow Adoption: staff-led adoption that just "happens anyway".
- Path 3: Governance Driven: governance panic, a reactive response often mirroring actions from Path 5.
AI Maturing intereach
- Path 1: Shadow Adoption: staff experimentation, but enabled and led by the executive.
- Path 4: Strategic Experimentation: experiments focused on business improvement, allowing deeper AI learning.
- Path 2: Vendor Embedded AI: organisationally led, allowing capture of quick wins where they are both available and meaningful.
intereach Path 1: Shadow AI
Sanction staff AI experimentation with clear guardrails and capability guidance to ensure organisational safety (see AI Prompt), aiming for a measurable increase in AI engagement over 6-12 months.
NOW (0-2 mths): Share AI usage guidelines with all staff, set up a feedback channel for AI questions, and offer simple prompt-based experiments (e.g. a ChatGPT cheat sheet) relevant to intereach tasks.
NEXT (2-4 mths): Launch formal digital training on Microsoft apps and advanced Copilot use; record sessions for LMS inclusion.
LATER (4-9 mths): Appoint AI champions in each department, similar to a typical data governance structure. AI champions would:
- Document emerging use cases and assess value
- Support colleagues with troubleshooting and best practices
- Identify risks and escalate concerns
- Recommend use cases for broader adoption
intereach Path 4: Strategic Experiments
intereach should focus on business improvements that generative AI can deliver, particularly in processes or services with high staff overhead due to repetitive or time-consuming tasks.
NOW (0-2 mths): Identify a Top 5 backlog and clearly define each business problem, such as large grant applications, board reporting, and data insights gathering.
NEXT (2-4 mths): Build an experimentation framework (see AI Prompt), assign staff and resources, set clear objectives, and establish governance to help quickly end unproductive trials or formalise successful ones.
LATER (4-6 mths): Launch the first timeboxed experiment, evaluate results, and iterate as needed.
intereach Path 2: Vendor Embedded AI
intereach has been updating its software support capabilities over the past 12-18 months across both customer-facing and back-office areas. Cloud service vendors are swiftly introducing generative AI tools to their services (see AI Prompt), which may offer potential benefits for intereach.
NOW (0-1 mths): Audit the AI roadmaps of key technology providers (e.g., Microsoft, Employment Hero, Talkdesk, Lumary). Create a three-stage timeline for emerging AI features: Available Now, Coming Soon, Longer Term. Evaluate each feature's business value and ease of adoption.
NEXT (2-3 mths): Appoint an AI change champion to assess the top 3 vendor opportunities. Present the risks and benefits of each opportunity, including commercial, legal, and cyber considerations, to executives before deciding on adoption.
LATER (4-6 mths): Roll out selected vendor AI quick wins. Review outcomes and update the adoption roadmap as needed.
AI Prompts for further research...
"How can organizations identify, assess, and legitimize valuable AI tools and practices that staff are already using informally (shadow AI), transforming grassroots innovation into organizational capability while managing the risks of unsanctioned AI adoption?"
"How should medium-sized organizations structure their AI experimentation efforts to balance innovation with governance, including who should lead experiments, how to select projects, what resources to allocate, and how to evaluate success?"
"How should organizations assess AI features being added to cloud services they already use (like Microsoft 365 Copilot, Salesforce Einstein, or Google Workspace AI) - including whether adoption is optional or inevitable, what new risks emerge, cost changes, and whether the integrated AI delivers better value than standalone alternatives?"