AI: From Buzzwords to Boardrooms
Crafting Your Roadmap to Success
Contents
Introduction
About the Authors
Glossary of Terms
Are you managing AI risks to your advantage?
  Key Resource
  The urgent need for revised risk management approaches
  Redefining risk appetite in the age of AI
Proactive steps for organisations to take
How resilient is your organisation?
  Active testing: The foundation of resilience
  Synchronising crisis response mechanisms
  Navigating the unavailability of key resources
  Leveraging AI and emerging technologies
Generative AI in business: Data governance and ethical considerations
Crafting Your Roadmap to Success with AI
  Recognising your starting point on the AI journey
  Crafting your roadmap to success with AI
  Identifying key stakeholders in shaping the AI roadmap
  Implementing the AI roadmap
  Managing and evolving the AI roadmap
  Leveraging industry insights: The value of learning from others’ AI successes
  Quick guide
Conclusion
References and Further Reading
  Books
  Articles
  Online Resources
  Reports
Infographic: The AI journey - From exploration to implementation
AI Ethics and Compliance Checklist
Join our Masterclass
About the Authors
Daniela Castro
Editor and Contributor
Daniela Castro on LinkedIn
Cathy Ford
Author and Contributor
Cathy Ford on LinkedIn
Tim Healy
Principal Author
Tim Healy on LinkedIn
Glossary of Terms
Data Privacy: Ensuring that personal or sensitive information is collected, stored, and used in a way that protects the information from unauthorised access or disclosure.
Token: In NLP, a token is a single instance of a sequence of characters in some particular document that are grouped together as a useful semantic unit for processing.
Introduction
As the world enters the era of artificial intelligence (AI), organisations find themselves at a
pivotal moment, reminiscent of the early days of the internet—a time filled with both vast
potential and uncharted territory. This exciting landscape brings anxiety for senior leaders and
board members as they navigate forward.
This moment calls for a fresh approach to risk management, questioning whether the past
strategies for managing ICT risks are sufficient for AI-driven challenges and opportunities.
The need for a new approach is especially evident given employees often have direct access to popular AI tools. This accessibility may lead to scenarios where the ICT department is not consulted on the appropriate use of AI tools, eliminating the opportunity to implement appropriate risk management measures. Although the
impact of AI varies across organisations and industries, it’s crucial to ask: Are you managing AI
risks to your advantage?
Key Resource:
The NIST AI Risk Management Framework (AI RMF) provides structured guidelines and
best practices for responsibly managing AI risks. This framework helps organisations
protect themselves, their clients, and employees from the adverse impacts of AI initiatives.
The urgent need for revised risk management approaches
It is incumbent upon organisations to critically evaluate and, where necessary, adapt their
risk management frameworks to better align with the AI era. This reassessment is a strategic
imperative to ensure competitiveness and resilience in the rapidly evolving ICT landscape.
It is also crucial to update company policies and procedures to keep up with generative
AI advancements, revisiting vendor management and internal AI usage policies to ensure
compliance with the latest ethical standards and privacy regulations.
Redefining risk appetite in the age of AI
For organisations with an existing Risk Appetite Statement (RAS), a thorough review is
necessary to ensure it reflects the current and anticipated impacts of AI. Organisations should
carefully consider whether AI is categorised under the RAS categories of IT or Operational risk, or
whether it better falls under the organisations’ appetite for Strategic change. For organisations
not employing risk appetite statements, the opportunity exists to establish foundational
guidelines that inform the organisation of the type and amount of risk senior management is
willing to accept in pursuit of AI opportunities.
Be mindful of emerging regulations such as the EU AI Act and Canada’s upcoming Artificial
Intelligence and Data Act (AIDA). These regulations mandate transparency and ethical AI usage,
ensuring that AI implementations align with both legal and ethical standards.
Proactive steps for organisations to take
Audit AI Integration and Use: Conducting a comprehensive audit of how AI is currently utilised across the organisation will likely prove enlightening, often revealing more varied use than anticipated. Understanding current and planned future AI use can illuminate risks, opportunities, and areas for improvement.
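For organisations that want a lightweight starting point, the audit findings can be captured in a simple, structured inventory. The Python sketch below is illustrative only; the departments, tools, and field names are assumptions, and most organisations would hold this information in their GRC tooling or a spreadsheet rather than code.

```python
from dataclasses import dataclass

@dataclass
class AIUsageRecord:
    """One row of the AI usage inventory gathered during the audit."""
    department: str
    tool: str                     # e.g. "ChatGPT", "Gemini", an RPA platform
    purpose: str
    handles_sensitive_data: bool
    approved_by_ict: bool

# Hypothetical inventory entries for illustration.
inventory = [
    AIUsageRecord("Marketing", "ChatGPT", "Drafting campaign copy", False, True),
    AIUsageRecord("Finance", "Gemini", "Summarising customer contracts", True, False),
    AIUsageRecord("Operations", "RPA platform", "Invoice data entry", True, True),
]

# Flag usage that warrants follow-up: sensitive data involved or no ICT sign-off.
for record in inventory:
    if record.handles_sensitive_data or not record.approved_by_ict:
        print(
            f"Review: {record.department} uses {record.tool} for '{record.purpose}' "
            f"(sensitive data: {record.handles_sensitive_data}, "
            f"ICT approved: {record.approved_by_ict})"
        )
```

Even a simple register like this makes it easier to see where AI use has outpaced governance and where the follow-up questions in the next steps should be directed.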
Engage with Regulators: While regulatory oversight varies between industries in Australia,
engaging with regulators regarding AI is paramount. Understanding the perspectives and
guidelines from bodies such as the Australian Securities and Investments Commission (ASIC)
and the Office of the Australian Information Commissioner (OAIC) can guide compliance,
governance, and strategic alignment. For instance, both ASIC and the Australian Prudential
Regulation Authority (APRA) are looking more closely at AI within the financial services industry.
Proactively communicating with regulators about AI initiatives will likely foster a conducive
regulatory relationship. Given regulators themselves are grappling with AI and its challenges,
don’t be surprised if they proactively reach out to assess how your organisation is dealing with AI.
Convene Risk Workshops: Gather multidisciplinary teams to identify, assess, and strategise
on AI risks. Such collaborative efforts can unearth insights and forge consensus on the way
forward. Start with foundational questions about the adequacy of existing risk management
practices in the context of AI. Is there an AI-specific risk register, or are AI risks integrated into
the broader enterprise risk framework? Understanding the nature of AI risks, whether strategic
or operational, is essential.
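To ground the risk register question, the sketch below shows one way an AI-specific entry might be recorded and scored. The rating scales, field names, and the example risk are assumptions for illustration, not a prescribed framework; any real register should follow your existing enterprise risk methodology and guidance such as the NIST AI RMF.

```python
# Minimal sketch of an AI-specific risk register entry with a simple
# likelihood x impact rating. Scales and the example risk are illustrative.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"minor": 1, "moderate": 2, "major": 3, "severe": 4, "critical": 5}

def risk_rating(likelihood: str, impact: str) -> int:
    """Simple likelihood x impact score used to prioritise treatment."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

ai_risk = {
    "id": "AI-001",
    "category": "Operational",  # or "Strategic", depending on your RAS categories
    "description": "Sensitive customer data entered into a public AI tool",
    "likelihood": "possible",
    "impact": "major",
    "owner": "Chief Risk Officer",
    "treatments": ["Staff training", "Privacy Impact Assessment", "Enterprise AI tier"],
}
ai_risk["rating"] = risk_rating(ai_risk["likelihood"], ai_risk["impact"])

print(f"{ai_risk['id']}: rating {ai_risk['rating']} "
      f"({ai_risk['likelihood']} likelihood, {ai_risk['impact']} impact)")
```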
How resilient is your organisation?
It is assumed that your organisation already possesses well-documented and widely understood
plans for managing crises, business continuity (BCP), disaster recovery (DR), and cyber events.
If not, they should be developed as soon as possible. However, true resilience goes beyond
the existence of plans, no matter how detailed or technical. This e-book explores enhancing
organisational resilience by examining factors beyond initial planning, such as the critical
importance of active testing, preparing for worst-case scenarios like concurrent events,
addressing the potential unavailability of key personnel, vendors, and off-shore providers, and
the integration of AI and other emerging technologies.
Incorporating AI into Business Continuity Planning (BCP) can enhance organisational resilience
by providing scalable solutions like Robotic Process Automation (RPA) for rapid workforce
extension during crises. However, it is equally important to plan for scenarios where AI systems
may be compromised, ensuring contingencies are in place to maintain operational resilience.
By focusing on these areas, the aim is to provide insights into how organisations can not only
prepare for disruptions but also apply the axiom “hope for the best but plan for the worst.”
Active testing: The foundation of resilience
Your organisation should plan for increasingly complex testing of its resilience as it matures and
learns from previous tests. Over time, additional stress points should be factored into the tests,
potentially without forewarning participants. For example, in testing a disaster recovery scenario
involving a major system outage, layering a major weather event (that potentially caused the
outage) can introduce complexities such as limited staff availability, core business disruption,
and supply chain issues. The availability of vendors and issues relating to offshore or outsourced
resourcing are also useful scenarios to be factored into the tests. Such complexities test how
quickly the organisation can pivot its plans as the event unfolds, ensuring that resilience plans
are not just theoretically sound but practically viable under the most challenging conditions.
Incorporating these increasingly complex scenarios into active testing routines stress-tests the
organisation’s resilience and promotes a culture of continuous improvement and adaptability.
It compels organisations to regularly review and update their crisis management strategies,
ensuring that resilience plans remain dynamic and reflective of the current threat landscape. By
applying an ever-increasing complexity to resilience testing, organisations stand prepared not
just for isolated events but for the more complex challenges the modern world will likely throw at them.
Generative AI in business: Data governance and ethical considerations
The advent of generative AI technologies has significantly altered the landscape of the modern
workplace, introducing new capabilities and efficiencies previously unimagined. Reports
from the U.S. indicate that approximately 50% of employees are already integrating tools like
ChatGPT or Gemini into their daily tasks, with Australian figures on par or even surpassing
this. Despite the increased usage, trust remains a pivotal issue, as underscored by the “Trust in
Artificial Intelligence – A Global Study 2023” conducted by the University of Queensland and
KPMG. This study, which surveyed over 17,000 participants across 17 countries, found that only
40% of Australian employees trust AI, highlighting a significant gap in comfort and confidence
regarding AI’s role in meeting management, regulatory expectations, and balancing risk versus
reward. This hints at the likelihood that employees are using publicly available AI tools without
fully understanding them, especially their data governance and privacy implications.
A good example of the ethical and privacy concerns surrounding generative AI usage in
corporate settings involves the handling of sensitive data. OpenAI’s Privacy Policy states that
transaction history, along with comprehensive user data, can be stored by OpenAI and shared
with OpenAI affiliates, yet the specifics of data retention periods and where data is stored
remain ambiguous. The platform may issue privacy warnings and discourage directly inputting
sensitive customer data if the question is asked within the chat, yet it simultaneously facilitates
the uploading and processing of such data via the upload attachment feature without clear
safeguards. This contradiction exemplifies the complex balance between leveraging AI’s
benefits and ensuring the protection of sensitive information.
Directors and Senior Executives should actively engage with their organisation’s Privacy Officer,
IT Department, Risk Manager, and other relevant stakeholders by asking the following set of
starter questions regarding the use of AI within their organisations:
• What data is being shared with AI? Direct this question to your IT Department and Data
Privacy Officer to understand the nature and sensitivity of the data being processed by AI
tools. This includes evaluating whether AI has been used to support the creation of report
content, what data was shared with the tools to produce the output, and the scrutiny of
the supplied results. Using a simple example, has an advanced version of a commercially
sensitive Board paper draft been provided to AI for re-wording and enhancement, and was
there any redaction of commercial or customer information prior to sharing with the AI tool?
• Are the AI tools used secure and private? This question should be posed to your IT Security
Team and Privacy Officer. Scrutinise the privacy policies of AI platforms to understand how
data is stored, shared, and protected. If a Privacy Impact Assessment (PIA) has not yet been
conducted by your organisation, request that one is performed urgently, and the results
shared with the Board. The PIA should also seek to establish where the data is being stored
and whether the organisation remains compliant with Federal data sovereignty laws and
any regulatory requirements within your industry. Asking such questions may result in
updates to your organisation’s Privacy Policy and practices.
• What controls are in place, and are they sufficient? Engage with your Risk Manager and
the team responsible for your organisation’s Data Governance Framework. Assess the
balance between harnessing AI’s full potential and implementing necessary guardrails to
protect the organisation and its stakeholders. There is no magic formula, but it is fair to
assume that either extreme end of the spectrum is not the place to be. The questions, therefore, are: what controls do we already have within the organisation’s Data Governance Framework, and are these existing controls sufficient to extend to the use of AI tools?
• Can we trust AI output? This is a critical question for your IT Department and those involved
in data analysis and reporting. Organisations have well-established practices to screen,
recruit, and train the best employees available to them. The use of AI as a supplementary
or primary source of information used by employees is probably not subject to the same
scrutiny. In the context of AI tools like ChatGPT, which are updated with information up to
certain points in time (e.g., April 2023), how do we ensure the insights provided are accurate
and relevant, and not subject to potential bias on the part of the tool provider?
• How do we ensure ethical use of AI? Unless your organisation has an Ethics Officer,
this question is best directed to the corporate governance or risk area. Confirm that
guidelines and policies relating to the ethical use of AI tools are in place and align with
the organisation’s values and legal obligations. While industry-specific, this includes
considerations relating to bias, fairness, accuracy of results, and transparency in AI-
generated outputs such as providing support for decision making and price setting.
• What training and policy assurance is occurring? Direct this towards your Human
Resources Department and those responsible for staff training and compliance. Establish
what training and awareness initiatives have occurred to support staff AI usage. How often
are assurance or audit activities conducted to confirm that staff are using the tools appropriately, in line with training and policies?
• Are AI tools worth paying more for? Another question for the IT team and senior managers
controlling the purse strings. Will the organisation benefit from adopting the enterprise
versions of tools such as ChatGPT and Gemini? If this has not been explored, it should be, to establish what inherent protections are provided to support improved data governance and privacy that may not be available in the popular free versions of the tools. For example, data
encryption, separate storage of data, and adherence to privacy regulations such as GDPR are
all available in user-pay models of both ChatGPT and Gemini.
The table below gives an indicative comparison of free, paid, and enterprise tiers of these tools:

Feature | Free tier | Paid tier | Enterprise tier
Data Encryption | In transit and at rest | In transit and at rest | In transit and at rest, with options for enhanced encryption (AES-256)
Data Isolation | Data may be mixed with other users | Data may be mixed with other users | Strong data isolation ensuring enterprise data is not mixed
User Data Control | Standard deletion requests | Standard deletion requests | Advanced data control, including deletion, access, and audit logs
Security Audits & Penetration Testing | Regular audits | Regular audits | Regular plus customized audits and penetration testing
API Security | Standard API security features | Standard API security features | Enhanced API security, including custom authentication and authorization options
Usage Limits | Limited usage, with potential caps during peak times | 50 messages every three hours | Unlimited usage, no caps
Context Window | Smaller context window | Larger context window | Up to 128,000 tokens
Priority Access and Support | General access | Faster response times | Priority access to new features, dedicated customer support
The rise in AI usage underscores a pivotal moment for Directors and Senior Executives to
critically evaluate how these technologies are employed within their organisations. Ensuring
that AI tools enhance productivity and innovation while adhering to data privacy, security, and
ethical norms is paramount.
As we venture deeper into the era of generative AI in business environments, the landscape
is marked by both great potential and inherent challenges, particularly in the realms of data
governance and customer privacy. It’s clear that there isn’t a one-size-fits-all set of questions
to rely on to navigate these complexities. However, the act of questioning is indispensable.
Engaging in a diligent line of enquiry about the integration, implications, and governance
of AI technologies is crucial to ensure AI innovation co-exists in harmony with integrity and
accountability. By fostering a culture of thoughtful interrogation, senior leadership can help
businesses to not only leverage AI to its fullest potential but also ensure that they do so safely
and in alignment with government and customer expectations.
Crafting Your Roadmap to Success with AI
In another part of this e-book, the analogy was drawn between AI and the emergence of the
internet in the mid to late 1990s. AI, just like the Internet, represents unprecedented potential
and challenges for organisations. Over the next 5 to 10 years, AI will transform how businesses
operate, though predicting its exact impact is difficult given the rapid evolution of AI technology
and responses from industries, regulators, and governments. However, it is already clear that AI
is enabling organisations to innovate faster, streamline operations, and engage with customers
in new, albeit potentially less personal ways. In this environment of rapid change, defining an
AI journey is no small feat for organisations. It requires detailed and frequent strategic planning
and a clear understanding of where you currently stand and where you wish to go. This section
outlines practical steps and key considerations for crafting a successful AI roadmap that is
highly aligned with your organisation’s strategic goals.
Recognising your starting point on the AI journey
It is important to acknowledge that each organisation’s AI journey has a different starting point.
Some organisations are at the dip-the-toes-in-the-water stage, still largely coming to terms
with what AI means and what it can do. Others are significantly more advanced, using AI to run
their operations and help make decisions. Understanding how AI is currently used within your
organisation is a critical first step for driving strategic and comprehensive AI adoption. Examples
of the forms AI may take within organisations include:
• Robotic Process Automation (RPA): RPA is widely adopted for automating routine, rule-
based tasks such as data entry and complex transaction processing. By mimicking human
actions within digital systems, RPA bots can enhance operational efficiency and accuracy.
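To make the RPA example concrete, the sketch below shows the kind of rule-based check a bot might run before keying records into a finance system. This is a simplified illustration rather than a real RPA implementation: the file name, fields, and business rules are hypothetical, and commercial RPA platforms configure such rules through their own tooling rather than hand-written code.

```python
import csv

def validate(row: dict) -> list:
    """Apply the routine, rule-based checks a bot would run on each record."""
    errors = []
    if not row["invoice_id"].strip():
        errors.append("missing invoice id")
    if float(row["amount"]) <= 0:
        errors.append("non-positive amount")
    if row["currency"] not in {"AUD", "USD", "EUR"}:
        errors.append(f"unsupported currency {row['currency']}")
    return errors

# "invoices.csv" is a hypothetical export from an upstream system.
with open("invoices.csv", newline="") as f:
    for row in csv.DictReader(f):
        problems = validate(row)
        if problems:
            print(f"Route to a human: {row['invoice_id'] or '<blank>'} -> {problems}")
        else:
            print(f"Auto-process: {row['invoice_id']} for {row['amount']} {row['currency']}")
            # In a real RPA workflow the bot would now enter the record into
            # the target system, via its user interface or an API.
```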
Taking stock of the AI tools and technologies your organisation currently employs is a critical
first step in developing an overarching AI strategy. This initial review not only highlights AI’s
extensive impact on your operations but also identifies potential areas for further innovation
and value creation with AI. Moreover, it highlights one of the key challenges in developing an AI strategy or roadmap: your organisation is already on its AI journey before it has clearly decided on the direction it is heading or what the destination is. When you factor in the pace at which AI is evolving, this realisation demands a deliberate discussion about how the organisation can stay in control of its AI journey and pursue a more strategic approach to AI adoption.
Before diving into the specifics of an AI roadmap, it’s essential to stress that the AI journey exists
to support and enhance your organisation’s core strategy. Unless you are OpenAI, Google, or
a similar company selling AI products, your AI roadmap will be a journey of adding capability
to your organisation to accelerate and achieve the organisation’s strategic objectives. The
development of an AI roadmap becomes a deliberate process of aligning AI capabilities with
strategic priorities, ensuring that every step taken in AI adoption directly contributes to the
organisation’s overarching goals.
Prioritise projects based on impact and feasibility. Evaluate potential projects based on
their expected value to the organisation, implementation complexity, and alignment with
strategic goals.
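A simple weighted-scoring model is one way to make this prioritisation transparent and repeatable. The Python sketch below is illustrative only; the criteria, weights, example projects, and 1-5 ratings are assumptions that each organisation would calibrate for itself.

```python
# Illustrative weighted scoring for ranking candidate AI projects.
# Weights and ratings are invented for the example.

WEIGHTS = {"expected_value": 0.5, "feasibility": 0.3, "strategic_alignment": 0.2}

candidates = {
    "Customer-service chatbot": {"expected_value": 4, "feasibility": 3, "strategic_alignment": 5},
    "Invoice-processing RPA":   {"expected_value": 3, "feasibility": 5, "strategic_alignment": 3},
    "Demand-forecasting model": {"expected_value": 5, "feasibility": 2, "strategic_alignment": 4},
}

def score(project: dict) -> float:
    """Weighted sum of the 1-5 ratings for each criterion."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in project.items())

# Print candidates from highest to lowest score.
for name, ratings in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(ratings):.1f}")
```

Ranking candidates this way makes the trade-offs visible and easier to debate with stakeholders before committing resources.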
Determine infrastructure and technology needs. Identify the technological requirements for
implementing AI solutions, including all elements of architecture (hardware, software, and
cloud services). Consider both current capabilities and future needs.
Prepare for organisational change. Implementing AI will require changes to processes, roles,
and culture. Your AI Roadmap should include the organisational change management
initiatives required to support the successful adoption of AI.
Consider AI as a vital component of your BCP. For example, Robotic Process Automation
(RPA) can serve as a means of rapidly training and extending your workforce capacity,
offering scalability to ramp up operations in times of resource constraint or operational need.
In this sense, AI can be viewed as enabling operational resilience.
The flip side, equally important to factor into your AI Roadmap development, is that the use of AI can also pose a risk to business continuity if redundancy and provisions for prolonged AI outages are not adequately catered for.
Either way, it is important to ensure that your AI roadmap calls out BCP planning and
development to mitigate the risks and better support your organisation’s resilience.
A detailed article covering business continuity, including AI threats and opportunities, can
be found here.
Implement monitoring and evaluation processes. Establish metrics and KPIs to measure the
performance of AI projects against objectives. Regularly review and adjust projects based on
performance data and evolving business needs.
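As a simple illustration of such a checkpoint, the sketch below compares KPI actuals against targets and flags items needing attention. The KPI names, targets, and figures are invented for the example; in practice these reviews usually live in BI dashboards or portfolio reporting rather than standalone scripts.

```python
# Illustrative KPI review for an AI initiative at a regular checkpoint.
kpis = [
    # (name, target, actual, higher_is_better)
    ("Average handling time reduction (%)", 15.0, 11.0, True),
    ("Forecast error (MAPE, %)",            10.0, 12.5, False),
    ("Staff adoption rate (%)",             70.0, 76.0, True),
]

for name, target, actual, higher_is_better in kpis:
    # A KPI is on track if the actual meets the target in the right direction.
    on_track = actual >= target if higher_is_better else actual <= target
    status = "on track" if on_track else "needs attention"
    print(f"{name}: target {target}, actual {actual} -> {status}")
```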
Develop ethical guidelines for AI use. With the rapid adoption of AI technologies, there is an
inherent risk that ethical considerations and regulatory compliance may not automatically
be front of mind for employees engaging with AI. It’s imperative, therefore, to proactively
incorporate ethical guidelines, policies, and compliance measures into your AI Roadmap.
This includes staying informed about existing and emerging regulations that are pertinent
to your industry, ensuring that your roadmap is not only internally sound but externally
compliant. It is essential to address and mitigate ethical risks associated with AI, such
as biases in decision-making, privacy considerations, and a lack of transparency in how AI is used. By
establishing clear principles for responsible AI use and promoting widespread awareness
within the organisation, you can foster an ethical AI culture that aligns with both your
strategic objectives and societal values.
Identifying key stakeholders in shaping the AI roadmap
Your organisation’s I.T team and/or key I.T outsource partners will play a key role in delivering
the AI Roadmap. However, defining and delivering the roadmap are very different things. Given
the previous point made about the AI Roadmap existing to help deliver strategic outcomes, a
much wider audience must be involved in both the crafting and delivery of the AI Roadmap.
A collaborative approach spanning the organisation will be required to ensure the roadmap is
sound and delivered. Key stakeholders to consider include:
• Strategy Team: Since the AI Roadmap exists to support strategy, the Strategy Team
should play a central role in defining the direction of the AI initiatives. Members of the
strategy team need to acquire a deeper understanding of AI than they currently possess.
This may require targeted training sessions to fully grasp AI's capabilities and limitations,
and potentially introducing talent into the team either from outside the organisation or
via transferring expertise from the I.T or innovation functions. This will be important to
ensure Strategy can provide a significant voice in AI adoption to ensure it stays focused on
delivering the organisation’s objectives.
• I.T and Innovation Teams: While I.T's involvement is crucial for addressing the technical
feasibility and implementation of AI solutions, their collaboration with the Strategy Team
ensures that technological deployments are not just technically sound but also strategically
focused. Upfront effort should be made to ensure that the I.T and Strategy teams are fully
aligned and working towards a common goal.
• Legal, Risk and Compliance Teams: Given the ethical, privacy, and compliance issues
surrounding AI, involvement of representatives from these teams is needed to ensure AI
initiatives adhere to all relevant laws, regulatory, and ethical standards. These teams may
also need to act as a conduit to communicate with regulators and industry bodies regarding
AI and its usage within the organisation.
Implementing the AI roadmap
While this e-book is more focused on defining the AI roadmap, there are important delivery
considerations worth calling out, as recognising these factors when defining the roadmap will
help drive success.
Phased Implementation Approach: With the rapid pace of AI development, it is highly likely
that many revisions of the roadmap will occur along the journey. A phased approach to
delivery, focusing on delivering smaller, discrete capabilities, is preferable to broadly scoped,
prolonged initiatives that will likely change or become defunct in the short to medium term.
Adopting a phased approach allows for flexibility, learning, and adjustment as AI continues to
rapidly evolve.
Resource Allocation: The AI Roadmap should emphasise the importance of allocating the
necessary resources, including budget, personnel, and technology, to support the successful
implementation of prioritised AI projects.
Monitoring and Evaluation: It is important to recognise the very high likelihood of changes
to the AI Roadmap over time. This may be due to changes in organisational strategy but
more likely due to the rapidly evolving nature of AI. Strategies for ongoing monitoring and
evaluation of AI initiatives against organisational objectives and AI advancements should be
factored into the AI Roadmap. This implies regular checkpoints need to be included within the
roadmap to allow for re-evaluation of strategic priorities and AI capabilities.
Operational Demands: Emphasising the need for strategic resource allocation across the AI
project lifecycle is crucial. As AI initiatives are delivered, the operational demands of managing
these can divert resources and focus away from future AI initiatives, creating an “operational
drag” that could potentially slow AI Roadmap delivery. To mitigate this, the AI Roadmap
should plan for the resourcing of AI initiatives from start through to post-implementation
support. One potential way to achieve this is to plan for dedicated teams for the ongoing
operation of AI solutions. This ensures a steady focus on new developments in the roadmap
while ensuring the performance of existing AI systems.
Learning and Adaptation: Stress the importance of fostering a culture of learning and
adaptation, where insights from AI projects are used to inform future initiatives and strategy
adjustments. These should be included as activities within the roadmap to ensure that lessons
can be learned and applied to future initiatives.
Leveraging industry insights: The value of learning from others’ AI successes
Whether you are defining or actively delivering your AI roadmap, there is value in regularly
looking outside of the organisation for inspiration and guidance. While each company's AI
journey is unique, the lessons learned from those who have successfully navigated similar paths
can provide critical insights. As AI continues to evolve and organisations embrace it, there will be
an increasing volume of ideas and cautionary tales available to distil into key learnings to factor
into your organisation’s AI Roadmap. Encouraging your strategy and implementation teams to
regularly review case studies from within and outside of your industry can help identify proven
strategies, realistic delivery timeframes and approaches, and methods of problem-solving
you may otherwise not have considered. Integrating these learnings into your AI roadmap not
only provides a broader perspective but also helps mitigate risks by leveraging the collective
experience of the wider business community.
Quick guide
• Acknowledge the unique starting point of your organisation’s AI journey, ranging from initial exploration to advanced implementation.
• Seek to understand the types of AI available to your organisation – from chatbots, virtual assistants, generative AI tools, RPA, through to specialised, industry-specific applications.
• Ensure the AI roadmap development is strategy-led, not IT-driven, to ensure alignment of AI initiatives to the organisation’s broader strategic objectives.
• Ensure AI Roadmap initiatives align with and directly contribute to the organisation’s overarching strategic goals.
• Involve key stakeholders across the organisation, including Strategy, IT, and HR teams, to ensure a comprehensive, collaborative approach to strategic AI adoption.
• Conduct a thorough review of current AI usage within your organisation as a critical first step in developing an AI strategy.
• Map out potential AI use cases and prioritise projects based on impact, feasibility, and strategic alignment.
• Assess data quality and infrastructure requirements to support AI implementation effectively.
• Involve Legal, Risk, and Compliance teams early in the AI journey to address potential risks, ensure regulatory compliance, and navigate ethical considerations effectively.
• Develop ethical guidelines for AI use, addressing biases, privacy, and transparency to foster an ethical AI culture.
• Set up governance structures and implement monitoring and evaluation processes to oversee AI initiatives effectively.
• Identify skills, talent needs, and organisational changes required to support AI adoption and develop an AI-ready culture.
• Prepare for frequent and significant changes of direction to your AI Roadmap as AI continues to rapidly evolve and provide new opportunities and challenges.
• Draw inspiration from industry success stories to inform your AI strategy and navigate the evolving landscape of AI adoption.
Conclusion
We have explored the transformative power of artificial intelligence (AI) and how it can be strategically integrated into your organisation to achieve core objectives and drive innovation. As AI continues to evolve, organisations must navigate both unprecedented opportunities and significant challenges.
By following these practical guidelines and insights, your organisation can navigate the complexities of AI adoption with confidence, ensuring that AI not only drives innovation and efficiency but also aligns with your overarching strategic goals. The journey from buzzwords to boardrooms requires thoughtful planning, continuous learning, and strategic execution. This e-book provides the tools and insights needed to embark on this transformative journey and achieve lasting success with AI.
References and Further Reading
Books
1. "Artificial Intelligence: A Guide for Thinking Humans" by Melanie Mitchell - This book provides a comprehensive overview of AI, explaining its capabilities and limitations in an accessible way.
2. "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark - A thought-provoking book that explores the impact of AI on the future of life on Earth.
3. "Prediction Machines: The Simple Economics of Artificial Intelligence" by Ajay Agrawal, Joshua Gans, and Avi Goldfarb - This book discusses how AI changes the cost structure of predictions and the implications for business and society.
4. "The Fourth Industrial Revolution" by Klaus Schwab - A book that explores the technological revolution and its impact on industries and societies.
5. "AI Superpowers: China, Silicon Valley, and the New World Order" by Kai-Fu Lee - An insightful look at the global AI race and its implications for the future.
Articles
6. "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" - A collaborative report by researchers from various institutions that discusses potential risks of AI and strategies to mitigate them. Available online: arxiv.org
7. "Trust in Artificial Intelligence – A Global Study 2023" by the University of Queensland and KPMG - This study provides insights into global trust levels in AI and its implications for businesses.
8. "The Ethics of Artificial Intelligence" by Nick Bostrom and Eliezer Yudkowsky - An article that explores the ethical considerations of AI development and deployment. Available online: nickbostrom.com
9. "Artificial Intelligence and the End of Work" by Matthew Cole - An article examining the impact of AI on employment and the future of work. Available online: cambridge.org
Online Resources
10. OpenAI - openai.com - A leading research institute and company that develops advanced AI models, including GPT-3.
11. AI Ethics Guidelines Global Inventory - ai-ethics-guidelines.org - A comprehensive collection of AI ethics guidelines from around the world.
12. Kaggle - kaggle.com - A platform for data science competitions and datasets, useful for practicing and learning machine learning and AI.
13. Towards Data Science - towardsdatascience.com - An online publication sharing concepts, ideas, and code related to data science and AI.
14. arXiv - arxiv.org - A repository of electronic preprints (known as e-prints) approved for publication after moderation, covering areas including AI, machine learning, and data science.
Reports
15. "The Future of Jobs Report 2020" by the World Economic Forum - A report that provides insights into how AI and other technologies are transforming the job market. Available online: weforum.org
16. "Artificial Intelligence and the Future of Work" by McKinsey & Company - A report exploring the impact of AI on various industries and job roles. Available online: mckinsey.com
17. "AI in Business: The State of Play and Emerging Trends" by Deloitte - A report that discusses current AI applications in business and future trends. Available online: deloitte.com
By exploring these references and further reading materials, you can gain a deeper understanding of AI, its applications, ethical considerations, and the strategic implications for your organisation.
Join our Masterclass
Don’t miss this opportunity to learn from Cathy Ford and other
industry experts. Equip your organisation with the knowledge
and tools needed to implement effective AI governance and drive
sustainable growth.