AI is redefining the business environment in real time, demanding a disciplined re-evaluation of governance structures.
As innovation accelerates, governance functions, particularly in regulated industries, are set to shift from an annual ‘set and follow’ approach to a frequently reviewed, dynamic framework, one designed to understand and manage the new layers of risk that the growing adoption of AI introduces.
When responsibly deployed, AI is arguably changing our world for the better. In a rapidly evolving business environment, transparency, accountability, ethics and consistency are key drivers of the human experience and, by extension, the customer experience.
Leading financial services institutions are racing to enhance the customer journey through the accelerated adoption of AI and automation.
As businesses and individuals race towards a future powered by AI, automation, and technology-backed innovation, the call to protect and uphold our human values grows louder. The Financial Services Conduct Authority has sounded a note of caution: “…we must ensure that hyper-personalisation does not become the byword for hyper-exploitation and that generative AI chatbots are not mis-selling complex products to consumers who do not understand them.”
For executive teams, the potential productivity and profitability gains from fully embracing generative AI and process automation are real. This played out in a recent announcement by Microsoft that AI saved the company more than $500m in costs in one year; Microsoft also announced 15,000 job cuts, and the connection is tempting. Similarly, Salesforce noted that 30% of internal work at the company is being handled by AI, allowing it to reduce hiring for some roles. Microsoft also noted that, through the use of its Copilot AI assistant, each salesperson is finding more leads, closing deals faster and generating 9% more revenue. The allure of using AI technology to inform opinions, rationalise arguments and improve productivity and profitability is undeniable.
Turning to the key risks that organisations must consider when implementing an AI strategy, top of mind are the alignment of AI initiatives with organisational policies; an over-reliance on AI that may stifle critical thinking; the quality of the data a model ingests; model drift, whereby models become less accurate over time if they are not recalibrated; regulatory and legal risks, such as AI applications contravening data privacy laws and regulations; cyber security risk; and, lastly, AI systems unintentionally reinforcing inherent biases.
Governance teams at major firms are rightly concerned: unchecked AI adoption combined with rising cyber risks could lead to serious operational incidents and a consequent erosion of public trust. And when public trust is threatened, corporate governance must respond, preferably proactively.
Illustratively, data protection offers an excellent testing ground for new thinking in AI adoption. At a human level, we understand that the protections introduced by legislation such as the EU GDPR and South Africa’s Protection of Personal Information Act (POPIA) are necessary guardrails to guide responsible data collection and to govern how data is accessed and used. Giving AI applications unfettered access to this data, however, risks losing control over how we gather, store, use and protect it, and further exposes systems to looming cyber security threats.
In a 2024 edition of the Harvard Business Review (a Harvard Business School affiliate), a suggested approach to addressing the erosion of public trust arising from AI and machine learning initiatives was coined ‘Trusted AI’. The concept holds that there are “levers of trust” which, if employed, would significantly increase – rather than reduce – public trust in AI and related programmes. These include “accountability, transparency, consistency and empathy”, which, if not harnessed, the authors argue, will hinder successful relationships with stakeholders, including employees and customers, regardless of the growing reliance on AI.
At a recent side event of the G20 Finance Track, held at Zimbali in KwaZulu-Natal, South African Reserve Bank Governor Lesetja Kganyago, discussing AI, summed it up beautifully: “…Technology will change. The tools we use will evolve beyond recognition. But our mandate remains constant [emphasis added]…” Similarly, from an enterprise risk perspective, the foundation for safe, enterprise-wide AI adoption will continue to be a robust governance framework rooted in the ethical values of the business.
Governance frameworks should be flexed to focus the lens on requirements such as: evidence of how AI is used, tested and maintained; the stage at which processes include a “human-in-the-loop” for critical decisions; when, how and how often AI models are tested for bias; whether decisions taken by AI can be explained to users (transparency); and whether there is clear accountability (who is responsible when AI influences a decision that goes wrong?). Lastly, do training programmes adequately upskill teams to work responsibly with AI?
In the interest of accountability and transparency, continuous disclosure of an organisation’s reliance on AI is vital.
Stewards of corporate governance and risk management, tasked with protecting human-based legacy frameworks, must continue to draw on value systems – the ethical and moral foundations of human values – to defend purpose-driven brands.
Article by Nazrien Kader, Old Mutual Limited