ARTICLE 31 Oct 2024

What Are Investors Expecting from Boards Regarding AI?

The complexity and rapid growth of AI introduces significant challenges for boards.


Artificial Intelligence (“AI”) has rapidly transformed from a futuristic concept into a driving force behind innovation across industries. For investors, AI is now a key factor shaping corporate strategy and risk management. Boards are expected to understand both the opportunities and the risks AI presents.

The complexity of AI introduces significant challenges for boards. The rapid pace of AI development creates uncertainty around potential disruptions to business models. Harvard Business Review points out that “boards need to take AI seriously,” as companies face increasing pressure to integrate AI effectively while also managing the risks (Source). Boards are now expected to implement strong governance frameworks to address these uncertainties.

The Growth of AI 

AI’s development from research to boardrooms has been fast and transformative. As Impax Asset Management notes, AI has a “double-edged role in the sustainability revolution,” offering the potential to solve global challenges while also posing risks if not implemented properly (Source). Investors are increasingly aware of AI’s potential to drive both innovation and disruption.

Certain industries have seen more significant AI-driven changes than others. Sectors such as finance, healthcare, and infrastructure have embraced AI to improve decision-making and efficiency. Fidelity Investments mentions that AI's opportunities extend beyond tech, with industries like healthcare using AI for improved diagnostics and treatment, and investors are eager to see how companies in these sectors continue to benefit from AI (Source).

There is also concern among investors about AI becoming over-hyped. ERSTE Asset Management recently questioned whether the AI rally might be nearing its end, highlighting concerns about inflated market expectations (Source). Investors expect boards to be cautious about getting swept up in AI hype and instead focus on realistic, long-term strategies.

Responsible AI and Industry Expectations

AI introduces substantial ethical challenges, especially concerning data privacy, algorithmic bias, and job displacement. Fidelity Investments has already highlighted this, mentioning that the industry has faced lawsuits related to AI’s potential risks. These include issues such as false claim denials, privacy breaches, and concerns over reliability and trust, which are common across various sectors (Source).

As detailed in its recent Engagement Guide on Artificial Intelligence, the International Corporate Governance Network (“ICGN”) encourages constructive investor-company dialogue on this fast-evolving and increasingly important topic, emphasising the need for responsible AI and the following pillars:

  • Board Oversight: Boards should be able to explain to investors the extent to which the company approaches AI as a risk or as an opportunity, and its short, medium, and long-term plans for integrating AI as part of its business model, while ensuring they are properly equipped to oversee AI related risks and opportunities.
  • Responsible AI Practices: Companies should implement AI in a way that preserves trust in the company and prevents, as far as reasonably possible, economic, human, social, and environmental harm.
  • Risk Management: Boards should ensure that companies have strong risk management processes to identify, assess and mitigate financially material AI-related risks, as well as potential adverse impacts on society and the environment.
  • Transparency and Explainability: Management should be able to explain to their boards how the AI systems they develop or use have been designed, trained, tested and scaled, and how they align with human values and intent. Furthermore, all companies should be transparent about how the AI systems they deploy collect, use, and store personal data.
  • Regulatory Compliance: Boards should ensure that company management implements existing regulation and relevant standards on responsible AI, such as the OECD’s AI Principles, UNESCO’s Recommendation on the Ethics of AI, or ISO/IEC 42001:2023, which specifies requirements for establishing, implementing, and improving AI management systems in an organisation.

Investors’ Expectations

Harvard Business Review notes that boards must prioritise AI oversight in their long-term risk management plans (Source). This shift in focus reflects a growing recognition that AI is not just a tool for innovation but also a potential risk if not governed properly. Companies that demonstrate clear AI governance frameworks, including risk assessments and impact evaluations, are likely to gain stronger investor support.

Using the AQTION Platform, we highlight below the expectations communicated by five large investors:

1. Norges Bank Investment Management ("NBIM")

NBIM stresses the importance of responsible AI for well-functioning markets and the validity of products and services. They advocate for comprehensive regulations that foster safe AI innovation while mitigating risks, and notably highlight the following key elements of responsible AI (Source):

  • Board Accountability: Boards should oversee AI governance and strategy, ensuring an appropriate balance between technological advancement and societal impact. This includes aligning AI policies with international standards like the OECD AI Principles and UNESCO’s Ethics of AI guidelines.
  • Transparency: Companies should be clear about how their AI systems work, ensuring alignment with human values. Transparency allows stakeholders to assess the systems' impact, accuracy, and reliability and helps build trust.
  • Risk Management: Effective AI risk management should be proactive and transparent, addressing risks to businesses, individuals, and society. AI systems, guidelines and risk management processes should also be independently verified and regularly audited over time.

2. BlackRock

For companies that deploy generative artificial intelligence (GenAI), BlackRock will seek to understand how their boards are building sufficient fluency in AI – including remaining abreast of technological, strategic, and regulatory developments.

BlackRock will also seek to understand how boards stay informed of management’s strategic decision-making and oversee the company’s evaluation of GenAI’s impact on key stakeholders and navigation of any associated risks.

BlackRock views positively companies that adopt principles-based approaches to their disclosures, detailing how they incorporate GenAI into their business strategies, with a notable focus on accountability, inclusivity, fairness, transparency, security, and reliability (Source).

3. Federated Hermes

Federated Hermes expects companies to implement robust governance and policies over AI and disclose the range of purposes for which they use algorithmic systems; explain how they work, including what they optimise for and what variables they consider; and enable users to decide whether to allow them to shape their experiences.

Federated Hermes also offers an engagement framework with companies based on six core principles (Source, Source):

  1. Trust: Companies should educate users about data privacy rights, give them control, and obtain informed consent.
  2. Transparency: Companies should be open about data tracking, disclose how they measure the robustness of data governance, and inform users when data is used for scoring and screening.
  3. Action: Companies should address potential negative societal impacts of AI, such as data and process bias, and take proactive steps to mitigate risks.
  4. Integrity: Companies should ensure data quality for training AI systems, mitigate bias, and safeguard against manipulation or misuse.
  5. Accountability: Companies should establish clear accountability for AI systems, including governance and oversight, and take responsibility for impacts on stakeholders.
  6. Safety: Companies should prioritise human safety in AI development and use.

4. Legal & General Investment Management ("LGIM")

LGIM believes investors must engage with companies on baseline expectations for AI governance, risk management, and transparency. They have outlined expectations to which companies should dedicate resources in proportion to their risk exposures and business models (Source, Source):

Governance

  • Name a board member or committee responsible for AI risk oversight and strategy.
  • Provide the board with education on business-specific AI risks at least annually.

Risk Management

  • Conduct product safety risk assessments across the business cycle, including on human rights.
  • Ensure AI systems are explainable, allowing the board and relevant functions to describe inputs, processes, and outputs.
  • Identify high-risk AI systems or inputs and describe mitigation efforts.
  • Build trust by soliciting input on high-risk AI systems from third parties and civil society.
  • Provide reasonable paths to give feedback or seek remediation if AI systems cause harm.

Transparency

  • Disclose governance policies and risk processes regularly.
  • Make it clear to customers and civil society when AI systems are used.


5. abrdn - Aberdeen Standard Investments

abrdn focuses on engaging with companies to promote responsible AI development and use, aiming to create sustainable benefits for shareholders and other stakeholders. They highlight that AI systems lack inherent ethical judgement and therefore require clear governance and oversight to ensure outcomes align with qualitative objectives and sustainable value creation. Hence, they believe companies with significant exposure to AI must demonstrate (Source):

  • Robust Governance and Oversight.
  • Ethical Guidelines.
  • Appropriate Due Diligence.
  • Transparent Practices.


For more information as to which investor communicates on AI, please reach out to enquiries@aqtion-platform.com.

Conclusion

As AI continues to shape corporate strategy, investors expect boards to take a more proactive role in overseeing AI’s integration. The focus is no longer just on AI’s potential for innovation but also on how companies are managing the ethical, social, and regulatory risks it brings. Moving forward, AI governance will remain a critical area where investors demand transparency, accountability, and responsible innovation from boards.