Centre for Legal Technology (CLT) member Dr Adam Buick is contributing his expertise in AI regulation to an innovative research project examining internal audit functions for frontier artificial intelligence developers.

As a Research Associate with the AI Governance Taskforce, Dr Buick is working alongside a multidisciplinary team, advised by Aidan Homewood of the Centre for the Governance of AI, to address critical questions about how internal audit can strengthen governance and reduce systemic risks in frontier AI development. The AI Governance Taskforce is a 12-week global programme run by Arcadia Impact that aims to develop participants' AI governance knowledge, skills and work portfolios through the production of policy research in small teams.

The research explores how internal audit functions, widely established in sectors such as financial services and cybersecurity, could be adapted for the unique challenges posed by frontier AI systems. Unlike traditional model evaluations or security certifications, internal audits can address operational and organisational risks such as the internal use of unreleased frontier models, insider misuse of privileged access and inadequate monitoring of unsafe behaviour.

The research project builds on previous work by Dr Jonas Freund and Aidan Homewood, and examines four key questions: what risks internal audit should address, how companies should source these functions, how frequently audits should occur, and what information auditors should access.

This work comes at a critical time as AI governance frameworks evolve rapidly. The EU AI Act includes provisions for systemic risk assurance, while major AI developers have made voluntary commitments to enhanced safety measures. Internal audit offers a potentially valuable means of assurance within an organisation, giving the Board of Directors and other key decision-makers a more accurate understanding of risk levels and risk management practices.

The Centre for Legal Technology's involvement in this project reflects its commitment to addressing the legal dimensions of emerging technologies, demonstrating how legal expertise can contribute to practical governance solutions for frontier AI development.

The AI Governance Taskforce aims to publish open-access pre-prints of its research findings in December 2025.

To learn more about future AI Governance Taskforce cohorts, please visit the Arcadia Impact website or contact Taskforce Lead Ben R Smith at ben@arcadiaimpact.org.