AI Governance Organisations
The organisations shaping AI governance worldwide — regulatory bodies, standards authorities, research institutions, and civil society.
EU AI Office
European Union
The central EU body responsible for overseeing the implementation of the EU AI Act, established within the European Commission. It coordinates enforcement among national market surveillance authorities, supervises providers of general-purpose AI models, supports the development of harmonised standards and codes of practice, and maintains the EU database of high-risk AI systems. The AI Office is the primary point of engagement for organisations seeking clarity on compliance obligations under the Act.
European Data Protection Board
European Union
The independent body ensuring consistent application of GDPR across EU member states. The EDPB issues binding decisions in cross-border cases, adopts guidelines on GDPR interpretation, and coordinates enforcement among national supervisory authorities. Its opinions on AI — including on facial recognition, AI in employment, and AI in healthcare — have direct compliance implications for any AI system processing personal data of EU data subjects.
Ofcom
United Kingdom
The UK's communications regulator, designated as the lead regulator under the Online Safety Act 2023. Ofcom's responsibilities now extend to algorithmic accountability for online platforms, including obligations on recommender systems. It is developing codes of practice that will set binding standards for how platforms design and deploy algorithms affecting users. AICI monitors Ofcom's AI-related guidance as a reference for how sector regulators can operationalise AI governance principles.
Information Commissioner's Office
United Kingdom
The UK's data protection authority. The ICO has published extensive guidance on AI, including its AI and Data Protection Risk Toolkit, guidance on explaining AI decisions under the UK GDPR, and its position on web scraping for AI training. As the UK's primary AI accountability regulator in the data domain, its enforcement decisions shape how organisations approach automated decision-making, data minimisation in model training, and data subject rights in AI contexts.
ISO/IEC JTC 1/SC 42 — Artificial Intelligence
International
The ISO/IEC joint technical subcommittee responsible for international AI standardisation. SC 42 developed ISO/IEC 42001 (AI management systems), ISO/IEC 23894 (AI risk management guidance), ISO/IEC 24028 (trustworthiness in AI), and is developing standards on bias, transparency, explainability, and AI system testing. Its work programme defines the technical vocabulary and requirements that underpin certification schemes globally. Participation is open to national standards bodies.
IEEE Standards Association
International
The IEEE Standards Association develops industry standards through an open, consensus-based process. Its AI-related work includes the IEEE P7000 series addressing ethical concerns in system design, IEEE 7001 on transparency, IEEE 7010 on wellbeing metrics, and IEEE 2857 on privacy engineering. The IEEE CertifAIEd programme offers conformity assessment against ethically aligned design criteria. AICI considers IEEE standards particularly relevant for technical teams embedding ethics into development processes, as opposed to compliance teams managing regulatory obligations.
National Institute of Standards and Technology
United States
The US federal agency responsible for developing technology standards and measurement science. NIST's AI Safety Institute (AISI), established under the 2023 Executive Order, leads US government AI safety research. NIST developed the AI Risk Management Framework (AI RMF 1.0), the AI RMF Playbook, and is developing a companion Generative AI Profile. Its work is technically rigorous and widely referenced — including by the EU AI Act's standardisation mandate — making NIST a genuinely international influence despite being a US national body.
Organisation for Economic Co-operation and Development
International
The OECD hosts the AI Policy Observatory (OECD.AI), the world's most comprehensive repository of national AI policies, strategies, and regulatory developments. Its 2019 AI Principles — endorsed by G20 — remain the foundational international reference for AI governance. The OECD monitors and classifies AI governance measures across 70+ countries, making it the definitive source for comparative analysis of how different jurisdictions are regulating AI. AICI uses OECD data extensively in its intelligence work.
UN Secretary-General's Advisory Body on AI
International
Established by UN Secretary-General António Guterres in October 2023, the advisory body published its final report "Governing AI for Humanity" in September 2024. It recommended establishing an International Scientific Panel on AI (modelled partly on the IPCC), an AI Office within the UN Secretariat, and a Global Fund for AI capacity development. The report acknowledges that existing international institutions were not designed for AI governance and proposes new institutional architecture. Whether these recommendations translate into action depends on the political will of member states.
Global Partnership on Artificial Intelligence
International
GPAI is a multi-stakeholder initiative bringing together governments, international organisations, civil society, and researchers to guide the responsible development and use of AI. Founded in 2020, it has 29 member countries and operates working groups on data governance, the future of work, responsible AI, and AI commercialisation. GPAI's technical work is supported by centres of expertise in Montreal and Paris, with its secretariat hosted at the OECD. It merged its activities with the OECD's AI work programme in 2024.
UK AI Safety Institute
United Kingdom
Established in November 2023 and announced alongside the UK's AI Safety Summit at Bletchley Park, the AISI is the world's first government body dedicated to AI safety research and evaluation. It has developed evaluation frameworks for advanced AI models, conducted testing of frontier models from major labs, and is building international partnerships with counterpart bodies in the US, EU, Japan, and elsewhere. AICI tracks AISI's published evaluations as primary source material on the technical safety characteristics of deployed AI systems.
US AI Safety Institute
United States
Established within NIST under the October 2023 Executive Order, the US AISI conducts safety evaluations of AI models, develops measurement science for AI safety, and coordinates with international counterparts. It signed a Memorandum of Understanding with the UK AISI in April 2024 — the first formal bilateral AI safety agreement. Its test and evaluation frameworks for generative AI are among the most technically detailed government-produced safety references available. AICI monitors its output as authoritative on frontier model risk.
Alan Turing Institute
United Kingdom
The UK's national institute for data science and artificial intelligence, established in 2015. The Turing produced the influential "Understanding Artificial Intelligence Ethics and Safety" guide and the AI Ethics and Governance Body of Knowledge. Its Centre for Emerging Technology and Security (CETaS) focuses on national security implications of AI. The institute convenes researchers across 13 university partners and publishes widely on AI fairness, accountability, transparency, and safety — work AICI considers essential reading for practitioners.
AI Now Institute
United States
An independent research institute studying the social implications of AI, founded in 2017 at New York University. AI Now's annual reports have consistently identified structural problems in AI governance that industry and governments have later had to address, including algorithmic discrimination in public services, labour impacts of AI, and the concentration of AI power. It is one of the few research institutions that explicitly adopts a structural critique of the field rather than a technical-solutionist approach. AICI considers this perspective essential to honest AI governance work.
Ada Lovelace Institute
United Kingdom
An independent research and deliberative body with a mission to ensure data and AI work for people and society, established by the Nuffield Foundation in 2018. The Ada Lovelace Institute has published formative work on facial recognition, algorithmic accountability, foundation models, and the regulatory gap analysis for AI in the UK. Its approach combines technical analysis with public deliberation and legal research. AICI regards its reports as among the most practically useful for organisations designing governance structures.
Partnership on AI
International
A multi-stakeholder non-profit founded in 2016 by Amazon, Apple, DeepMind, Facebook, Google, IBM, and Microsoft, later expanded to include civil society and academic partners. Partnership on AI has published frameworks on AI incident reporting, responsible publication norms for AI research, and the use of AI in hiring. It hosts the AI Incident Database. AICI notes that industry-led bodies serve a different function from independent research institutions — their work is useful but must be read with awareness of whose interests shape the agenda.
AlgorithmWatch
European Union
A non-profit research and advocacy organisation based in Berlin, focused on the social and political implications of automated decision-making. AlgorithmWatch monitors how algorithms affect people's lives, documents cases of algorithmic harm, and engages in EU and national policy processes. Its ADM Watch project tracked the deployment of automated decision-making systems across Europe. It has been a consistent voice pushing for stronger accountability provisions in the EU AI Act — often pointing to gaps between the regulation's ambitions and its actual enforcement mechanisms.
Access Now
International
An international civil society organisation defending the digital rights of people around the world. Access Now has been active in EU AI Act negotiations, consistently advocating for the prohibition of mass surveillance AI uses and stronger protections against discriminatory AI systems. It has supported campaigns such as #BanTheScan challenging biometric mass surveillance in public spaces. AICI considers civil society organisations like Access Now essential to the AI governance ecosystem: they represent interests that industry and government bodies structurally tend to underweight.
Council of Europe — Committee on Artificial Intelligence
Council of Europe
The Council of Europe's Committee on Artificial Intelligence (CAI) drafted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law — opened for signature in September 2024. Unlike EU instruments, this treaty is open to non-EU countries including the United States, Canada, Israel, Japan, and Australia, making it potentially the broadest legally binding international AI instrument. It focuses on protecting human rights, democracy, and the rule of law rather than technical risk classification. AICI regards this as a significant development for organisations operating globally.