Learning Resources

Curated courses, tutorials, books, and videos to accelerate your AI journey.

Podcast Free

80,000 Hours Podcast — AI Safety Episodes

Other · 80,000 Hours

The 80,000 Hours Podcast, focused on high-impact career decisions, has produced the most substantive long-form interview series on AI safety available anywhere. Episodes with Paul Christiano (alignment research), Dario Amodei (Anthropic), Stuart Russell (CHAI), Eliezer Yudkowsky (MIRI), and dozens of other researchers cover technical alignment, governance, and the landscape of AI safety work in detail. Individual episodes run three to five hours and are among the most information-dense public resources on AI risk. AICI recommends the episodes with Paul Christiano on reward modelling and with Daniel Kokotajlo (former OpenAI) on AI timelines as essential listening for anyone working on governance of frontier AI systems.

Advanced · 3–5 hours each
Article Free

Foundation Models: Opportunities, Risks and Mitigations

Other · Ada Lovelace Institute

The Ada Lovelace Institute's 2023 report on foundation models examines the regulatory gap between existing EU and UK frameworks and the capabilities and deployment patterns of large foundation models. It argues that the EU AI Act's risk-based approach was designed before foundation models became dominant and requires adaptation. The report identifies specific governance challenges — accountability gaps when foundation models are deployed through APIs by third parties, difficulty of pre-deployment risk assessment for general-purpose systems, and the challenge of monitoring foundation model behaviour across diverse downstream applications. AICI considers it the most practically useful analysis of the governance challenges specific to foundation models.

Intermediate
Article Free

EU AI Act — Official Legislative Text (EUR-Lex)

Other · European Union

The full text of Regulation (EU) 2024/1689 — the Artificial Intelligence Act — as published in the Official Journal of the European Union. AICI recommends reading the regulation directly rather than through summaries. Summaries flatten complexity and often omit the definitions, recitals, and conditional clauses that determine whether a specific AI system is in scope and what obligations apply. Reading Article 3 (definitions) and Annex III (the list of high-risk AI systems) carefully before relying on any compliance guidance is non-negotiable. The regulation entered into force on 1 August 2024.

Advanced
Article Free

NIST AI Risk Management Framework Playbook

Other · NIST

The companion document to the NIST AI RMF, providing concrete suggested actions for implementing each of the framework's four functions (Govern, Map, Measure, Manage) across the AI lifecycle. Where the AI RMF is conceptual, the Playbook is operational — it gives practitioners specific things to do. It is freely available and regularly updated. AICI uses the Playbook as a reference framework in professional development programmes. Its value is not as a compliance checklist but as a structured approach to building internal AI governance capacity that can be adapted to different organisational contexts and regulatory obligations.

Intermediate
Tool Free

AI Incident Database

Other · Partnership on AI

The AI Incident Database, maintained by the Partnership on AI, is a repository of documented cases where AI systems caused harm or behaved in unexpected ways in real-world deployments. As of 2024 it contains over 700 incidents. The database is searchable by harm type, affected population, deployer, and AI system type. AICI regards it as an essential reference for AI risk assessment — not because every incident is equally serious, but because the database provides the empirical foundation for claims about AI risk that would otherwise rely on hypothetical scenarios. Before conducting an AI risk assessment, review the incident database for the deployment context you are assessing.

Beginner
Tool Free

Ollama — Run Large Language Models Locally

Other · Ollama

Ollama is an open-source tool for running large language models on local hardware — Mac, Linux, and Windows. It handles model downloading, quantisation, and serving through a simple CLI and API compatible with the OpenAI API format. With Ollama, a practitioner can run Llama 3.3, Mistral, Phi-4, or dozens of other models locally without cloud API costs or data leaving the device. AICI recommends Ollama as the most accessible way for AI governance practitioners to develop firsthand understanding of how language models behave — including their failure modes, their sensitivity to prompting, and the gap between benchmark performance and real-world utility. You cannot govern what you have not used.
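As a minimal sketch of what "API compatible with the OpenAI API format" means in practice: the snippet below builds a chat-completions request against a locally running Ollama server. The host and port (localhost:11434) and the /v1/chat/completions path follow Ollama's documented defaults, but the model name "llama3.3" is an assumption — it depends on which models you have pulled locally.

```python
import json
import urllib.request

# Ollama's default local endpoint, exposing an OpenAI-compatible API.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request in the OpenAI chat-completions format."""
    payload = {
        "model": model,  # assumed model name; must match a locally pulled model
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("llama3.3", "Summarise the EU AI Act in one sentence.")
# urllib.request.urlopen(req) would send it — this only works with Ollama running.
```

Because the request shape matches the OpenAI format, existing client code can often be pointed at the local server by changing only the base URL — no data leaves the device.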

Beginner
Video Free

Building makemore: Neural Networks for NLP — Andrej Karpathy

YouTube · Andrej Karpathy

Andrej Karpathy's lecture series builds neural language models from scratch in Python — starting from bigram character models and progressing through multilayer perceptrons, batch normalisation, and WaveNet-style architectures. The series culminates in a minimal GPT implementation (nanoGPT) built from first principles. Karpathy was previously Director of AI at Tesla and a founding member of OpenAI; his ability to explain complex concepts with working code at each step is exceptional. For governance professionals: understanding what a language model actually is — as code, not metaphor — changes how you read regulation and compliance documents that describe these systems.
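The bigram character model the series starts from can be sketched in a few lines: count character-to-character transitions in a corpus, then normalise the counts into conditional probabilities. The word list below is a stand-in for illustration, not Karpathy's names dataset.

```python
from collections import Counter

# Toy corpus standing in for the names dataset used in the lectures.
words = ["emma", "olivia", "ava", "isabella", "sophia"]

# Count every (current char, next char) transition, with start/end tokens.
counts = Counter()
for w in words:
    chars = ["<s>"] + list(w) + ["</s>"]
    for a, b in zip(chars, chars[1:]):
        counts[(a, b)] += 1

def prob(a: str, b: str) -> float:
    """P(next char = b | current char = a), estimated from raw counts."""
    total = sum(n for (x, _), n in counts.items() if x == a)
    return counts[(a, b)] / total if total else 0.0
```

Sampling from these conditional distributions character by character is already a (very weak) language model — the rest of the series is about replacing the count table with increasingly capable neural networks.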

Advanced · ~10 hours
Video Free

Attention in transformers, visually explained | Chapter 6, Deep Learning

YouTube · 3Blue1Brown

3Blue1Brown's visual explanation of the attention mechanism in transformer neural networks. This video covers query, key, and value matrices, how attention scores are computed, multi-head attention, and the intuition behind why attention enables transformers to model long-range dependencies. It is the clearest visual explanation of the core mechanism underlying every major language model. AICI recommends it to anyone who encounters "attention" in AI documentation and wants to understand what it actually means — not as metaphor, but as computation.
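The computation the video visualises can be written down compactly. Below is a minimal NumPy sketch of scaled dot-product attention for a single head, with no masking; the shapes and variable names are illustrative, not taken from any particular implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V — one attention head, no masking."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, d_k = 8
out, w = attention(Q, K, V)
```

Each output row is a weighted mixture of the value vectors, with weights determined by query-key similarity — this is the mechanism that lets any token attend to any other, regardless of distance.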

Intermediate · 26 minutes
Course Free

Practical Deep Learning for Coders

Other · Jeremy Howard, Rachel Thomas

Jeremy Howard and Rachel Thomas's course at fast.ai takes a top-down, practical approach to deep learning: start with working code that does useful things, then learn the theory that explains why it works. The full course (Part 1 and Part 2) covers neural networks from basics through to diffusion models and large language models. The pedagogical approach is unusual and effective for practitioners with programming experience but limited mathematics. AICI recommends this as the most accessible rigorous introduction to deep learning for governance professionals who need genuine technical grounding.

Intermediate · ~30 hours
Course Free

Short Courses — DeepLearning.AI

Other · DeepLearning.AI

DeepLearning.AI's short course catalogue covers practical AI development topics in one-to-two-hour modules: prompt engineering, LangChain, building with LLM APIs, AI agents, RAG systems, and more. The courses are taught by practitioners including Andrew Ng and leading researchers from Anthropic, OpenAI, and Google. They are free to audit. AICI recommends them for technical practitioners who need to understand how AI systems are built from the inside — which is a prerequisite to governing them from the outside. Understanding what a vector database is, how RAG introduces failure modes, or what function calling means for AI agency is increasingly necessary for serious AI governance work.

Intermediate · 1–2 hours each
Book

The Tech Coup: How to Save Democracy from Silicon Valley

Other · Marietje Schaake

Marietje Schaake's 2024 book argues that the concentration of AI and platform power in a small number of US technology companies represents a structural threat to democratic governance — not a future risk, but a present condition. Drawing on her decade as a European Parliament member working on digital policy, she makes the case for why voluntary commitments and self-regulation have failed, what effective binding regulation looks like, and why Europe's approach to AI governance — despite its imperfections — is closer to the right answer than the US hands-off model. Essential reading for anyone working on EU AI Act compliance who wants to understand the political economy behind the regulation.

Intermediate
Book

Unmasking AI: My Mission to Protect What Is Human in a World of Machines

Other · Joy Buolamwini

Joy Buolamwini's 2023 book is part memoir, part technical analysis, part manifesto. Starting from her discovery that facial recognition systems failed to detect her face until she wore a white mask, she traces how AI bias is created, how it is obscured, and what it takes to contest it. The book covers Gender Shades, the regulatory debates that followed, and the broader question of who AI is built for. Buolamwini's writing is accessible without being simplified — she explains the technical mechanisms of bias clearly while maintaining the human stakes at the centre.

Beginner
Book

Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence

Other · Kate Crawford

Kate Crawford's 2021 book examines AI as a material system rather than a software abstraction — tracing the physical infrastructure of AI from lithium mines in Nevada to Amazon fulfilment centres to facial recognition deployments in schools. The argument is that AI is not an intangible technology but one built on extraction: of minerals, of labour, of data, of attention. For AI governance practitioners, this book provides the structural framing that most compliance frameworks omit: who bears the costs of AI development and who captures the benefits are governance questions as much as technical ones.

Intermediate
Book

Human Compatible: Artificial Intelligence and the Problem of Control

Other · Stuart Russell

Stuart Russell's 2019 book is the clearest technical statement of why AI alignment is a genuine research problem, written by the author of the standard AI textbook. Russell argues that the conventional model of AI — specify an objective, build a system that maximises it — is fundamentally broken because it is impossible to fully specify human values in a machine-readable form. His proposed alternative: AI systems designed to be uncertain about human preferences and to seek human input rather than act autonomously on assumed objectives. Required reading before engaging with EU AI Act human oversight requirements — the regulation's language reflects precisely the concerns Russell articulates.

Intermediate