Learning Resources
Curated courses, tutorials, books, and videos to accelerate your AI journey.
Building makemore: Neural Networks for NLP — Andrej Karpathy
YouTube · Andrej Karpathy
Andrej Karpathy's lecture series builds neural language models from scratch in Python — starting from bigram character models and progressing through multilayer perceptrons, batch normalisation, and WaveNet-style architectures. The series culminates in a minimal GPT implementation (nanoGPT) built from first principles. Karpathy was previously Director of AI at Tesla and a founding member of OpenAI; his ability to explain complex concepts with working code at each step is exceptional. For governance professionals: understanding what a language model actually is — as code, not metaphor — changes how you read regulation and compliance documents that describe these systems.
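To make the starting point concrete, here is a minimal sketch (not Karpathy's code) of the counting-based bigram character model the first makemore lecture begins with. The five-name word list is a toy stand-in for the roughly 32,000-name dataset the lecture uses; "." marks both the start and end of a word.

```python
import random
from collections import defaultdict

# Toy stand-in for the lecture's names dataset.
words = ["emma", "olivia", "ava", "isabella", "sophia"]

# Count how often each character follows each other character,
# with "." as a start/end-of-word token.
counts = defaultdict(lambda: defaultdict(int))
for w in words:
    chars = ["."] + list(w) + ["."]
    for a, b in zip(chars, chars[1:]):
        counts[a][b] += 1

def sample_name(rng):
    """Walk the bigram table, picking each next character in proportion
    to how often it followed the current one in the training words."""
    out, ch = [], "."
    while True:
        nexts = list(counts[ch])
        weights = [counts[ch][n] for n in nexts]
        ch = rng.choices(nexts, weights=weights)[0]
        if ch == ".":           # end-of-word token: stop sampling
            return "".join(out)
        out.append(ch)

name = sample_name(random.Random(0))  # a made-up, name-like string
```

The rest of the series replaces this lookup table with neural networks of increasing depth, but the task stays the same: predict the next character from the ones before it.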
Attention in transformers, visually explained | Chapter 6, Deep Learning
YouTube · 3Blue1Brown
3Blue1Brown gives a visual explanation of the attention mechanism in transformer neural networks. The video covers the query, key, and value matrices, how attention scores are computed, multi-head attention, and the intuition behind why attention lets transformers model long-range dependencies. It is the clearest visual explanation of the core mechanism underlying every major language model. AICI recommends it to anyone who encounters "attention" in AI documentation and wants to understand what it actually means — not as metaphor, but as computation.
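The computation the video visualises can be sketched in a few lines. This is single-head scaled dot-product attention in the standard formulation, softmax(QK^T / sqrt(d_k))V; the hand-written Q, K, V matrices below are illustrative placeholders, whereas in a real transformer they come from learned linear projections of the token embeddings.

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d_k = len(K[0])
    # scores[i][j]: dot-product similarity of query i to key j,
    # scaled by sqrt(d_k) so the softmax does not saturate.
    scores = [[sum(q * k for q, k in zip(qrow, krow)) / math.sqrt(d_k)
               for krow in K] for qrow in Q]
    weights = [softmax(row) for row in scores]  # each row sums to 1
    # Each output vector is a weighted mix of the value vectors.
    return [[sum(w * v for w, v in zip(wrow, col)) for col in zip(*V)]
            for wrow in weights]

# Toy example: 3 tokens with 2-dimensional queries, keys, and values.
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out = attention(Q, K, V)  # 3 output vectors, one per token
```

Because the softmax weights are positive and sum to one, each output is a convex combination of the value vectors: this mixing across all positions, regardless of distance, is what lets attention capture long-range dependencies.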