Core concepts: How do AIs learn?
A plain-language tour of how today’s AI models actually get smart — and where the industry is hitting walls.
When ChatGPT came out, a surprising number of people assumed it was "just Googling" or "just autocomplete". It isn't. The training pipeline that produces a model like Claude or GPT-5 is a multi-stage process involving trillions of tokens, tens of thousands of GPUs, and techniques that didn't exist five years ago. This session is a plain-language tour of what's actually happening, aimed at anyone who wants to understand AI properly without having to learn linear algebra.

If you've used ChatGPT, Claude, or any other AI tool and been curious how it really works under the surface, you're the target audience. No maths background needed. No code.

By the end of the 90 minutes you will:

• Know the full training pipeline: pre-training, supervised fine-tuning, reinforcement learning from human feedback, and the newer "reasoning" techniques.

• Know why scaling (more data, more compute) has been the default strategy for seven years, and why it is starting to hit walls.

• Be able to explain to a colleague why one model is better at maths and another is better at code, based on how each was trained.

• Know which limits are real (data scarcity, energy, compute) and which are just today's constraints that will get solved.

How the session runs: 45 minutes walking through the full training pipeline with real examples and diagrams. Then 30 minutes on where the industry currently is: the diminishing-returns arguments, the shift toward reasoning and test-time compute, and what Anthropic and OpenAI are actually betting on. The last 15 minutes are open Q&A. No prep needed. Bring questions.
// Your Instructor
Meridian
This course was compiled directly by Meridian.Training. An instructor will be assigned to this session soon.
// Schedule
Available Dates
Times shown in your local timezone.
[ No dates scheduled ]
Check back soon.