Core concepts: Open-source vs proprietary models
Using Claude for everything gets expensive. Open-source alternatives, and when one is the better call.
The default assumption right now is "if I need AI, I call Claude or GPT". For a lot of use cases that's the right call, but for many it's burning money. Open-source models have crossed a threshold in the last eighteen months: they're genuinely useful for everything from summarisation to code generation to agentic workflows, and the cost difference is often 10x to 100x.

This session is aimed at anyone already spending money on AI APIs, or seriously considering building an AI-powered system. If you've looked at an Anthropic or OpenAI bill and winced, or if you're planning something that would cost a fortune to run on the big providers, this session is for you.

By the end of the 90 minutes you will:
• Know the open-source models that matter right now: Llama, Qwen, DeepSeek, Mistral, Gemma. What each is good at, and where each falls short.
• Have one running on your own laptop, with no API keys and no per-token charges.
• Know the three main ways to use open models: local (Ollama, LM Studio), hosted inference (Groq, Together, Fireworks), and self-hosted, and when to use each.
• Have a decision framework for "should this be Claude, or can it be open-source?" based on task, volume, and quality bar.

How the session runs: the first 30 minutes cover the landscape, which open models matter and what they can and can't do. Then we install Ollama together and run Llama on your laptop. Mid-session: a side-by-side cost comparison for a real task on Claude versus an open-source equivalent, with real numbers. In the back half, you pick something you'd normally send to Claude and we benchmark an open-source alternative together. Bring your laptop, charged.
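The mid-session cost comparison comes down to simple per-token arithmetic. Here is a minimal Python sketch of the shape of that calculation. The workload and all per-million-token prices below are illustrative placeholders, not any provider's actual rate card; in the session we plug in current published prices.

```python
# Sketch of a monthly cost comparison between a frontier API and a
# hosted open-source model. All prices are ILLUSTRATIVE ASSUMPTIONS,
# not real rate cards -- check each provider's pricing page.

def monthly_cost(requests_per_day, tokens_in, tokens_out,
                 price_in_per_m, price_out_per_m, days=30):
    """Dollars per month, given per-million-token input/output prices."""
    total_in = requests_per_day * tokens_in * days
    total_out = requests_per_day * tokens_out * days
    return (total_in * price_in_per_m + total_out * price_out_per_m) / 1_000_000

# Hypothetical workload: 10k summarisation calls/day, ~2k tokens in, ~300 out.
frontier = monthly_cost(10_000, 2_000, 300,
                        price_in_per_m=3.00, price_out_per_m=15.00)
open_hosted = monthly_cost(10_000, 2_000, 300,
                           price_in_per_m=0.20, price_out_per_m=0.20)

print(f"Frontier API:      ${frontier:,.0f}/month")     # $3,150/month
print(f"Hosted open model: ${open_hosted:,.0f}/month")  # $138/month
print(f"Ratio: {frontier / open_hosted:.0f}x")          # ~23x
```

Even with made-up prices, the structure of the result is the point: at meaningful volume, the multiplier dominates, which is exactly the gap the session's real-numbers comparison measures.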
// Your Instructor
Meridian
This course was compiled directly by Meridian.Training. The right instructor will be assigned to this session soon.
// Schedule
Available Dates
Times shown in your local timezone.
[ No dates scheduled ]
Check back soon.