Office Hours

Live sessions, documented for the cohort

Office Hours are live cohort sessions where we go deeper on roadmap topics, debug together, and discuss what learners are working on. Each session is documented here as an expanded write-up — not just meeting notes, but context and explanations you can reference later.

To join office hours, join the Fetchlens.ai Discord server; that is where we share the meeting link, times, and reminders.

Important: AI-generated content notice

Office hours transcripts are used to create AI-generated podcast lessons and other learning resources published on this site, including The Journey. All learner names are anonymized — no real names are used. Voices and conversation in podcast episodes are entirely AI-generated. By participating in office hours, you consent to your contributions being used in this way.

Sessions

The sessions below are pre-cohort (the cohort starts 1 May 2026). They’re kept here for reference; the same roadmap topics apply when you follow along later.

Pre-cohort · Office Hours

13 March 2026

GitHub collaboration (PRs, conflicts, rebasing), Qwen3 inference concepts (temperature, chat templates, speculative decoding, GGUF vs SafeTensors, quantization/precision), cohort-based learning, and Unsloth efficiency.

Read session notes

Pre-cohort · Office Hours

27 March 2026

Transformer architecture deep dive — why attention replaced LSTMs, self-attention and multi-head attention, decoder-only LLMs, dense vs MoE models, benchmarking, the three pillars of model development, data parallelism (DDP), and Project Watch: Speedrun and Auto Research GPT.

Read session notes

Pre-cohort · Office Hours

10 April 2026

Learner LAN stock game (scope and shipping), Claude Code harness leak and safe learning, KV cache caveats, benchmark literacy, Gemma / Matryoshka, nanoGPT speedrun and Tyler Romero’s worklog, DDP and all-reduce, NVLink and DeepSeek-style optimization, Muon optimizer, RoPE/RoFormer in code, and walking through a distributed training script.

Read session notes