Episodes
-
We talk about DeepSeek-R1, a novel language model with enhanced reasoning capabilities achieved through reinforcement learning (RL). The researchers explored training methodologies, including DeepSeek-R1-Zero which uniquely utilizes large-scale RL without initial supervised fine-tuning (SFT), demonstrating emergent reasoning behaviors. To improve readability and further boost performance, DeepSeek-R1 incorporates a multi-stage training process with cold-start data before RL and achieves results comparable to OpenAI's o1-1217 on reasoning tasks. Furthermore, the paper discusses the distillation of DeepSeek-R1's reasoning abilities into smaller, more efficient models, showcasing their strong performance on various benchmarks.
-
In this first episode we dive into the paper from Anthropic called "On the Biology of a Large Language Model," where the authors present a detailed investigation into the inner workings of the large language model Claude 3.5 Haiku, employing a methodology centered around attribution graphs to understand how it processes information and generates responses. Through various case studies, the authors explore phenomena such as multi-step reasoning, planning in poetry generation, and multilingual understanding, uncovering specific circuit components and their functions. The research also examines the model's ability to handle harmful requests, its tendencies toward hallucination, and the faithfulness of its chain-of-thought reasoning. Ultimately, this work aims to reverse engineer the mechanisms within advanced language models to improve our understanding and assess their capabilities, while also acknowledging the limitations of current interpretability methods.
Here is the full paper:
https://transformer-circuits.pub/2025/attribution-graphs/biology.html
-
Are there any episodes missing?