<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Everything I Learned Training Frontier Small Models — Maxime Labonne, Liquid AI</title>
        <link>https://video.ut0pia.org/videos/watch/6c1e438e-b987-4ad9-b009-4cd02870b66a</link>
        <description>A new class of small models is emerging that can reliably follow instructions and call tools while running on-device in under 1 GB of memory. In this talk, we'll break down how to post-train frontier small models using the LFM2.5 recipe: on-policy preference alignment, agentic reinforcement learning, and curriculum training with iterative model merging. We'll cover training challenges unique to the 1B scale, such as doom loops and capability interference, and how to fix them. The goal is to give you a concrete playbook to fine-tune and deploy small models for your own use cases, from structured data extraction to multi-turn tool use. Speaker info: https://x.com/maximelabonne, https://www.linkedin.com/in/maxime-labonne/, https://github.com/mlabonne</description>
        <lastBuildDate>Wed, 29 Apr 2026 19:36:47 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>PeerTube - https://video.ut0pia.org</generator>
        <image>
            <title>Everything I Learned Training Frontier Small Models — Maxime Labonne, Liquid AI</title>
            <url>https://video.ut0pia.org/lazy-static/avatars/0287a09a-aae7-4840-9843-b416426e7046.webp</url>
            <link>https://video.ut0pia.org/videos/watch/6c1e438e-b987-4ad9-b009-4cd02870b66a</link>
        </image>
        <copyright>All rights reserved, unless otherwise stated in the terms at https://video.ut0pia.org/about or in any licenses granted by each content's rights holder.</copyright>
        <atom:link href="https://video.ut0pia.org/feeds/video-comments.xml?videoId=6c1e438e-b987-4ad9-b009-4cd02870b66a" rel="self" type="application/rss+xml"/>
    </channel>
</rss>