Fine-Tuning Large Language Models: From Theory to Practice
Schedule
Tue Mar 03 2026, 02:00 pm to 04:00 pm (UTC-06:00)
Location
John Crerar Library - Kathleen A. Zar Room | Chicago, IL
About this Event
Pre-trained large language models demonstrate remarkable general capabilities, but achieving consistent domain-specific behavior, specialized output formats, or deeply embedded knowledge requires moving beyond prompting. Fine-tuning adapts these foundation models to specific tasks, yet the computational demands of updating billions of parameters have historically placed this technique out of reach for most practitioners. This workshop bridges theory and practice, guiding participants from the fundamentals of LLM adaptation through modern parameter-efficient methods that make fine-tuning accessible on consumer hardware. We examine the LIMA hypothesis on data quality, numerical precision trade-offs (FP32, FP16, BF16), and the mechanics of Low-Rank Adaptation (LoRA) and QLoRA. In live-coding sessions, participants will implement a complete fine-tuning pipeline using Hugging Face Transformers and PEFT, covering data preparation, hyperparameter selection, and the critical distinction between training loss and actual model quality. Participants will also learn when fine-tuning is the right approach versus prompting or retrieval-augmented generation. Attendees will leave with a working fine-tuned model, practical debugging strategies, and a decision framework for production deployment.
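To preview the core idea behind the LoRA mechanics covered above: rather than updating a full weight matrix, LoRA trains two small low-rank factors whose product is added to the frozen weight. The sketch below is a minimal plain-Python illustration of that arithmetic (dimensions, the zero/constant initializations, and the `alpha` scaling convention are illustrative; it is not tied to the PEFT library's internals):

```python
# LoRA in miniature: instead of updating a full weight matrix W
# (d_out x d_in), train two small matrices B (d_out x r) and A (r x d_in)
# with r << min(d_out, d_in), and use W + (alpha / r) * B @ A.
# All values here are illustrative, not from any real model.

def matmul(X, Y):
    """Plain-Python matrix multiply for small examples."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), the weight LoRA effectively applies."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

d_out, d_in, r = 4, 6, 2
W = [[0.0] * d_in for _ in range(d_out)]   # frozen base weight (toy values)
B = [[0.0] * r for _ in range(d_out)]      # B starts at zero in LoRA
A = [[0.1] * d_in for _ in range(r)]       # stand-in for a small random init

# Because B = 0, the adapter starts as a no-op: effective weight equals W.
W_eff = lora_effective_weight(W, A, B, alpha=16, r=r)
assert W_eff == W

# Parameter savings: a full update vs. the low-rank update.
full_params = d_out * d_in          # 4 * 6 = 24
lora_params = d_out * r + r * d_in  # 4*2 + 2*6 = 20; the gap grows fast
```

Only `A` and `B` receive gradients, which is why the trainable-parameter count, and hence optimizer memory, shrinks so dramatically at transformer scale.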
Learning Objectives
- Explain when fine-tuning is appropriate versus prompt engineering or retrieval-augmented generation.
- Analyze numerical precision formats and their impact on memory requirements and training stability.
- Implement a LoRA-based fine-tuning pipeline with proper data formatting, hyperparameter configuration, and checkpointing.
- Diagnose common failure modes including overfitting, catastrophic forgetting, and mode collapse through validation metrics.
- Evaluate fine-tuned model quality using validation loss, benchmark performance, and qualitative assessment.
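As a taste of the precision-and-memory analysis in the objectives above, here is a back-of-envelope calculation of weight memory per format (the 7B parameter count is illustrative; real training also needs memory for gradients, optimizer state, and activations):

```python
# Bytes per parameter for common training precisions.
# FP16 and BF16 are both 2 bytes; BF16 trades mantissa bits for
# FP32's exponent range, which tends to improve training stability.
BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "BF16": 2}

def weight_memory_gib(n_params, fmt):
    """Memory for model weights alone, in GiB."""
    return n_params * BYTES_PER_PARAM[fmt] / 1024**3

n = 7_000_000_000  # illustrative 7B-parameter model
for fmt in ("FP32", "FP16", "BF16"):
    print(f"{fmt}: {weight_memory_gib(n, fmt):.1f} GiB")
```

Halving bytes per parameter halves weight memory outright, which is why mixed-precision and quantized methods like QLoRA are what bring fine-tuning within reach of consumer GPUs.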
Level: Intermediate
Prerequisites: Basic Python programming, familiarity with PyTorch or TensorFlow, and conceptual understanding of transformer architectures.
Where is it happening?
John Crerar Library - Kathleen A. Zar Room, 5730 South Ellis Avenue, Chicago, United States
Price: USD 0.00