Building LLMs from the Ground Up: A 3-hour Coding Workshop

REFERENCES:
1. Build an LLM from Scratch book: https://amzn.to/4fqvn0D
2. Build an LLM from Scratch repo: https://github.com/rasbt/LLMs-from-scratch
3. GitHub repository with the workshop code: https://github.com/rasbt/LLM-workshop-2024
4. Lightning Studio for this workshop: https://lightning.ai/lightning-ai/studios/llms-from-the-ground-up-workshop?view=public
5. LitGPT: https://github.com/Lightning-AI/litgpt

DESCRIPTION:
This tutorial is aimed at coders interested in understanding the building blocks of large language models (LLMs), how LLMs work, and how to code them from the ground up in PyTorch. We will kick off this tutorial with an introduction to LLMs, recent milestones, and their use cases. Then, we will code a small GPT-like LLM ourselves, including its data input pipeline, core architecture components, and pretraining code. After understanding how everything fits together and how to pretrain an LLM, we will learn how to load pretrained weights and finetune LLMs using open-source libraries. (Minimal code sketches of these steps follow the outline below.)

---

To support this channel, please consider purchasing a copy of my books: https://sebastianraschka.com/books/

---

https://twitter.com/rasbt
https://linkedin.com/in/sebastianraschka/
https://magazine.sebastianraschka.com

---

OUTLINE:
0:00 – Workshop overview
2:17 – Part 1: Intro to LLMs
9:14 – Workshop materials
10:48 – Part 2: Understanding LLM input data
23:25 – A simple tokenizer class
41:03 – Part 3: Coding an LLM architecture
45:01 – GPT-2 and Llama 2
1:07:11 – Part 4: Pretraining
1:29:37 – Part 5.1: Loading pretrained weights
1:45:12 – Part 5.2: Pretrained weights via LitGPT
1:53:09 – Part 6.1: Instruction finetuning
2:08:21 – Part 6.2: Instruction finetuning via LitGPT
2:26:45 – Part 6.3: Benchmark evaluation
2:36:55 – Part 6.4: Evaluating conversational performance
2:42:40 – Conclusion
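
---

CODE SKETCHES:

To give a flavor of Part 2, below is a minimal sketch of a word-level tokenizer in the spirit of the workshop's simple tokenizer class. The regex and the toy training text are illustrative choices, not the workshop's exact code, and production LLMs use subword tokenizers (e.g., byte-pair encoding) instead.

import re

class SimpleTokenizer:
    """Minimal word-level tokenizer: builds a vocabulary from raw text
    and maps between text and integer token IDs."""

    TOKEN_PATTERN = r'([,.:;?_!"()\']|--|\s)'

    def __init__(self, text):
        vocab = sorted(set(self._split(text)))
        self.str_to_int = {tok: i for i, tok in enumerate(vocab)}
        self.int_to_str = {i: tok for tok, i in self.str_to_int.items()}

    def _split(self, text):
        # Split on punctuation and whitespace, keeping punctuation as tokens
        return [t.strip() for t in re.split(self.TOKEN_PATTERN, text) if t.strip()]

    def encode(self, text):
        return [self.str_to_int[tok] for tok in self._split(text)]

    def decode(self, ids):
        text = " ".join(self.int_to_str[i] for i in ids)
        # Undo the extra space that join() puts before punctuation
        return re.sub(r'\s+([,.:;?!"()\'])', r'\1', text)

tokenizer = SimpleTokenizer("The quick brown fox jumps over the lazy dog.")
ids = tokenizer.encode("The quick brown fox.")
print(ids)                    # [1, 8, 2, 4, 0]
print(tokenizer.decode(ids))  # The quick brown fox.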
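For Parts 3 and 4, the sketch below shows the skeleton of a GPT-style decoder and a single pretraining step on next-token prediction. The workshop codes the attention and Transformer blocks by hand; to stay short, this version leans on PyTorch's built-in Transformer layers, and the class name TinyGPT plus all hyperparameters are toy placeholders rather than the workshop's settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGPT(nn.Module):
    """Scaled-down GPT-style decoder: token + positional embeddings,
    a stack of causal Transformer blocks, and a linear output head."""

    def __init__(self, vocab_size, ctx_len=64, emb_dim=128, n_heads=4, n_layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(ctx_len, emb_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=emb_dim, nhead=n_heads, dim_feedforward=4 * emb_dim,
            batch_first=True, norm_first=True)  # pre-LayerNorm, as in GPT-2
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.final_norm = nn.LayerNorm(emb_dim)
        self.head = nn.Linear(emb_dim, vocab_size, bias=False)

    def forward(self, idx):
        _, t = idx.shape
        x = self.tok_emb(idx) + self.pos_emb(torch.arange(t, device=idx.device))
        # Causal mask: True entries are blocked, so each token attends
        # only to itself and earlier positions
        causal_mask = torch.triu(
            torch.ones(t, t, device=idx.device, dtype=torch.bool), diagonal=1)
        x = self.blocks(x, mask=causal_mask)
        return self.head(self.final_norm(x))

# One pretraining step: the targets are the inputs shifted by one token
model = TinyGPT(vocab_size=256)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
batch = torch.randint(0, 256, (8, 33))          # toy batch of token IDs
inputs, targets = batch[:, :-1], batch[:, 1:]
logits = model(inputs)                          # shape: (8, 32, vocab_size)
loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")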
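For Parts 5 and 6, the workshop uses LitGPT to load pretrained weights and finetune models. The snippet below is a rough sketch assuming a recent LitGPT release; microsoft/phi-2 is just one example checkpoint, and the API and CLI flags evolve, so consult the LitGPT README for current usage.

# pip install litgpt
from litgpt import LLM

# Load a pretrained checkpoint (weights are downloaded on first use)
llm = LLM.load("microsoft/phi-2")
print(llm.generate("What do llamas eat?"))

# Rough CLI equivalents (check `litgpt --help` for the current flags):
#   litgpt download microsoft/phi-2     # fetch pretrained weights
#   litgpt finetune microsoft/phi-2     # instruction finetuning
#   litgpt chat microsoft/phi-2         # interactive evaluation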