Tutorial Link: https://docs.dagster.io/examples/llm-fine-tuning/
Join Alex Noonan, a developer advocate at Dagster Labs, as he demonstrates how to fine-tune OpenAI models with Dagster. In this comprehensive tutorial, Alex explains the benefits of fine-tuning, such as customization, cost efficiency, and output control, and outlines the steps involved, from data collection and preparation to training, evaluation, and deployment. Using the Goodreads dataset, he walks through creating assets, feature engineering, and validation checks. The tutorial also covers integrating tooling for better monitoring, observability, and testing. By the end, Alex showcases a fine-tuned model with improved accuracy, offering practical insights and hands-on guidance. Follow along with the tutorial linked above, and subscribe for more updates.
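To give a feel for what the pipeline looks like, here is a minimal sketch (not the tutorial's exact code) of a Dagster asset that uploads a prepared JSONL training file and launches an OpenAI fine-tuning job. The asset name, file path, and base model are illustrative assumptions.

```python
# Minimal sketch of the fine-tuning step as a Dagster asset.
# Assumptions: "training.jsonl" already exists and OPENAI_API_KEY is set.
import dagster as dg
from openai import OpenAI


@dg.asset
def fine_tuned_model(context: dg.AssetExecutionContext) -> dg.MaterializeResult:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload the prepared training file (path is an assumption)
    training_file = client.files.create(
        file=open("training.jsonl", "rb"), purpose="fine-tune"
    )

    # Launch the fine-tuning job against an assumed base model
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id, model="gpt-4o-mini-2024-07-18"
    )
    context.log.info(f"Started fine-tuning job {job.id}")

    # Surface the job id as asset metadata for observability in the Dagster UI
    return dg.MaterializeResult(metadata={"job_id": job.id})


defs = dg.Definitions(assets=[fine_tuned_model])
```

In the tutorial this step sits downstream of the data-ingestion and feature-engineering assets, so Dagster tracks the full lineage from raw Goodreads data to the fine-tuned model.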
00:00 Introduction to Fine-Tuning with Dagster
00:22 Benefits of Fine-Tuning
00:53 Fine-Tuning Process Overview
01:14 Using Dagster for Fine-Tuning
02:19 Project Setup and Data Ingestion
03:43 Feature Engineering
05:31 Creating Training and Validation Files
07:17 Uploading and Fine-Tuning the Model
08:23 Evaluating the Fine-Tuned Model
10:37 Building Definitions and Running the Pipeline
11:16 Materializing Assets and Final Results
14:25 Conclusion and Next Steps