Subscribe to MLExpert Pro for live "AI Engineering" boot camp sessions (07-09 Feb) https://www.mlexpert.io/
Are you happy with how your Large Language Model (LLM) performs on a specific task? If not, fine-tuning might be the answer. Even a smaller, simpler model can outperform a larger one if it's fine-tuned correctly for a specific task. In this video, you'll learn how to fine-tune Llama 3 on a custom dataset.
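Rough sketch of the approach covered in the video (LoRA fine-tuning of Llama 3 8B Instruct with TRL's SFTTrainer). The full notebook is in the GitHub repo below; the dataset file, hyperparameters, and output name here are placeholders, not the exact values shown on screen, and the arguments match TRL versions from early 2024 (newer releases moved them into SFTConfig):

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Load the base model in 4-bit so it fits on a single Colab GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# LoRA adapters on the attention projections (illustrative values).
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Placeholder dataset with a "text" column of chat-formatted examples.
dataset = load_dataset("json", data_files="train.json", split="train")

args = TrainingArguments(
    output_dir="llama-3-8b-finetuned",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=2e-4,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
trainer.save_model("llama-3-8b-finetuned")
```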
Model on HF: https://huggingface.co/curiousily/Llama-3-8B-Instruct-Finance-RAG
Philipp Schmid Post: https://www.philschmid.de/fine-tune-llms-in-2024-with-trl
AI Bootcamp: https://www.mlexpert.io/
LinkedIn: https://www.linkedin.com/in/venelin-valkov/
Follow me on X: https://twitter.com/venelin_valkov
Discord: https://discord.gg/UaNPxVD6tv
Subscribe: http://bit.ly/venelin-subscribe
GitHub repository: https://github.com/curiousily/AI-Bootcamp
👍 Don't Forget to Like, Comment, and Subscribe for More Tutorials!
00:00 - Why fine-tuning?
00:25 - Text tutorial on MLExpert.io
00:53 - Fine-tuning process overview
02:19 - Dataset
02:56 - Llama 3 8B Instruct
03:53 - Google Colab Setup
05:30 - Loading model and tokenizer
08:18 - Create custom dataset
14:30 - Establish baseline
17:37 - Training on completions
19:04 - LoRA setup
22:25 - Training
26:42 - Load model and push to HuggingFace hub
28:43 - Evaluation (comparison with the base model)
32:50 - Conclusion
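To try the published model from the "Load model and push to HuggingFace hub" and "Evaluation" chapters, here's a minimal inference sketch. The prompt is illustrative; the model expects finance RAG-style questions with context:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "curiousily/Llama-3-8B-Instruct-Finance-RAG"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder prompt: supply your own retrieved context and question.
messages = [
    {"role": "user", "content": "Context: <retrieved passage>\n\nQuestion: What was the reported revenue?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```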
Join this channel to get access to the perks and support my work:
https://www.youtube.com/channel/UCoW_WzQNJVAjxo4osNAxd_g/join
#llama3 #llm #rag #finetuning #promptengineering #chatgpt #chatbot #langchain #gpt4