In this video, you'll learn how to improve the performance of your Large Language Model using 3 incredibly easy-to-use methods! We will use Llama 2 as the open-source LLM to improve upon, with an extensive tutorial for each method.
Timeline
0:00 Introduction
0:32 Loading Llama 2
4:48 Prompt Template
6:27 Method 1: Prompt Engineering
12:00 Method 2: Retrieval-Augmented Generation (RAG)
18:10 Method 3: Parameter-Efficient Fine-Tuning (PEFT) with LoRA
24:57 Combining all 3 methods
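The prompt template covered at 4:48 follows Llama 2's chat format, which wraps a system prompt and user message in special tags. A minimal sketch (the tag layout matches Meta's Llama 2 chat models; the helper function name is just illustrative):

```python
# Sketch of the Llama 2 chat prompt template: a system prompt inside
# <<SYS>> tags, followed by the user message, all wrapped in [INST] tags.
def build_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in Llama 2's chat tags."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_prompt(
    "You are a helpful assistant.",
    "Explain what LoRA does in one sentence.",
)
```

The model's response is generated after the closing `[/INST]` tag.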
📒 Google Colab notebook https://colab.research.google.com/drive/1jXfyvoS8KY1sa9lsJz8cAptKTe-RZLjv?usp=sharing
🛠️ Written version of this tutorial https://maartengrootendorst.substack.com/p/3-ways-to-improve-your-llm
🦙 Llama 2 model (7B) on HuggingFace https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
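The core idea behind Method 2 (RAG) is to retrieve relevant documents and prepend them to the prompt so the model can ground its answer. A toy sketch of that flow, using simple word overlap in place of the vector embeddings a real pipeline would use:

```python
# Toy RAG illustration: retrieve the most relevant document, then build an
# augmented prompt. Word overlap stands in for embedding similarity here.
def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

docs = [
    "LoRA adds small trainable matrices to a frozen model.",
    "RAG grounds the model's answer in retrieved documents.",
]
query = "How does RAG ground answers?"
context = retrieve(query, docs)
augmented_prompt = f"Context: {context}\n\nQuestion: {query}"
```

The augmented prompt would then be passed to Llama 2 through the prompt template from the video.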
Support my work:
👪 Join as a Channel Member: / @maartengrootendorst
✉️ Newsletter https://maartengrootendorst.substack.com/
📖 Join Medium to Read my Blogs https://medium.com/@maartengrootendorst
I'm writing a book!
📚 Hands-On Large Language Models https://www.oreilly.com/library/view/hands-on-large-language/9781098150952/
#datascience #machinelearning #ai