Open Source RAG running LLMs locally with Ollama

32,981 views
We've taken Verba, our open-source Retrieval Augmented Generation (RAG) app, to the next level with the newly released version 1.0. Run LLMs fully free and open with the new Ollama model integration, completely customize the UI, and enjoy many other feature updates, including conversation caching, better logging, and full transparency. Philip, Edward, and Victoria will take you through all the updates, show you how to set it up locally, and help you get started using advanced RAG on your own data in just a couple of minutes.

GitHub: https://github.com/weaviate/verba
Demo: https://verba.weaviate.io

Join Leonie, Femke, Philip, Edward, Ajit and Victoria at Weaviate - we're hiring: https://weaviate.io/company/careers

And remember - all your vector embeddings are belong to you: https://www.youtube.com/watch?v=Qra1oWdJQPs

Thanks to the Merantix AI Campus for hosting us in Berlin! https://www.merantix-aicampus.com/

▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT WITH US ▬▬▬▬▬▬▬▬▬▬▬▬
Visit weaviate.io
Star us on GitHub: https://github.com/weaviate/weaviate
Stay updated - subscribe to our newsletter: https://newsletter.weaviate.io/
Try out Weaviate Cloud Services for free: https://console.weaviate.cloud/
Got a question?
Forum: https://forum.weaviate.io/
Slack: https://weaviate.io/slack
Connect with us on Twitter: https://twitter.com/weaviate_io
LinkedIn: https://www.linkedin.com/company/weaviate-io/

Music:
Nebulosity - Brendon Moeller
Aubergene - Brendon Moeller

00:00 - Introduction
01:17 - Ollama Integration
01:53 - Customization
03:14 - RAG in Healthcare
04:54 - RAG in Weaviate
06:20 - Installation
08:54 - Outro
09:19 - Easter Egg
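For the local setup the video walks through (06:20 - Installation), the rough shape is: install Verba, pull a model into a locally running Ollama server, point Verba at it, and start the app. This is only a sketch - the package name `goldenverba` and the `verba start` command come from the Verba README, but the exact environment variable names may differ between Verba versions, so check the README for yours.

```shell
# Install Verba from PyPI (package name per the Verba README)
pip install goldenverba

# Pull a model into your local Ollama server (model name is an example)
ollama pull llama3

# Tell Verba where Ollama is listening - variable names assumed; verify
# against the Verba README for your installed version:
#   OLLAMA_URL=http://localhost:11434
#   OLLAMA_MODEL=llama3

# Launch the Verba web app
verba start
```

Ollama listens on port 11434 by default, so no extra server configuration is needed for a stock local install.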
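Under the hood, the RAG-with-a-local-LLM pattern the video demonstrates boils down to: retrieve relevant chunks, assemble them into a grounded prompt, and send that prompt to the local model. The sketch below shows that flow against Ollama's local HTTP API. It is an illustration, not Verba's actual pipeline - the prompt template and the `llama3` model name are assumptions; only the `/api/generate` endpoint and its `response` field are Ollama's documented behavior.

```python
# Minimal sketch of the RAG pattern: retrieved chunks -> grounded prompt
# -> local LLM. Not Verba's real implementation; prompt wording is invented.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "llama3"  # example model; use whichever model you pulled with `ollama pull`


def build_prompt(question: str, chunks: list[str]) -> str:
    """Assemble a prompt that grounds the answer in retrieved chunks."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


def ask(question: str, chunks: list[str]) -> str:
    """Send the assembled prompt to a locally running Ollama server."""
    payload = json.dumps(
        {"model": MODEL, "prompt": build_prompt(question, chunks), "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full completion in "response"
        return json.loads(resp.read())["response"]
```

Because the model runs locally, the whole loop works offline and for free - which is exactly the point of the Ollama integration.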