AI engineering with open access LLMs that lie, curse, and steal - Daniel Whitenack | Craft 2024
If you are one of the 46% of AI engineers preferring open-source LLMs going into 2024, you might have discovered that these open models can be a bit like unruly children. There are moments of joy, when they behave appropriately, and moments of horror, when they lie (hallucinate), steal (enable privacy and security breaches), and/or generally behave in ways that harm others (e.g., spewing out toxic statements). In this talk, I will share some stories from those working in the trenches to rein in private model deployments of open access models. I'll share an overview of the most impactful “vectors of attack/harm” associated with local, private models, so that you can categorize and understand when and how things like hallucinations, prompt injections, privacy breaches, and toxic outputs occur. Then I will share some practical tips (with live demos) to give you the skills you need to control your LLM apps via model-graded outputs, LLM critics, control vectors, hybrid NLP and GenAI systems, and curated domain-specific examples.
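The talk covers these techniques with live demos; as a flavor of what “model-graded outputs” and “LLM critics” look like in practice, here is a minimal sketch (not taken from the talk) that asks a second, locally hosted model to grade a first model's answer. It assumes an OpenAI-compatible endpoint for a private deployment; the base_url, api_key, model name, and the grade_output helper are all illustrative placeholders.

```python
from openai import OpenAI

# Assumed: a locally hosted, OpenAI-compatible endpoint (a private
# open-model deployment); base_url, api_key, and the model name are
# placeholders, not details from the talk.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

JUDGE_PROMPT = """You are a strict output grader. Given a QUESTION, a
REFERENCE answer, and a CANDIDATE answer, reply with exactly one word:
PASS if the candidate is factually consistent with the reference,
FAIL otherwise.

QUESTION: {question}
REFERENCE: {reference}
CANDIDATE: {candidate}"""

def grade_output(question: str, reference: str, candidate: str) -> bool:
    """Ask a second ('judge') model to grade the first model's answer."""
    response = client.chat.completions.create(
        model="local-judge-model",  # hypothetical model name
        temperature=0.0,            # deterministic grading
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                question=question, reference=reference, candidate=candidate
            ),
        }],
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("PASS")

if __name__ == "__main__":
    ok = grade_output(
        question="What is the capital of Hungary?",
        reference="Budapest is the capital of Hungary.",
        candidate="The capital of Hungary is Budapest.",
    )
    print("PASS" if ok else "FAIL: route to fallback or human review")
```

In a real application, a FAIL verdict would typically trigger a retry, a fallback answer, or human review rather than returning the ungraded output to the user.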
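“Control vectors” steer a model by adding a direction to its hidden states at inference time, with the direction usually extracted from activations on contrastive prompt pairs (e.g., honest vs. dishonest completions). The sketch below (again, not from the talk) shows only the plumbing, using a PyTorch forward hook on a small Hugging Face model; the random placeholder direction, layer index, strength, and model choice are all illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # small stand-in; the talk's demos likely used larger open models
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

# A real control vector is derived by contrasting activations on paired
# prompts; a random unit vector stands in here purely to show the mechanism.
direction = torch.randn(model.config.hidden_size)
direction = direction / direction.norm()
STRENGTH = 4.0  # how hard to push along the direction (assumed value)

def steer(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the control vector at every token position, keep the rest intact.
    hidden_states = output[0] + STRENGTH * direction
    return (hidden_states,) + output[1:]

handle = model.transformer.h[6].register_forward_hook(steer)  # a middle layer

ids = tok("The weather today is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20, do_sample=False,
                         pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook so later generations are unsteered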
This talk was recorded at Craft Conference 2024.
The event was organized by CraftHub.
You can watch the rest of the conference talks on our channel.
If you are interested in the speakers, tickets, and other details of the conference, check out our website: https://craft-conf.com/
If you are interested in more events from our company: https://crafthub.events/