How NOT to Train Your Hack Bot: Dos and Don'ts of Building Offensive GPTs

No doubt everybody is curious whether large language models (LLMs) can be used for offensive security operations. In this talk, we will demonstrate how you can and can't use LLMs like GPT-4 to find security vulnerabilities in applications, and discuss in detail the promise and limitations of using LLMs this way. We will go deep on how LLMs work and share state-of-the-art techniques for using them in offensive contexts.

By: Shane Caldwell, Ariel Herbert-Voss

Full Abstract and Presentation Materials: https://www.blackhat.com/us-23/briefings/schedule/#how-not-to-train-your-hack-bot-dos-and-donts-of-building-offensive-gpts-32773