A Friendly Introduction to Adversarial Machine Learning
Evan Wright
Abstract:
The growing popularity of machine learning applications in cybersecurity has inspired optimism about the field's future. The idea is to better identify malicious behavior by using pattern-detection techniques that have seen success in fields like computer vision, online recommendation, and gene sequencing. More and more commercial products and solutions are incorporating some form of machine learning. But if the good guys can build learning algorithms, why can't the bad guys?
Adversarial machine learning is an emerging topic that investigates how well machine learning methods hold up when adversaries are able to "game the system" of how the detection works. This deception can occur when adversaries have access to the specifics of the pattern-identification system, an oracle they can query, or the data channel on which the algorithm bases its ground truth. With that access, adversaries can selectively craft inputs that outsmart the machine learning algorithm that would otherwise identify them as malicious. I'll conclude by discussing recent software frameworks and process improvements that can help mitigate this next phase of our collective cybersecurity arms race.
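To make the evasion idea above concrete, here is a minimal sketch (not part of the talk) of how an attacker with only query access to a detector's scores might craft a sample that slips past it. The synthetic data, the logistic-regression stand-in detector, and the greedy feature search are all assumptions chosen purely for illustration.

    # Illustrative toy "evasion" attack: an attacker with oracle access to a
    # detector's maliciousness score nudges a flagged sample until it is
    # classified as benign. Everything here is a stand-in for illustration.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Train a stand-in detector on synthetic data (label 1 = "malicious").
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    detector = LogisticRegression(max_iter=1000).fit(X, y)

    def oracle_score(sample):
        """Oracle access: the attacker can only query the detector's score."""
        return detector.predict_proba(sample.reshape(1, -1))[0, 1]

    # Start from a sample the detector currently flags as malicious.
    crafted = X[detector.predict(X) == 1][0].copy()
    step = 0.25

    # Greedily probe each feature in both directions and keep the change that
    # most lowers the maliciousness score, until the sample is no longer flagged.
    for _ in range(200):
        if oracle_score(crafted) < 0.5:
            break  # evasion succeeded
        candidates = []
        for i in range(crafted.size):
            for delta in (step, -step):
                candidate = crafted.copy()
                candidate[i] += delta
                candidates.append((oracle_score(candidate), candidate))
        crafted = min(candidates, key=lambda c: c[0])[1]

    print("Detector label for crafted sample:",
          "benign" if oracle_score(crafted) < 0.5 else "malicious")

Real detectors and real attackers are far more constrained than this toy loop, but it captures the core dynamic the talk addresses: once adversaries can query or observe the model, the defender's classifier becomes part of the attacker's feedback loop.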
Bio:
Evan Wright is a principal data scientist at Anomali where he focuses on applications of machine learning to threat intelligence. Before Anomali, he was a network security analyst at the CERT Coordination Center and a network administrator in North Carolina.
Evan has supported customers in areas such as IPv6 security, ultra-large-scale network monitoring, malicious network traffic detection, intelligence fusion, and other cybersecurity applications of machine learning. He has advised seventeen security operations centers in government and private industry. Evan holds an MS from Carnegie Mellon University, a BS from East Carolina University, a CCNP, and six other IT certifications.