Introduction

Machine learning is a prevalent tool in many computing applications, but learning algorithms may introduce security faults into the systems that use them. The key strengths of learning approaches are their adaptability and their ability to infer patterns that can be used for prediction or decision-making. These same assets, however, can be subverted by adversarial manipulation of the learner's environment, exposing applications that use machine learning to a new class of security vulnerabilities: attacks that cause the learner to disrupt the very system it was intended to improve. This project analyzes the behavior of learning systems under duress in security-sensitive domains. Learning algorithms are vulnerable to a myriad of attacks that can transform the learner into a liability for the system it is intended to aid. By critically analyzing potential security threats, however, the extent of these threats can be assessed, appropriate learning techniques can be selected to minimize the adversary's impact, and system failures can be averted.

In this project, the PI will investigate both the practical and theoretical aspects of applying machine learning to security domains. The project consists of three main tasks: (1) developing a taxonomy for qualifying the security vulnerabilities of a learner, (2) investigating two novel practical attack and defense scenarios, and (3) developing a game-theoretic approach to designing defensive classifiers. The PI will present a framework for identifying and analyzing threats to learners and use it to systematically explore the vulnerabilities of two learning systems. For these systems, the PI will identify real-world threats, analyze the potential impact of each, and study learning techniques that significantly diminish their vulnerabilities. In doing so, the PI will provide practitioners with guidelines for identifying potential vulnerabilities and demonstrate improved learning techniques that are resilient to attacks. This project will focus on learning tasks in virus, spam, and network anomaly detection, but the techniques developed will be broadly applicable across many systems and security domains and will have far-reaching implications for any system that incorporates learning.
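As a concrete, hypothetical illustration of the kind of attack these tasks address (not drawn from the proposal itself), the sketch below shows a simple training-set poisoning attack against a naive Bayes spam filter: an adversary who can inject spam-labeled messages containing words from legitimate mail can cause the retrained filter to block that legitimate mail. The example messages and the use of scikit-learn are assumptions made purely for illustration.

```python
# Hypothetical illustration (not from the proposal): a toy training-set
# poisoning ("causative") attack against a naive Bayes spam filter.
# The adversary sends spam that deliberately reuses words from a legitimate
# message; once those emails are labeled spam in the training data, the
# retrained filter misclassifies the legitimate message as spam.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

ham = ["meeting at noon tomorrow",
       "please review the attached report"]
spam = ["cheap meds online now",
        "win money fast click here"]

# Poisoned spam: the attacker pads spam with words from the targeted ham message.
poison = ["please review the meeting report cheap meds now",
          "please review the meeting report win money fast",
          "please review the meeting report click here now"]

target = ["please review the meeting report"]  # legitimate message to block

def train_and_classify(messages, labels, query):
    """Fit a bag-of-words naive Bayes filter and classify the query."""
    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(messages), labels)
    return clf.predict(vec.transform(query))[0]

for name, extra in [("clean", []), ("poisoned", poison)]:
    msgs = ham + spam + extra
    labels = ["ham"] * len(ham) + ["spam"] * (len(spam) + len(extra))
    print(f"{name:9s} training set -> target classified as "
          f"{train_and_classify(msgs, labels, target)}")
# The clean filter keeps the target as ham; the poisoned filter flags it as
# spam, a denial-of-service effect of the contaminated training data.
```

Defenses of the kind this project studies aim to bound how far such contaminated training data can shift the learned model, so that an adversary with limited control over the data cannot induce this kind of failure.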
