Theoretical Foundations of Adversarial Binary Detection


Book Description

This monograph is aimed at students, researchers, and practitioners working in the application areas who want an accessible introduction to the theory behind Adversarial Binary Detection and to possible solutions to their particular problems.




Adversarial Machine Learning


Book Description

A critical challenge in deep learning is the vulnerability of deep learning networks to security attacks from intelligent cyber adversaries. Even innocuous perturbations to the training data can be used to manipulate the behaviour of deep networks in unintended ways. In this book, we review the latest developments in adversarial attack technologies in computer vision, natural language processing, and cybersecurity with regard to multidimensional, textual, and image data, sequence data, and temporal data. In turn, we assess the robustness properties of deep learning networks to produce a taxonomy of adversarial examples that characterises the security of learning systems using game theoretical adversarial deep learning algorithms. The state-of-the-art in adversarial perturbation-based privacy protection mechanisms is also reviewed. We propose new adversary types for game theoretical objectives in non-stationary computational learning environments. Proper quantification of the hypothesis set in the decision problems of our research leads to various functional problems, oracular problems, sampling tasks, and optimization problems. We also address the defence mechanisms currently available for deep learning models deployed in real-world environments. The learning theories used in these defence mechanisms concern data representations, feature manipulations, misclassification costs, sensitivity landscapes, distributional robustness, and complexity classes of the adversarial deep learning algorithms and their applications. In closing, we propose future research directions in adversarial deep learning applications for resilient learning system design and review formalized learning assumptions concerning the attack surfaces and robustness characteristics of artificial intelligence applications so as to deconstruct the contemporary adversarial deep learning designs. Given its scope, the book will be of interest to Adversarial Machine Learning practitioners and Adversarial Artificial Intelligence researchers whose work involves the design and application of Adversarial Deep Learning.
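To make the remark about training-data perturbations concrete, the following is a minimal, self-contained sketch (not taken from the book) of a training-time poisoning effect: flipping the labels of a small fraction of training points shifts the decision boundary that a simple learner recovers. The one-dimensional data, the threshold learner, and the poisoning rate are illustrative assumptions.

```python
# Minimal sketch of training-time (poisoning) manipulation: label flips on a
# small fraction of the training set move the learned decision boundary.
# Data, model, and poisoning rate are hypothetical, chosen for illustration.
import numpy as np

def fit_threshold(x, y):
    """1-D 'classifier': pick the threshold minimising training error for the rule predict 1 iff x > t."""
    candidates = np.sort(x)
    errors = [np.mean((x > t).astype(float) != y) for t in candidates]
    return candidates[int(np.argmin(errors))]

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1.0, 0.5, 200), rng.normal(+1.0, 0.5, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])

clean_threshold = fit_threshold(x, y)

# Poison: flip the labels of the 5% of class-0 points lying furthest to the
# right (closest to, or past, the class boundary).
y_poisoned = y.copy()
idx = np.argsort(-x[:200])[:10]          # right-most class-0 points
y_poisoned[idx] = 1.0

poisoned_threshold = fit_threshold(x, y_poisoned)
print("threshold learned on clean data:    %.3f" % clean_threshold)
print("threshold learned on poisoned data: %.3f" % poisoned_threshold)
```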




Adversarial Machine Learning


Book Description

This is a technical overview of the field of adversarial machine learning, which has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques to make learning robust to adversarial manipulation. After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary changes the nature of instances seen by a learned model at the time of prediction in order to cause errors, and poisoning, or training-time, attacks, in which the actual training dataset is maliciously modified. In our final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving robustness of deep neural networks. We conclude with a discussion of several important issues in the area of adversarial learning that in our view warrant further research.

The increasing abundance of large, high-quality datasets, combined with significant technical advances over the last several decades, have made machine learning into a major tool employed across a broad array of tasks including vision, language, finance, and security. However, success has been accompanied by important new challenges: many applications of machine learning are adversarial in nature. Some are adversarial because they are safety-critical, such as autonomous driving. An adversary in these applications can be a malicious party aimed at causing congestion or accidents, or may even model unusual situations that expose vulnerabilities in the prediction engine. Other applications are adversarial because the task itself, the data it uses, or both are adversarial. For example, an important class of problems in security involves detection, such as malware, spam, and intrusion detection. The use of machine learning for detecting malicious entities creates an incentive among adversaries to evade detection by changing their behavior or the content of malicious objects they develop. Given the increasing interest in the area of adversarial machine learning, we hope this book provides readers with the tools necessary to successfully engage in research and practice of machine learning in adversarial settings.
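As a concrete companion to the "decision-time attack" category described above, here is a minimal sketch (not taken from the book) of a one-step gradient-sign evasion attack against a hypothetical logistic-regression classifier. The weights, the budget eps, and the helper names are assumptions chosen for illustration.

```python
# Minimal sketch of a decision-time (evasion) attack: the adversary perturbs a
# test input within an L-infinity budget eps so that a fixed, already-trained
# classifier is pushed toward misclassifying it (FGSM-style, one step).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x, w, b):
    """Probability the fixed linear model assigns to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_evasion(x, y, w, b, eps):
    """One-step gradient-sign perturbation of the input.

    For logistic loss L = -[y log p + (1-y) log(1-p)], the gradient with
    respect to the input is (p - y) * w, so the attack moves x by
    eps * sign((p - y) * w), staying inside the L-infinity ball of radius eps.
    """
    p = predict_proba(x, w, b)
    grad_x = (p - y) * w              # dL/dx for the logistic loss
    return x + eps * np.sign(grad_x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=10)           # hypothetical trained weights
    b = 0.0
    x = rng.normal(size=10)           # a clean test instance
    y = 1.0 if predict_proba(x, w, b) > 0.5 else 0.0   # its predicted label

    x_adv = fgsm_evasion(x, y, w, b, eps=0.3)
    print("clean prediction:       %.3f" % predict_proba(x, w, b))
    print("adversarial prediction: %.3f" % predict_proba(x_adv, w, b))
```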




Game Theory and Machine Learning for Cyber Security


Book Description

Move beyond the foundations of machine learning and game theory in cyber security to the latest research in this cutting-edge field. In Game Theory and Machine Learning for Cyber Security, a team of expert security researchers delivers a collection of central research contributions from both machine learning and game theory applicable to cybersecurity. The distinguished editors have included resources that address open research questions in game theory and machine learning applied to cyber security systems and examine the strengths and limitations of current game theoretic models for cyber security. Readers will explore the vulnerabilities of traditional machine learning algorithms and how they can be mitigated in an adversarial machine learning approach. The book offers a comprehensive suite of solutions to a broad range of technical issues in applying game theory and machine learning to solve cyber security challenges. Beginning with an introduction to foundational concepts in game theory, machine learning, cyber security, and cyber deception, the editors provide readers with resources that discuss the latest in hypergames, behavioral game theory, adversarial machine learning, generative adversarial networks, and multi-agent reinforcement learning. Readers will also enjoy:

- A thorough introduction to game theory for cyber deception, including scalable algorithms for identifying stealthy attackers in a game theoretic framework, honeypot allocation over attack graphs, and behavioral games for cyber deception
- An exploration of game theory for cyber security, including actionable game-theoretic adversarial intervention detection against advanced persistent threats
- Practical discussions of adversarial machine learning for cyber security, including adversarial machine learning in 5G security and machine learning-driven fault injection in cyber-physical systems
- In-depth examinations of generative models for cyber security

Perfect for researchers, students, and experts in the fields of computer science and engineering, Game Theory and Machine Learning for Cyber Security is also an indispensable resource for industry professionals, military personnel, researchers, faculty, and students with an interest in cyber security.




Information Theory, Mathematical Optimization, and Their Crossroads in 6G System Design


Book Description

This book provides a broad understanding of the fundamental tools and methods from information theory and mathematical programming, as well as specific applications in 6G-and-beyond system designs. The contents focus not only on the two theories themselves but also on their intersection in 6G. The motivation comes from the multitude of new developments that will arise once 6G systems integrate new communication networks with AIoT (Artificial Intelligence plus the Internet of Things). Design issues such as intermittent connectivity, low latency, federated learning, and IoT security are covered. This monograph provides a thorough picture of new results from information theory and optimization theory, as well as how their interplay helps solve the aforementioned 6G design issues.




Deep Learning Theory and Applications


Book Description

This book constitutes the refereed proceedings of the 4th International Conference on Deep Learning Theory and Applications, DeLTA 2023, held in Rome, Italy, from 13 to 14 July 2023. The 9 full papers and 22 short papers presented were thoroughly reviewed and selected from the 42 qualified submissions. The scope of the conference includes such topics as models and algorithms; machine learning; big data analytics; computer vision applications; and natural language understanding.




Adversarial Robustness in Machine Learning


Book Description

Deep learning-based classification algorithms perform poorly on adversarially perturbed data. Adversarial risk quantifies the performance of a classifier in the presence of an adversary. Numerous definitions of adversarial risk, not all mathematically rigorous and differing subtly in their details, have appeared in the literature. Adversarial attacks are designed to increase the adversarial risk of classifiers, and robust classifiers are sought that can resist such attacks. It was hitherto unknown what the theoretical limits on adversarial risk are, and whether there is an equilibrium in the game between the classifier and the adversary. In this thesis, we establish a mathematically rigorous foundation for adversarial robustness, derive algorithm-independent bounds on adversarial risk, and provide alternative characterizations based on distributional robustness and game theory. Key to these results are the numerous connections we discover between adversarial robustness and optimal transport theory. We begin by examining various definitions of adversarial risk and laying down conditions for their measurability and equivalence. In binary classification with 0-1 loss, we show that the optimal adversarial risk is determined by an optimal transport cost between the probability distributions of the two classes. Using the couplings that achieve this cost, we derive the optimal robust classifiers for several univariate distributions. Using our results, we compute lower bounds on adversarial risk for several real-world datasets. We extend our results to general loss functions under convexity and smoothness assumptions. We close with alternative characterizations for adversarial robustness that lead to the proof of a pure Nash equilibrium in the two-player game between the adversary and the classifier. We show that adversarial risk is identical to the minimax risk in a robust hypothesis testing problem with Wasserstein uncertainty sets. Moreover, the optimal adversarial risk is the Bayes error between a worst-case pair of distributions belonging to these sets. Our theoretical results lead to several algorithmic insights for practitioners and motivate further study on the intersection of adversarial robustness and optimal transport.
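To ground the abstract's central statement, here is a schematic rendering, in notation chosen for this summary rather than taken from the thesis, of the two quantities being connected: the adversarial risk of a classifier $f$ under perturbations of size at most $\varepsilon$, and an optimal transport cost between the class-conditional distributions $P_0$ and $P_1$.

\[
R_\varepsilon(f) \;=\; \mathbb{E}_{(X,Y)}\!\left[\,\sup_{\|\delta\|\le\varepsilon} \mathbf{1}\{f(X+\delta)\neq Y\}\right],
\qquad
D_c(P_0,P_1) \;=\; \inf_{\pi\in\Pi(P_0,P_1)} \mathbb{E}_{(X,X')\sim\pi}\!\left[c(X,X')\right],
\]

where $\Pi(P_0,P_1)$ denotes the set of couplings of $P_0$ and $P_1$. The result described above says that, for binary classification with 0-1 loss, the smallest achievable $R_\varepsilon(f)$ over all classifiers is determined by $D_c(P_0,P_1)$ for a suitable 0-1 transport cost $c$ tied to the perturbation budget $\varepsilon$, and the couplings attaining the infimum yield the optimal robust classifiers.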




Adversarial Machine Learning


Book Description

Written by leading researchers, this complete introduction brings together all the theory and tools needed for building robust machine learning in adversarial environments. Discover how machine learning systems can adapt when an adversary actively poisons data to manipulate statistical inference, learn the latest practical techniques for investigating system security and performing robust data analysis, and gain insight into new approaches for designing effective countermeasures against the latest wave of cyber-attacks. Privacy-preserving mechanisms and the near-optimal evasion of classifiers are discussed in detail, and in-depth case studies on email spam and network security highlight successful attacks on traditional machine learning algorithms. Providing a thorough overview of the current state of the art in the field, and possible future directions, this groundbreaking work is essential reading for researchers, practitioners and students in computer security and machine learning, and those wanting to learn about the next stage of the cybersecurity arms race.




Understanding Machine Learning


Book Description

Introduces machine learning and its algorithmic paradigms, explaining the principles behind automated learning approaches and the considerations underlying their usage.