Hamiltonian Cycle Problem and Markov Chains


Book Description

This research monograph summarizes a line of research that maps certain classical problems of discrete mathematics and operations research - such as the Hamiltonian Cycle and the Travelling Salesman Problems - into convex domains where continuum analysis can be carried out. Arguably, the inherent difficulty of these, now classical, problems stems precisely from the discrete nature of the domains in which they are posed. The convexification of domains underpinning these results is achieved by assigning a probabilistic interpretation to key elements of the original deterministic problems. In particular, the approaches summarized here build on a technique that embeds the Hamiltonian Cycle and Travelling Salesman Problems in a structured, singularly perturbed Markov decision process. The unifying idea is to interpret the subgraphs traced out by deterministic policies (including Hamiltonian cycles, if any) as extreme points of a convex polyhedron in a space filled with randomized policies.

This innovative approach has now evolved to the point where there are many results, both theoretical and algorithmic, that exploit the nexus between graph-theoretic structures and both probabilistic and algebraic entities of the related Markov chains. The latter include moments of first return times, limiting frequencies of visits to nodes, and the spectra of certain matrices traditionally associated with the analysis of Markov chains. However, these results and algorithms are dispersed over many research papers appearing in journals catering to disparate audiences. As a result, the published manuscripts are often written in a very terse manner and use disparate notation, making it difficult for new researchers to make use of the many reported advances. Hence the main purpose of this book is to present a concise and yet easily accessible synthesis of the majority of the theoretical and algorithmic results obtained so far. In addition, the book discusses numerous open questions and problems that arise from this body of work and which are yet to be fully solved. The approach casts the Hamiltonian Cycle Problem in a mathematical framework that permits analytical concepts and techniques, not used hitherto in this context, to be brought to bear, further clarifying both the underlying difficulty (NP-completeness) of the problem and the relative exceptionality of truly difficult instances. Finally, the material is arranged so that the introductory chapters require very little mathematical background and discuss instances of graphs with interesting structures that motivated much of the research on this topic. More difficult results are introduced later and are illustrated with numerous examples.
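To give a concrete, if highly simplified, flavour of this embedding, the sketch below (illustrative Python with a hypothetical four-node digraph; it is not code from the monograph) enumerates the deterministic policies of a small graph, follows the Markov chain each one induces from a home node, and flags a policy as tracing a Hamiltonian cycle exactly when the first return time to the home node equals the number of nodes.

```python
# Illustrative sketch (not from the book): deterministic policies on a digraph,
# viewed as Markov chains, with Hamiltonicity read off from first return times.
from itertools import product

# Adjacency lists of a small hypothetical digraph (node -> list of successors).
graph = {0: [1, 2], 1: [2, 3], 2: [0, 3], 3: [0]}
n = len(graph)

def first_return_time(policy, home=0):
    """Length of the walk induced by a deterministic policy until it first revisits
    `home`; None if the walk is trapped in a cycle that avoids `home`."""
    node, steps, seen = home, 0, set()
    while True:
        node = policy[node]      # deterministic transition: probability 1 on one arc
        steps += 1
        if node == home:
            return steps
        if node in seen:         # trapped in a sub-cycle not containing `home`
            return None
        seen.add(node)

# Enumerate all deterministic policies (one outgoing arc chosen per node).
hamiltonian_policies = []
for choice in product(*(graph[v] for v in range(n))):
    policy = dict(enumerate(choice))
    # A policy traces a Hamiltonian cycle iff its first return time to the home node is n.
    if first_return_time(policy) == n:
        hamiltonian_policies.append(policy)

print(hamiltonian_policies)   # [{0: 1, 1: 2, 2: 3, 3: 0}]
```

Randomized policies, by contrast, mix the available arcs at each node with arbitrary probabilities; the deterministic policies enumerated above sit at the extreme points of that convex set, which is the geometric picture the book exploits.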




Operations Research Proceedings 2019


Book Description

This book gathers a selection of peer-reviewed papers presented at the International Conference on Operations Research (OR 2019), which was held at Technische Universität Dresden, Germany, on September 4-6, 2019, and was jointly organized by the German Operations Research Society (GOR), the Austrian Operations Research Society (ÖGOR), and the Swiss Operational Research Society (SOR/ASRO). More than 600 scientists, practitioners and students from mathematics, computer science, business/economics and related fields attended the conference and presented more than 400 papers in plenary presentations, parallel topic streams, as well as special award sessions. The respective papers discuss classical mathematical optimization, statistics and simulation techniques. These are complemented by computer science methods and by tools for processing data and for designing and implementing information systems. The book also examines recent advances in information technology, which allow big data volumes to be processed and enable real-time predictive and prescriptive business analytics to drive decisions and actions. Lastly, it includes problems modeled and treated while taking into account uncertainty, risk management, behavioral issues, etc.




Controlled Markov Chains, Graphs and Hamiltonicity


Book Description

"Controlled Markov Chains, Graphs & Hamiltonicity" summarizes a line of research that maps certain classical problems of discrete mathematics--such as the Hamiltonian cycle and the Traveling Salesman problems--into convex domains where continuum analysis can be carried out. (Mathematics)




Graph Representation Learning


Book Description

Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis. This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs—a nascent but quickly growing subset of graph representation learning.
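As a rough illustration of the neural message-passing idea mentioned above, the following minimal sketch (NumPy only, with hypothetical weight and function names; it is not code from the book) implements one layer in which every node averages its neighbours' feature vectors, combines that message with its own features through learned weight matrices, and applies a nonlinearity.

```python
# Minimal mean-aggregation message-passing layer (illustrative sketch only;
# real GNN libraries differ in normalization, weighting and aggregation details).
import numpy as np

def gnn_layer(A, X, W_self, W_neigh):
    """One message-passing step.
    A       : (n, n) adjacency matrix
    X       : (n, d_in) node feature matrix
    W_self  : (d_in, d_out) weights applied to a node's own features
    W_neigh : (d_in, d_out) weights applied to the aggregated neighbour message
    """
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)   # avoid division by zero
    neighbour_mean = (A @ X) / deg                        # AGGREGATE: mean over neighbours
    return np.maximum(0.0, X @ W_self + neighbour_mean @ W_neigh)   # UPDATE: ReLU

# Toy usage: a 4-node path graph with random 8-dimensional node features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))
H = gnn_layer(A, X, rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
print(H.shape)   # (4, 16): node embeddings after one round of message passing
```

Stacking several such layers lets information propagate along longer paths in the graph, which is the basic mechanism behind the GNN formalism the book develops.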







Mathematical Foundations for Signal Processing, Communications, and Networking


Book Description

Mathematical Foundations for Signal Processing, Communications, and Networking describes mathematical concepts and results important in the design, analysis, and optimization of signal processing algorithms, modern communication systems, and networks. Helping readers master key techniques and comprehend the current research literature, the book offers a comprehensive overview of methods and applications from linear algebra, numerical analysis, statistics, probability, stochastic processes, and optimization. From basic transforms to Monte Carlo simulation to linear programming, the text covers a broad range of mathematical techniques essential to understanding the concepts and results in signal processing, telecommunications, and networking. Along with discussing mathematical theory, each self-contained chapter presents examples that illustrate the use of various mathematical concepts to solve different applications. Each chapter also includes a set of homework exercises and readings for additional study. This text helps readers understand fundamental and advanced results as well as recent research trends in the interrelated fields of signal processing, telecommunications, and networking. It provides all the necessary mathematical background to prepare students for more advanced courses and train specialists working in these areas.




Mathematical Reviews


Book Description




Introduction to Random Graphs


Book Description

The text covers random graphs from the basic to the advanced, including numerous exercises and recommendations for further reading.




Reinforcement Learning, second edition


Book Description

The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence. Reinforcement learning is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics. Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
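As a toy illustration of the tabular setting and the Expected Sarsa algorithm mentioned above, the sketch below (a hypothetical chain MDP with made-up parameter values; it is not code from the book) updates each action value toward the reward plus the discounted expectation of the next state's action values under the current epsilon-greedy policy.

```python
# Tabular Expected Sarsa on a tiny chain MDP (illustrative toy example, not from the book).
import random

N_STATES, ACTIONS = 4, (0, 1)        # states 0..3; action 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Deterministic chain: reaching the right end yields reward 1 and ends the episode."""
    s2 = max(0, s - 1) if a == 0 else s + 1
    return (s2, 1.0, True) if s2 == N_STATES - 1 else (s2, 0.0, False)

def epsilon_greedy(s):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def expected_q(s):
    """Expectation of Q(s, .) under the epsilon-greedy behaviour policy."""
    greedy = max(ACTIONS, key=lambda a: Q[(s, a)])
    probs = {a: EPSILON / len(ACTIONS) + (1 - EPSILON) * (a == greedy) for a in ACTIONS}
    return sum(probs[a] * Q[(s, a)] for a in ACTIONS)

for _ in range(300):                  # episodes
    s, done = 0, False
    while not done:
        a = epsilon_greedy(s)
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * expected_q(s2)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])   # Expected Sarsa update
        s = s2

print(max(ACTIONS, key=lambda a: Q[(0, a)]))   # learned greedy action at the start state (1 = right)
```

Unlike Sarsa, which bootstraps on the single action actually sampled next, Expected Sarsa averages over all next actions weighted by the policy, removing that source of sampling variance.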




Digraphs


Book Description

The study of directed graphs (digraphs) has developed enormously over recent decades, yet the results are rather scattered across the journal literature. This is the first book to present a unified and comprehensive survey of the subject. In addition to covering the theoretical aspects, the authors discuss a large number of applications and their generalizations to topics such as the traveling salesman problem, project scheduling, genetics, network connectivity, and sparse matrices. Numerous exercises are included. For all graduate students, researchers and professionals interested in graph theory and its applications, this book will be essential reading.