Graph Representation Learning


Book Description

Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis. This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs—a nascent but quickly growing subset of graph representation learning.
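The random-walk-based node-embedding methods this book reviews can be illustrated with a short DeepWalk-style sketch. The example graph, walk length, and embedding dimension below are arbitrary choices for illustration and are not taken from the book.

```python
# Minimal DeepWalk-style sketch: sample random walks over a graph and
# train a skip-gram model on them, treating walks as "sentences".
# The graph and all hyperparameters are illustrative choices.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()  # small example graph

def random_walk(graph, start, length):
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]  # Word2Vec expects string tokens

walks = [random_walk(G, node, length=10)
         for _ in range(20) for node in G.nodes()]

# Skip-gram (sg=1) over the walks yields one vector per node.
model = Word2Vec(walks, vector_size=32, window=5, min_count=0, sg=1, epochs=5)
print(model.wv[str(0)].shape)  # embedding of node 0 -> (32,)
```

Nearby nodes tend to co-occur in walks, so their vectors end up close together, which is the intuition behind this family of embedding methods.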




Graph Neural Networks: Foundations, Frontiers, and Applications


Book Description

Deep learning models are at the core of artificial intelligence research today. It is well known that deep learning techniques, while disruptive for Euclidean data such as images and for sequence data such as text, are not immediately applicable to graph-structured data. This gap has driven a wave of research on deep learning for graphs, including graph representation learning, graph generation, and graph classification. The new neural network architectures for graph-structured data (graph neural networks, or GNNs) have performed remarkably on these tasks, as demonstrated by applications in social networks, bioinformatics, and medical informatics. Despite these successes, GNNs still face many challenges, ranging from foundational methodologies to the theoretical understanding of the power of graph representation learning. This book provides a comprehensive introduction to GNNs. The first part discusses the goals of graph representation learning and then reviews the history, current developments, and future directions of GNNs. The second part presents and reviews fundamental methods and theories concerning GNNs, while the third part describes various frontiers built on GNNs. The book concludes with an overview of recent developments in a number of applications using GNNs. It is suitable for a wide audience, including undergraduate and graduate students, postdoctoral researchers, professors and lecturers, and industrial and government practitioners who are new to this area or who already have some basic background but want to learn more about advanced and promising techniques and applications.




Graph Machine Learning


Book Description

Build machine learning algorithms using graph data and efficiently exploit topological information within your models.

Key Features
- Implement machine learning techniques and algorithms on graph data
- Identify the relationships between nodes in order to make better business decisions
- Apply graph-based machine learning methods to solve real-life problems

Graph Machine Learning introduces you to a set of tools for processing network data and leveraging the power of the relations between entities for predictive, modeling, and analytics tasks. The first chapters introduce graph theory and graph machine learning, as well as the scope of their potential use. You'll then learn all you need to know about the main machine learning models for graph representation learning: their purpose, how they work, and how they can be implemented in a wide range of supervised and unsupervised learning applications. You'll build a complete machine learning pipeline, including data processing, model training, and prediction, in order to exploit the full potential of graph data. After covering the basics, you'll be taken through real-world scenarios such as extracting data from social networks, text analytics and natural language processing (NLP) using graphs, and financial transaction systems on graphs. You'll also learn how to build and scale out data-driven applications for graph analytics to store, query, and process network information, and explore the latest trends in graphs. By the end of this machine learning book, you will have learned the essential concepts of graph theory and the algorithms and techniques used to build successful machine learning applications.

What you will learn
- Write Python scripts to extract features from graphs (a minimal sketch follows this description)
- Distinguish between the main graph representation learning techniques
- Extract data from social networks, financial transaction systems, text, and more
- Implement the main unsupervised and supervised graph embedding techniques
- Get to grips with shallow embedding methods, graph neural networks, graph regularization methods, and more
- Deploy and scale out your application seamlessly

Who this book is for
This book is for data scientists, data analysts, graph analysts, and graph professionals who want to leverage the information embedded in the connections and relations between data points to boost their analysis and model performance using machine learning. It will also be useful for machine learning developers and anyone who wants to build ML-driven graph databases. A beginner-level understanding of graph databases and graph data is required, alongside a solid understanding of ML basics. You'll also need intermediate-level Python programming knowledge to get started with this book.
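As a companion to the "extract features from graphs" item above, here is a minimal sketch of node-level feature extraction in Python. The example graph and the particular features computed are illustrative choices, not code from the book.

```python
# Minimal sketch of node-level feature extraction with networkx.
# The graph and the chosen features are illustrative, not from the book.
import networkx as nx
import pandas as pd

G = nx.les_miserables_graph()  # example co-occurrence network

features = pd.DataFrame({
    "degree": dict(G.degree()),
    "clustering": nx.clustering(G),
    "pagerank": nx.pagerank(G),
    "betweenness": nx.betweenness_centrality(G),
})
print(features.head())  # one row of structural features per node
```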




Deep Learning on Graphs


Book Description

A comprehensive text on foundations and techniques of graph neural networks with applications in NLP, data mining, vision and healthcare.




Introduction to Graph Neural Networks


Book Description

Graphs are useful data structures in complex real-life applications such as modeling physical systems, learning molecular fingerprints, controlling traffic networks, and recommending friends in social networks. However, these tasks require dealing with non-Euclidean graph data that contains rich relational information between elements and cannot be well handled by traditional deep learning models (e.g., convolutional neural networks (CNNs) or recurrent neural networks (RNNs)). Nodes in graphs usually contain useful feature information that cannot be well addressed by most unsupervised representation learning methods (e.g., network embedding methods). Graph neural networks (GNNs) are designed to combine feature information and graph structure to learn better representations on graphs via feature propagation and aggregation. Due to their convincing performance and high interpretability, GNNs have recently become a widely applied graph analysis tool. This book provides a comprehensive introduction to the basic concepts, models, and applications of graph neural networks. It starts with an introduction to the vanilla GNN model. Several variants of the vanilla model are then introduced, such as graph convolutional networks, graph recurrent networks, graph attention networks, graph residual networks, and several general frameworks. Variants for different graph types and advanced training methods are also covered. As for the applications of GNNs, the book categorizes them into structural, non-structural, and other scenarios, and then introduces several typical models for solving these tasks. Finally, the closing chapters provide GNN open resources and an outlook on several future directions.
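The "feature propagation and aggregation" mentioned above can be sketched as a single graph-convolution-style layer. The symmetric normalization and ReLU nonlinearity below follow the common GCN formulation and are an illustrative choice, not code from the book; the sizes and weights are arbitrary.

```python
# One graph-convolution-style layer in numpy: neighbors' features are
# aggregated through a normalized adjacency matrix, then transformed.
# Sizes, weights, and the ReLU nonlinearity are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],      # toy 4-node adjacency matrix
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))      # node features: 4 nodes, 8 dims
W = rng.normal(size=(8, 16))     # learnable weights (here random)

A_hat = A + np.eye(4)                         # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalization

H = np.maximum(A_norm @ X @ W, 0)             # propagate, aggregate, ReLU
print(H.shape)  # (4, 16): new representation per node
```

Stacking layers of this form lets information from multi-hop neighborhoods reach each node, which is the core idea behind the GNN variants the book surveys.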




Network Embedding


Book Description

Many machine learning algorithms require real-valued feature vectors of data instances as inputs. By projecting data into vector spaces, representation learning techniques have achieved promising performance in many areas such as computer vision and natural language processing. There is also a need to learn representations for discrete relational data, namely networks or graphs. Network embedding (NE) aims at learning vector representations for each node or vertex in a network to encode its topological structure. Due to its convincing performance and efficiency, NE has been widely applied in many network applications such as node classification and link prediction. This book is a comprehensive introduction to the basic concepts, models, and applications of network representation learning (NRL), and to the background and rise of network embeddings. It traces the development of NE techniques by presenting several representative methods on general graphs, as well as a unified NE framework based on matrix factorization. Afterward, it presents variants of NE that use additional information, namely NE for graphs with node attributes/contents/labels, and variants with different characteristics, namely NE for community-structured/large-scale/heterogeneous graphs. Further, the book introduces different applications of NE such as recommendation and information diffusion prediction. Finally, it summarizes the methods and applications and looks ahead to future directions.
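The matrix-factorization view of network embedding mentioned above can be illustrated with a truncated SVD of the adjacency matrix. Factorizing the raw adjacency (rather than, say, a proximity or PMI-style matrix) is a simplifying assumption made only for this sketch.

```python
# Minimal sketch of matrix-factorization-based network embedding:
# factorize the adjacency matrix and keep the top-d singular directions
# as node vectors. Factorizing the raw adjacency is a simplification;
# many NE methods factorize proximity or PMI-style matrices instead.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)               # dense adjacency matrix

d = 8                                  # embedding dimension (illustrative)
U, S, Vt = np.linalg.svd(A)
embeddings = U[:, :d] * np.sqrt(S[:d]) # one d-dimensional vector per node
print(embeddings.shape)                # (34, 8)
```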




Representation Learning for Natural Language Processing


Book Description

This open access book provides an overview of the recent advances in representation learning theory, algorithms and applications for natural language processing (NLP). It is divided into three parts. Part I presents the representation learning techniques for multiple language entries, including words, phrases, sentences and documents. Part II then introduces the representation techniques for those objects that are closely related to NLP, including entity-based world knowledge, sememe-based linguistic knowledge, networks, and cross-modal entries. Lastly, Part III provides open resource tools for representation learning techniques, and discusses the remaining challenges and future research directions. The theories and algorithms of representation learning presented can also benefit other related domains such as machine learning, social network analysis, semantic Web, information retrieval, data mining and computational biology. This book is intended for advanced undergraduate and graduate students, post-doctoral fellows, researchers, lecturers, and industrial engineers, as well as anyone interested in representation learning and natural language processing.




Heterogeneous Graph Representation Learning and Applications


Book Description

Representation learning in heterogeneous graphs (HGs) aims to provide a meaningful vector representation for each node so as to facilitate downstream applications such as link prediction, personalized recommendation, and node classification. This task, however, is challenging not only because of the need to incorporate heterogeneous structural (graph) information consisting of multiple types of nodes and edges, but also because of the need to consider heterogeneous attributes or types of content (e.g., text or images) associated with each node. Although considerable advances have been made in homogeneous (and heterogeneous) graph embedding, attributed graph embedding, and graph neural networks, few methods are capable of simultaneously and effectively taking into account heterogeneous structural (graph) information as well as the heterogeneous content information of each node. In this book, we provide a comprehensive survey of current developments in HG representation learning. More importantly, we present the state of the art in this field, including theoretical models and real applications that have been showcased at top conferences and in top journals, such as TKDE, KDD, WWW, IJCAI, and AAAI. The book has two major objectives: (1) to provide researchers with an understanding of the fundamental issues and a good point of departure for working in this rapidly expanding field, and (2) to present the latest research on applying heterogeneous graphs to model real systems and on learning the structural features of interaction systems. To the best of our knowledge, it is the first book to summarize the latest developments and present cutting-edge research on heterogeneous graph representation learning. To gain the most from it, readers should have a basic grasp of computer science, data mining, and machine learning.
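One concrete ingredient of many heterogeneous-graph embedding methods is the metapath-guided random walk, in which each step is restricted to neighbors of the type prescribed by the metapath. The toy graph, node types, and author-paper metapath below are hypothetical and only illustrate the idea; they are not taken from the book.

```python
# Sketch of a metapath-guided random walk on a heterogeneous graph:
# at each step, only neighbors whose type matches the next entry of the
# metapath are candidates. Graph, node types, and the author-paper
# metapath are hypothetical examples, not taken from the book.
import random
import networkx as nx

G = nx.Graph()
node_type = {"a1": "author", "a2": "author", "p1": "paper", "p2": "paper"}
G.add_edges_from([("a1", "p1"), ("a2", "p1"), ("a2", "p2")])

def metapath_walk(graph, start, metapath, length):
    walk = [start]
    step = 0
    while len(walk) < length:
        next_type = metapath[(step + 1) % len(metapath)]
        candidates = [n for n in graph.neighbors(walk[-1])
                      if node_type[n] == next_type]
        if not candidates:
            break
        walk.append(random.choice(candidates))
        step += 1
    return walk

# Alternate between author and paper nodes (an A-P-A-P-... walk).
print(metapath_walk(G, "a1", ["author", "paper"], length=5))
```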




Semi-Supervised Learning


Book Description

A comprehensive review of an area of machine learning that deals with the use of unlabeled data in classification problems: state-of-the-art algorithms, a taxonomy of the field, applications, benchmark experiments, and directions for future research. In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no labeled data are given). Interest in SSL has increased in recent years, particularly because of application domains in which unlabeled data are plentiful, such as images, text, and bioinformatics. This first comprehensive overview of SSL presents state-of-the-art algorithms, a taxonomy of the field, selected applications, benchmark experiments, and perspectives on ongoing and future research. Semi-Supervised Learning first presents the key assumptions and ideas underlying the field: smoothness, cluster or low-density separation, manifold structure, and transduction. The core of the book is the presentation of SSL methods, organized according to algorithmic strategies. After an examination of generative models, the book describes algorithms that implement the low-density separation assumption, graph-based methods, and algorithms that perform two-step learning. The book then discusses SSL applications and offers guidelines for SSL practitioners by analyzing the results of extensive benchmark experiments. Finally, the book looks at interesting directions for SSL research. The book closes with a discussion of the relationship between semi-supervised learning and transduction.
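The graph-based SSL methods mentioned above can be illustrated with a minimal label-propagation sketch. The toy graph, seed labels, and iteration count below are arbitrary choices for illustration, not an implementation from the book.

```python
# Minimal label-propagation sketch (a graph-based SSL method): labels on
# a few seed nodes are iteratively spread to neighbors through a
# row-normalized adjacency matrix, with seed labels clamped each round.
# The toy graph, seeds, and iteration count are illustrative choices.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],     # 5-node toy graph: two loose clusters
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

P = A / A.sum(axis=1, keepdims=True)      # row-normalized transition matrix

Y = np.zeros((5, 2))                      # 2 classes, one-hot per node
Y[0] = [1, 0]                             # node 0 labeled class 0 (seed)
Y[4] = [0, 1]                             # node 4 labeled class 1 (seed)
labeled = [0, 4]

F = Y.copy()
for _ in range(50):
    F = P @ F                             # propagate labels to neighbors
    F[labeled] = Y[labeled]               # clamp the known seed labels

print(F.argmax(axis=1))                   # predicted class per node
```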