Efficient Large-Scale Machine Learning Algorithms for Genomic Sequences


Book Description

High-throughput sequencing (HTS) has led to many breakthroughs in basic and translational biology research. With this technology, researchers can interrogate whole genomes at single-nucleotide resolution. The large volume of data generated by HTS experiments necessitates the development of novel algorithms that can efficiently process these data. At the advent of HTS, several rudimentary methods were proposed. Often, these methods applied compromising strategies such as discarding a majority of the data or reducing the complexity of the models. This thesis focuses on the development of machine learning methods for efficiently capturing complex patterns from high volumes of HTS data.

First, we focus on de novo motif discovery, a popular sequence analysis method that predates HTS. Given multiple input sequences, the goal of motif discovery is to identify one or more candidate motifs, which are biopolymer sequence patterns conjectured to have biological significance. In the context of transcription factor (TF) binding, motifs may represent the sequence binding preferences of proteins. Traditional motif discovery algorithms do not scale well with the number of input sequences, which can make motif discovery intractable for the volume of data generated by HTS experiments. One common solution is to perform motif discovery on only a small fraction of the sequences. Scalable algorithms that simplify the motif models are popular alternatives. Our approach is a stochastic method that is scalable and retains the modeling power of past methods.

Second, we leverage deep learning methods to annotate the pathogenicity of genetic variants. Deep learning is a class of machine learning algorithms concerned with deep neural networks (DNNs). DNNs use a cascade of layers of nonlinear processing units for feature extraction and transformation, with each layer using the output of the previous layer as its input. Like our novel motif discovery algorithm, artificial neural networks can be trained efficiently in a stochastic manner. Using a large labeled dataset comprising tens of millions of pathogenic and benign genetic variants, we trained a deep neural network to discriminate between the two categories. Previous methods either focused only on variants lying in protein-coding regions, which cover less than 2% of the human genome, or applied simpler models such as linear support vector machines, which usually cannot capture non-linear patterns the way deep neural networks can.

Finally, we discuss convolutional (CNN) and recurrent (RNN) neural networks, variations of DNNs that are especially well suited to studying sequential data. Specifically, we stacked a bidirectional recurrent layer on top of a convolutional layer to form a hybrid model. The model accepts raw DNA sequences as input and predicts chromatin markers, including histone modifications, open chromatin, and transcription factor binding. In this application, the convolutional kernels are analogous to motifs, so training the model also effectively performs motif discovery. Compared to a pure convolutional model, the hybrid model requires fewer free parameters to achieve superior performance. We conjecture that the recurrent layer allows our model to capture spatial and orientation dependencies among motifs better than a pure convolutional model can.
With some modifications to this framework, the model can accept cell type-specific features, such as gene expression and open chromatin (DNase I cleavage), to accurately predict transcription factor binding across cell types. We submitted our model to the ENCODE-DREAM in vivo Transcription Factor Binding Site Prediction Challenge, where it was among the top-performing models. We implemented several novel heuristics that significantly reduced the training time and computational overhead. These heuristics were instrumental in meeting the Challenge deadlines and in making the method more accessible to the research community.

HTS has already transformed the landscape of basic and translational research, proving itself a mainstay of modern biological research. As more data are generated and new assays are developed, there will be an increasing need for computational methods that integrate the data to yield new biological insights. We have only begun to scratch the surface of what is possible from both an experimental and a computational perspective. Thus, further development of versatile and efficient statistical models is crucial to maintaining the momentum of new biological discoveries.
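The hybrid architecture described above can be sketched in a few lines of code. The following is a minimal illustration, not the thesis implementation: a one-dimensional convolution over one-hot-encoded DNA, whose kernels play the role of motif detectors, followed by a bidirectional LSTM over the motif activations and a dense head that outputs one probability per chromatin marker. All layer sizes, the pooling width, and the number of output markers are illustrative assumptions.

```python
# Minimal sketch of a hybrid CNN + bidirectional RNN model for chromatin
# marker prediction from one-hot DNA. Hyperparameters are illustrative.
import torch
import torch.nn as nn

class HybridChromatinModel(nn.Module):
    def __init__(self, num_markers=919, num_kernels=320, kernel_size=26, rnn_units=128):
        super().__init__()
        # Convolution over one-hot DNA (4 channels: A, C, G, T); kernels ~ motifs.
        self.conv = nn.Conv1d(4, num_kernels, kernel_size)
        self.pool = nn.MaxPool1d(13)
        # Bidirectional LSTM models spatial/orientation dependencies among motif hits.
        self.rnn = nn.LSTM(num_kernels, rnn_units, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * rnn_units, 512),
            nn.ReLU(),
            nn.Linear(512, num_markers),
        )

    def forward(self, x):
        # x: (batch, 4, sequence_length), one-hot encoded DNA
        h = torch.relu(self.conv(x))
        h = self.pool(h)                  # (batch, kernels, reduced_length)
        h = h.transpose(1, 2)             # (batch, reduced_length, kernels) for the RNN
        h, _ = self.rnn(h)
        h = h.mean(dim=1)                 # average over positions
        return torch.sigmoid(self.head(h))  # one probability per chromatin marker

# Example: score a batch of two 1,000 bp one-hot sequences.
model = HybridChromatinModel()
dna = torch.zeros(2, 4, 1000)
print(model(dna).shape)  # torch.Size([2, 919])
```

A cell type-aware variant of this sketch would concatenate cell type-specific features, such as gene expression or DNase I signal, to the pooled representation before the dense head.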




Large-scale Genome Sequence Processing


Book Description

Efficient computer programs have made it possible to elucidate and analyze large-scale genomic sequences. Fundamental tasks, such as the assembly of numerous whole-genome shotgun fragments, the alignment of complementary DNA sequences with a long genome, and the design of gene-specific primers or oligomers, require efficient algorithms and state-of-the-art implementation techniques. This textbook emphasizes basic software implementation techniques for processing large-scale genome sequences and provides executable sample programs.




Machine Learning for Large-scale Genomics


Book Description

Genomic malformations are believed to be the driving factors of many diseases. Therefore, understanding the intrinsic mechanisms underlying the genome and informing clinical practice have become two important missions of large-scale genomic research. Recently, high-throughput molecular data have provided abundant information about the whole genome and have popularized computational tools in genomics. However, traditional machine learning methodologies often suffer from strong limitations when dealing with high-throughput genomic data, because the latter are usually very high dimensional, highly heterogeneous, and can show complicated nonlinear effects. In this thesis, we present five new algorithms or models to address these challenges, each of which is applied to a specific genomic problem.

Project 1 focuses on model selection in cancer diagnosis. We develop an efficient algorithm (ADMM-ENSVM) for the Elastic Net Support Vector Machine, which achieves simultaneous variable selection and max-margin classification. On a colon cancer diagnosis dataset, ADMM-ENSVM shows advantages over other SVM algorithms in terms of diagnostic accuracy, feature selection ability, and computational efficiency.

Project 2 focuses on model selection in gene correlation analysis. We develop an efficient algorithm (SBLVGG), using a methodology similar to that of ADMM-ENSVM, for the Latent Variable Gaussian Graphical Model (LVGG). LVGG models the marginal concentration matrix of observed variables as a combination of a sparse matrix and a low-rank one. Evaluated on a microarray dataset containing 6,316 genes, SBLVGG is notably faster than the state-of-the-art LVGG solver and shows that most of the correlation among genes can be effectively explained by only tens of latent factors.

Project 3 focuses on ensemble learning in cancer survival analysis. We develop a gradient boosting model (GBMCI), which does not explicitly assume particular forms of hazard functions but instead trains an ensemble of regression trees to approximately optimize the concordance index. We benchmark the performance of GBMCI against several popular survival models on a large-scale breast cancer prognosis dataset. GBMCI consistently outperforms other methods across a number of feature representations, which are heterogeneous and contain missing values.

Project 4 focuses on deep learning in gene expression inference (GEIDN). GEIDN is a large-scale neural network that can infer ~21k target genes jointly from ~1k landmark genes and can naturally capture hierarchical nonlinear interactions among genes. We deploy deep learning techniques (dropout, momentum training, GPU computing, etc.) to train GEIDN. On a dataset of ~129k complete human transcriptomes, GEIDN outperforms both k-nearest-neighbor regression and linear regression in predicting >99.96% of the target genes. Moreover, increased network scale helps to improve GEIDN, and additional training data benefits GEIDN more than the other methods.

Project 5 focuses on deep learning in annotating coding and noncoding genetic variants (DANN). DANN is a deep neural network that differentiates evolutionarily derived alleles from simulated ones using 949 highly heterogeneous features, and it can capture nonlinear relationships among the features. We train DANN with the same deep learning techniques used for GEIDN. DANN achieves an 18.90% relative reduction in the error rate and a 14.52% relative increase in the area under the curve over CADD, a state-of-the-art algorithm for annotating genetic variants based on a linear SVM.
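The elastic-net SVM of Project 1 is easy to illustrate even without the ADMM solver the thesis develops. The sketch below fits the same kind of objective, a hinge loss with a mixed L1/L2 penalty, using scikit-learn's stochastic solver on synthetic data; the data shape and hyperparameters are illustrative assumptions, and this is not the ADMM-ENSVM algorithm itself.

```python
# Elastic-net SVM sketch: hinge loss (max-margin) + mixed L1/L2 penalty
# (feature selection), fit with scikit-learn's SGD solver on toy data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))      # e.g. 200 samples x 500 gene features
y = np.sign(X[:, :5].sum(axis=1))    # only the first 5 features are informative

clf = SGDClassifier(loss="hinge", penalty="elasticnet",
                    alpha=0.01, l1_ratio=0.5, max_iter=2000, tol=1e-4)
clf.fit(X, y)

selected = np.flatnonzero(clf.coef_[0])
print(f"non-zero coefficients: {selected.size} of {X.shape[1]}")
```

Because the L1 component of the penalty zeroes out uninformative coefficients, the surviving non-zero weights act as the selected features, which is the simultaneous variable selection and max-margin classification behaviour described above.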




Artificial Intelligence


Book Description

Artificial intelligence (AI) is taking on an increasingly important role in our society today. In the early days, machines performed only manual tasks; nowadays they extend their capabilities to cognitive tasks as well. AI is now poised to make a major contribution to medical and biological applications. From medical equipment to disease diagnosis and prediction to image and video processing, among other areas, AI has proven to be a field with great potential. The ability of AI to make informed decisions, to learn and perceive the environment, and to predict certain behaviors, among its many other skills, makes it of paramount importance in today's world. This book discusses and examines AI applications in medicine and biology, as well as challenges and opportunities in this fascinating area.




Machine Learning Methods for Multi-Omics Data Integration


Book Description

The advancement of biomedical engineering has enabled the generation of multi-omics data through the development of high-throughput technologies, such as next-generation sequencing, mass spectrometry, and microarrays. Large-scale data sets for multiple omics platforms, including genomics, transcriptomics, proteomics, and metabolomics, have become more accessible and cost-effective over time. Integrating multi-omics data has become increasingly important in many research fields, such as bioinformatics, genomics, and systems biology. This integration allows researchers to understand complex interactions between biological molecules and pathways, and it enables a comprehensive understanding of complex biological systems, leading to new insights into disease mechanisms, drug discovery, and personalized medicine. Still, integrating such heterogeneous data types into a single learning model comes with challenges, and learning algorithms have been vital in analyzing and integrating these large-scale heterogeneous data sets. This book provides an overview of the latest multi-omics technologies, machine learning techniques for data integration, and multi-omics databases for validation. It covers both supervised and unsupervised learning techniques, including standard classifiers, deep learning, tensor factorization, ensemble learning, and clustering, among others. The book categorizes the different levels of integration, ranging from early- through middle- to late-stage integration among multi-view models. The underlying models target different objectives, such as knowledge discovery, pattern recognition, disease-related biomarkers, and validation tools for multi-omics data. Finally, the book emphasizes practical applications and case studies, making it an essential resource for researchers and practitioners looking to apply machine learning to their multi-omics data sets. The book covers data preprocessing, feature selection, and model evaluation, providing readers with a practical guide to implementing machine learning techniques on various multi-omics data sets.
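To make the integration levels concrete, the sketch below contrasts early integration, which concatenates the omics feature matrices before fitting a single model, with late integration, which fits one model per omics layer and averages their predictions. The synthetic data, feature dimensions, and the choice of logistic regression are illustrative assumptions rather than a recipe from the book.

```python
# Early vs. late multi-omics integration, sketched with scikit-learn on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100
genomics = rng.normal(size=(n, 50))           # e.g. mutation burden features
transcriptomics = rng.normal(size=(n, 200))   # e.g. expression features
y = (genomics[:, 0] + transcriptomics[:, 0] > 0).astype(int)

# Early integration: one model over the concatenated feature space.
early = LogisticRegression(max_iter=1000).fit(
    np.hstack([genomics, transcriptomics]), y)

# Late integration: one model per omics layer, predictions averaged.
per_layer = [LogisticRegression(max_iter=1000).fit(X, y)
             for X in (genomics, transcriptomics)]
late_scores = np.mean([m.predict_proba(X)[:, 1]
                       for m, X in zip(per_layer, (genomics, transcriptomics))], axis=0)

print(early.predict_proba(np.hstack([genomics, transcriptomics]))[:3, 1])
print(late_scores[:3])
```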




Genomics at the Nexus of AI, Computer Vision, and Machine Learning


Book Description

The book provides a comprehensive understanding of cutting-edge research and applications at the intersection of genomics and advanced AI techniques, and serves as an essential resource for researchers, bioinformaticians, and practitioners looking to leverage genomics data for AI-driven insights and innovations. The book encompasses a wide range of topics, starting with an introduction to genomics data and its unique characteristics. Each chapter unfolds a unique facet, delving into the collaborative potential and challenges that arise from advanced technologies. It explores image analysis techniques specifically tailored for genomic data. It also delves into deep learning, showcasing the power of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) in genomic image analysis and sequence analysis. Readers will gain practical knowledge on how to apply deep learning techniques to unlock patterns and relationships in genomics data. Transfer learning, a popular technique in AI, is explored in the context of genomics, demonstrating how knowledge from pre-trained models can be effectively transferred to genomic datasets, leading to improved performance and efficiency. Also covered are domain adaptation techniques specifically tailored to genomics data. The book explores how genomics principles can inspire the design of AI algorithms, including genetic algorithms, evolutionary computing, and genetic programming. Additional chapters delve into the interpretation of genomic data using AI and ML models, including techniques for feature importance and visualization, as well as explainable AI methods that aid in understanding the inner workings of the models. The applications of genomics in AI span various domains, and the book explores AI-driven drug discovery and personalized medicine, genomic data analysis for disease diagnosis and prognosis, and the advancement of AI-enabled genomic research. Lastly, the book addresses the ethical considerations in integrating genomics with AI, computer vision, and machine learning.

Audience: The book will appeal to biomedical and computer/data scientists and researchers working in genomics and bioinformatics seeking to leverage AI, computer vision, and machine learning for enhanced analysis and discovery; healthcare professionals advancing personalized medicine and patient care; and industry leaders and decision-makers in biotechnology, pharmaceuticals, and healthcare industries seeking strategic insights into the integration of genomics and advanced technologies.




The Burrows-Wheeler Transform


Book Description

The Burrows-Wheeler Transform is one of the best lossless compression methods available. It is an intriguing — even puzzling — approach to squeezing redundancy out of data, it has an interesting history, and it has applications well beyond its original purpose as a compression method. It is a relatively late addition to the compression canon, and hence our motivation to write this book, looking at the method in detail, bringing together the threads that led to its discovery and development, and speculating on what future ideas might grow out of it. The book is aimed at a wide audience, ranging from those interested in learning a little more than the short descriptions of the BWT given in standard texts, through to those whose research is building on what we know about compression and pattern matching. The first few chapters are a careful description suitable for readers with an elementary computer science background (and these chapters have been used in undergraduate courses), but later chapters collect a wide range of detailed developments, some of which are built on advanced concepts from a range of computer science topics (for example, some of the advanced material has been used in a graduate computer science course in string algorithms). Some of the later explanations require some mathematical sophistication, but most should be accessible to those with a broad background in computer science.
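For readers who want a concrete picture before diving in, the transform itself can be written in a few lines. The sketch below uses the naive construction that sorts all rotations of the input; practical implementations typically build the transform from a suffix array instead. The '$' sentinel is assumed not to occur in the input.

```python
# Naive Burrows-Wheeler Transform and its inverse (illustration only).
def bwt(text):
    text = text + "$"  # unique end-of-string sentinel
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rotation[-1] for rotation in rotations)

def inverse_bwt(transformed):
    # Repeatedly prepend the transform column and re-sort; the row ending
    # in the sentinel is the original text.
    table = [""] * len(transformed)
    for _ in range(len(transformed)):
        table = sorted(c + row for c, row in zip(transformed, table))
    return next(row for row in table if row.endswith("$"))[:-1]

print(bwt("banana"))           # annb$aa
print(inverse_bwt("annb$aa"))  # banana
```

Sorting the rotations groups similar contexts together, which is why the output column is highly compressible and why the transform also serves as the basis of pattern-matching indexes.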




Machine Learning and Data Mining


Book Description

Master the new computational tools to get the most out of your information system. This practical guide, the first to clearly outline the situation for the benefit of engineers and scientists, provides a straightforward introduction to basic machine learning and data mining methods, covering the analysis of numerical, text, and sound data.




Algorithms on Strings, Trees, and Sequences


Book Description

String algorithms are a traditional area of study in computer science. In recent years their importance has grown dramatically with the huge increase of electronically stored text and of molecular sequence data (DNA or protein sequences) produced by various genome projects. This book is a general text on computer algorithms for string processing. In addition to pure computer science, the book contains extensive discussions on biological problems that are cast as string problems, and on methods developed to solve them. It emphasises the fundamental ideas and techniques central to today's applications. New approaches to this complex material simplify methods that up to now have been for the specialist alone. With over 400 exercises to reinforce the material and develop additional topics, the book is suitable as a text for graduate or advanced undergraduate students in computer science, computational biology, or bio-informatics. Its discussion of current algorithms and techniques also makes it a reference for professionals.




Genome-Scale Algorithm Design


Book Description

High-throughput sequencing has revolutionised the field of biological sequence analysis. Its application has enabled researchers to address important biological questions, often for the first time. This book provides an integrated presentation of the fundamental algorithms and data structures that power modern sequence analysis workflows. The topics covered range from the foundations of biological sequence analysis (alignments and hidden Markov models), to classical index structures (k-mer indexes, suffix arrays and suffix trees), Burrows–Wheeler indexes, graph algorithms and a number of advanced omics applications. The chapters feature numerous examples, algorithm visualisations, exercises and problems, each chosen to reflect the steps of large-scale sequencing projects, including read alignment, variant calling, haplotyping, fragment assembly, alignment-free genome comparison, transcript prediction and analysis of metagenomic samples. Each biological problem is accompanied by precise formulations, providing graduate students and researchers in bioinformatics and computer science with a powerful toolkit for the emerging applications of high-throughput sequencing.
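As a small taste of the classical index structures mentioned above, the sketch below builds a k-mer index, a hash table mapping every k-mer of a reference to its positions, and uses it to find exact-match seeds for a read, the first step of seed-and-extend read alignment. The toy reference and the value of k are illustrative; production aligners rely on suffix arrays or Burrows-Wheeler (FM) indexes for memory efficiency.

```python
# Minimal k-mer index for exact-match seeding (illustration only).
from collections import defaultdict

def build_kmer_index(reference, k):
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def seed_positions(read, index, k):
    """Return (read_offset, reference_position) exact-match seeds."""
    return [(j, pos)
            for j in range(len(read) - k + 1)
            for pos in index.get(read[j:j + k], [])]

reference = "ACGTACGTGACCA"
index = build_kmer_index(reference, k=4)
print(seed_positions("TACGTG", index, k=4))  # [(0, 3), (1, 0), (1, 4), (2, 5)]
```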