Combinatorial Library Design and Evaluation


Book Description

This text traces developments in rational drug discovery and combinatorial library design with contributions from 50 leading scientists in academia and industry who offer coverage of basic principles, design strategies, methodologies, software tools and algorithms, and applications. It outlines the fundamentals of pharmacophore modelling and 3D Quantitative Structure-Activity Relationships (QSAR), classical QSAR, and target protein structure-based design methods.




Combinatorial Library


Book Description

The continued successes of large- and small-scale genome sequencing projects are increasing the number of genomic targets available for drug discovery at an exponential rate. In addition, a better understanding of molecular mechanisms—such as apoptosis, signal transduction, telomere control of chromosomes, cytoskeletal development, modulation of stress-related proteins, and cell surface display of antigens by the major histocompatibility complex molecules—has improved the probability of identifying the most promising genomic targets to counteract disease. As a result, developing and optimizing lead candidates for these targets and rapidly moving them into clinical trials is now a critical juncture in pharmaceutical research. Recent advances in combinatorial library synthesis, purification, and analysis techniques are not only increasing the numbers of compounds that can be tested against each specific genomic target, but are also speeding and improving the overall processes of lead discovery and optimization. There are two main approaches to combinatorial library production: parallel chemical synthesis and split-and-mix chemical synthesis. These approaches can utilize solid- or solution-based synthetic methods, alone or in combination, although the majority of combinatorial library synthesis is still done on solid support. In a parallel synthesis, all the products are assembled separately in their own reaction vessels or microtiter plates. The array of rows and columns enables researchers to organize the building blocks to be combined, and provides an easy way to identify compounds in a particular well.
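
As a rough illustration of the plate layout described above, the following Python sketch enumerates a small parallel library by pairing one set of building blocks across plate rows with another across columns. The building-block names and plate dimensions are hypothetical, chosen only to show how a well position maps back to a specific combination.

```python
from itertools import product

# Hypothetical building-block sets for a parallel synthesis:
# one set is assigned to plate rows, the other to plate columns.
acids = ["benzoic", "acetic", "propanoic", "butanoic"]          # rows A-D
amines = ["aniline", "methylamine", "ethylamine",
          "propylamine", "butylamine", "hexylamine"]            # columns 1-6

ROW_LABELS = "ABCDEFGH"

def plate_layout(row_blocks, col_blocks):
    """Map each (row, column) well to the pair of building blocks combined there."""
    layout = {}
    for (r, acid), (c, amine) in product(enumerate(row_blocks), enumerate(col_blocks)):
        well = f"{ROW_LABELS[r]}{c + 1}"
        layout[well] = (acid, amine)
    return layout

layout = plate_layout(acids, amines)
print(layout["B3"])   # -> ('acetic', 'ethylamine'): the well identifies the combination
print(len(layout))    # -> 24 products, one per well
```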




Chemogenomics and Chemical Genetics


Book Description

Biological and chemical sciences have undergone an unprecedented transformation, reflected by the widespread use of parallel and automated technologies in key fields such as genome sequencing, DNA chips, nanoscale functional biology, and combinatorial chemistry. It is now possible to generate and store from tens of thousands to millions of new small molecules, based on enhanced chemical synthesis strategies. Automated screening of small molecules is one of the technologies that has revolutionized biology, first developed for the pharmaceutical industry and recently introduced in academic laboratories. High-throughput and high-content screening allow the identification of bioactive compounds in collections of molecules (chemical libraries) that act on biological targets defined at various organisational scales, from proteins to cells to complete organisms. These bioactive molecules can be therapeutic drug candidates, molecules for biotech, diagnostic or agronomic applications, or tools for basic research. Because screening approaches handle large amounts of biological (genomic and post-genomic), chemical, and experimental information, they cannot be envisaged without electronic storage and mathematical treatment of the data. “Chemogenomics and Chemical Genetics” is an introductory manual presenting the methods and concepts that form the basis of this recent discipline. The book is intended for biologists, chemists, and computer scientists who are new to the field, and is organized in brief, illustrated chapters with practical examples. Clear definitions of biological, chemical, and IT concepts are given in a glossary section to help readers who are not familiar with one of these disciplines. "Chemogenomics and Chemical Genetics" should therefore be helpful for students (from Bachelor's degree level), technological platform engineers, and researchers in biology, chemistry, bioinformatics, and cheminformatics, in both biotech and academic laboratories.
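
Since the passage stresses that screening data require electronic storage and mathematical treatment, here is a minimal Python sketch of one common treatment step: calling hits from a plate of screening readouts with a robust Z-score. The readout values, cutoff, and compound identifiers are invented for illustration only.

```python
import statistics

# Hypothetical raw readouts (e.g., % residual activity) for one screening plate.
readouts = {
    "CMPD-001": 98.0, "CMPD-002": 101.5, "CMPD-003": 12.4,
    "CMPD-004": 95.2, "CMPD-005": 37.8,  "CMPD-006": 99.1,
    "CMPD-007": 102.3, "CMPD-008": 96.7,
}

def call_hits(values, z_cutoff=-3.0):
    """Flag compounds whose robust Z-score (median/MAD based) falls below the cutoff."""
    med = statistics.median(values.values())
    mad = statistics.median(abs(v - med) for v in values.values())
    scale = 1.4826 * mad or 1.0           # guard against a perfectly flat plate
    return {cid: round((v - med) / scale, 2)
            for cid, v in values.items()
            if (v - med) / scale <= z_cutoff}

print(call_hits(readouts))   # compounds with strongly reduced activity readouts
```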




Chemoinformatics


Book Description

In the literature, several terms are used synonymously to name the topic of this book: chem-, chemi-, or chemo-informatics. A widely recognized definition of this discipline is the one by Frank Brown from 1998 (1), who defined chemoinformatics as the combination of “all the information resources that a scientist needs to optimize the properties of a ligand to become a drug.” In Brown’s definition, two aspects play a fundamentally important role: decision support by computational means and drug discovery, which distinguishes it from the term “chemical informatics,” which was introduced at least ten years earlier and described as the application of information technology to chemistry (not with a specific focus on drug discovery). In addition, there is of course “chemometrics,” which is generally understood as the application of statistical methods to chemical data and the derivation of relevant statistical models and descriptors (2). The pharmaceutical focus of many developments and efforts in this area—and the current popularity of gene-to-drug or similar paradigms—is further reflected by the recent introduction of such terms as “discovery informatics” (3), which takes into account that gaining knowledge from chemical data alone is not sufficient to be ultimately successful in drug discovery. Such insights are well in accord with other views that the boundaries between bio- and chemoinformatics are fluid and that these disciplines should be closely combined or merged to significantly impact biotechnology or pharmaceutical research (4).
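
To make the chemometrics definition above concrete, the following sketch fits a simple statistical model (ordinary least squares) relating two invented molecular descriptors to an invented activity value. The descriptor names, numbers, and resulting coefficients carry no chemical meaning; they only show the kind of descriptor-to-model workflow the passage refers to.

```python
import numpy as np

# Invented descriptor table: each row is a compound, columns are
# (logP, polar surface area); y holds a measured activity (e.g., pIC50).
X = np.array([
    [1.2, 45.0],
    [2.8, 30.5],
    [0.4, 88.2],
    [3.1, 25.1],
    [1.9, 60.7],
])
y = np.array([5.1, 6.4, 4.2, 6.8, 5.5])

# Add an intercept column and solve the least-squares problem y ~ X1 @ beta.
X1 = np.hstack([np.ones((X.shape[0], 1)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

print("intercept, logP coeff, PSA coeff:", np.round(beta, 3))
print("predicted activities:", np.round(X1 @ beta, 2))
```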




Computational Biochemistry and Biophysics


Book Description

Covering theoretical methods and computational techniques in biomolecular research, this book focuses on approaches for the treatment of macromolecules, including proteins, nucleic acids, and bilayer membranes. It uses concepts in free energy calculations, conformational analysis, reaction rates, and transition pathways to calculate and interpret b




The Organic Chemistry of Drug Design and Drug Action


Book Description

Standard medicinal chemistry courses and texts are organized by classes of drugs, with an emphasis on descriptions of their biological and pharmacological effects. This book represents a new approach based on physical organic chemical principles and reaction mechanisms that allow the reader to extrapolate to many related classes of drug molecules. The Second Edition reflects the significant changes in the drug industry over the past decade, and includes chapter problems and other elements that make the book more useful for course instruction.

- New edition includes new chapter problems and exercises to help students learn, plus extensive references and illustrations
- Clearly presents an organic chemist's perspective of how drugs are designed and function, incorporating the extensive changes in the drug industry over the past ten years
- Well-respected author has published over 200 articles, earned 21 patents, and invented a drug that is under consideration for commercialization




Molecular Diversity in Drug Design


Book Description

High-throughput screening and combinatorial chemistry are two of the most potent weapons ever to have been used in the discovery of new drugs. At a stroke, it seems to be possible to synthesise more molecules in a month than have previously been made in the whole of the distinguished history of organic chemistry. Furthermore, all the molecules can be screened in the same short period. However, like any weapons of immense power, these techniques must be used with care to achieve maximum impact. The costs of implementing and running high-throughput screening and combinatorial chemistry are high, as large dedicated facilities must be built and staffed. In addition, the sheer number of chemical leads generated may overwhelm the lead optimisation teams in a hail of friendly fire. Mother Nature has not entirely surrendered: the number of molecules that could be assembled from the available building blocks is so vast that synthesising them all would require more atoms than there are in the universe. In addition, the progress made by the Human Genome Project has uncovered many proteins with different functions but related binding sites, creating issues of selectivity. Advances in the new field of pharmacogenomics will produce more of these challenges. There is a real need to make high-throughput screening and combinatorial chemistry into 'smart' weapons, so that their power is not dissipated. That is the challenge for modellers, computational chemists, cheminformaticians and IT experts. In this book, we have broken down this grand challenge into key tasks.




Virtual Screening in Drug Discovery


Book Description

Virtual screening can reduce costs and increase hit rates for lead discovery by eliminating the need for robotics, reagent acquisition or production, and compound storage facilities. The increased robustness of computational algorithms and scoring functions, the availability of affordable computational power, and the potential for timely structural
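
To ground the idea of computational algorithms and scoring functions doing the filtering, here is a minimal Python sketch that ranks a compound library by a precomputed score and keeps the top fraction for experimental follow-up. The compound names and scores are invented; in a real workflow they would come from docking or a machine-learned scoring function.

```python
# Invented (compound, score) pairs; lower scores mean better predicted binding here.
scored_library = [
    ("mol-001", -9.2), ("mol-002", -6.1), ("mol-003", -8.7),
    ("mol-004", -5.4), ("mol-005", -10.3), ("mol-006", -7.8),
]

def select_virtual_hits(scored, top_fraction=0.33):
    """Rank compounds by score (ascending) and keep the best top_fraction."""
    ranked = sorted(scored, key=lambda pair: pair[1])
    n_keep = max(1, round(len(ranked) * top_fraction))
    return ranked[:n_keep]

print(select_virtual_hits(scored_library))
# -> [('mol-005', -10.3), ('mol-001', -9.2)]
```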




An Introduction to Chemoinformatics


Book Description

This book aims to provide an introduction to the major techniques of chemoinformatics. It is the first text written specifically for this field. The first part of the book deals with the representation of 2D and 3D molecular structures, the calculation of molecular descriptors and the construction of mathematical models. The second part describes other important topics including molecular similarity and diversity, the analysis of large data sets, virtual screening, and library design. Simple examples are used throughout to illustrate key concepts, supplemented with case studies from the literature.
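
Since molecular similarity is one of the topics listed above, here is a minimal Python sketch of a Tanimoto (Jaccard) similarity calculation on binary fingerprints. The fingerprints are tiny invented bit sets rather than real molecular fingerprints, which would normally be generated by a cheminformatics toolkit.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient for two fingerprints given as sets of 'on' bit positions."""
    common = len(fp_a & fp_b)
    union = len(fp_a) + len(fp_b) - common
    return common / union if union else 0.0

# Invented fingerprints: the bit positions have no structural meaning here.
aspirin_like   = {3, 17, 42, 128, 305, 811}
ibuprofen_like = {3, 17, 99, 128, 422, 811}
sugar_like     = {5, 64, 200, 512}

print(tanimoto(aspirin_like, ibuprofen_like))  # relatively similar -> 0.5
print(tanimoto(aspirin_like, sugar_like))      # dissimilar -> 0.0
```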




Soft Computing Approaches in Chemistry


Book Description

The contributions to this book cover a wide range of applications of Soft Computing to the chemical domain. The early roots of Soft Computing can be traced back to Lotfi Zadeh's work on soft data analysis [1] published in 1981. 'Soft Computing' itself became fully established about 10 years later, when the Berkeley Initiative in Soft Computing (BISC), an industrial liaison program, was put in place at the University of California, Berkeley. Soft Computing applications are characterized by their ability to:

• approximate many different kinds of real-world systems;
• tolerate imprecision, partial truth, and uncertainty; and
• learn from their environment.

Such characteristics commonly lead to a better ability to match reality than other approaches can provide, generating solutions of low cost, high robustness, and tractability. Zadeh has argued that soft computing provides a solid foundation for the conception, design, and application of intelligent systems employing its methodologies symbiotically rather than in isolation. There exists an implicit commitment to take advantage of the fusion of the various methodologies, since such a fusion can lead to combinations that may provide performance well beyond that offered by any single technique.
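
As a toy illustration of the tolerance for imprecision mentioned above, the sketch below defines a fuzzy membership function for a vague notion such as 'high potency', mapping a numeric value to a degree of membership between 0 and 1. The breakpoints are arbitrary and purely illustrative.

```python
def high_potency_membership(pic50, low=5.0, high=7.0):
    """Fuzzy membership in the set 'high potency': 0 below `low`,
    1 above `high`, and a linear ramp in between."""
    if pic50 <= low:
        return 0.0
    if pic50 >= high:
        return 1.0
    return (pic50 - low) / (high - low)

for value in (4.5, 5.5, 6.5, 7.5):
    print(value, "->", round(high_potency_membership(value), 2))
# 4.5 -> 0.0, 5.5 -> 0.25, 6.5 -> 0.75, 7.5 -> 1.0
```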