Visual Intelligence


Book Description

An engrossing guide to seeing—and communicating—more clearly from the groundbreaking course that helps FBI agents, cops, CEOs, ER docs, and others save money, reputations, and lives.

How could looking at Monet’s water lily paintings help save your company millions? How can checking out people’s footwear foil a terrorist attack? How can your choice of adjective win an argument, calm your kid, or catch a thief? In her celebrated seminar, the Art of Perception, art historian Amy Herman has trained experts from many fields to perceive and communicate better. By showing people how to look closely at images, she helps them hone their “visual intelligence,” a set of skills we all possess but few of us know how to use properly. She has spent more than a decade teaching doctors to observe patients instead of their charts, helping police officers separate facts from opinions when investigating a crime, and training professionals from the FBI, the State Department, Fortune 500 companies, and the military to recognize the most pertinent and useful information. Her lessons highlight far more than the physical objects you may be missing; they teach you how to recognize the talents, opportunities, and dangers that surround you every day. Whether you want to be more effective on the job, more empathetic toward your loved ones, or more alert to the trove of possibilities and threats all around us, this book will show you how to see what matters most to you more clearly than ever before.

Please note: this ebook contains full-color art reproductions and photographs, and color is at times essential to the observation and analysis skills discussed in the text. For the best reading experience, this ebook should be viewed on a color device.




Feynman's Lost Lecture


Book Description

"Glorious."—Wall Street Journal Rescued from obscurity, Feynman's Lost Lecture is a blessing for all Feynman followers. Most know Richard Feynman for the hilarious anecdotes and exploits in his best-selling books "Surely You're Joking, Mr. Feynman!" and "What Do You Care What Other People Think?" But not always obvious in those stories was his brilliance as a pure scientist—one of the century's greatest physicists. With this book and CD, we hear the voice of the great Feynman in all his ingenuity, insight, and acumen for argument. This breathtaking lecture—"The Motion of the Planets Around the Sun"—uses nothing more advanced than high-school geometry to explain why the planets orbit the sun elliptically rather than in perfect circles, and conclusively demonstrates the astonishing fact that has mystified and intrigued thinkers since Newton: Nature obeys mathematics. David and Judith Goodstein give us a beautifully written short memoir of life with Feynman, provide meticulous commentary on the lecture itself, and relate the exciting story of their effort to chase down one of Feynman's most original and scintillating lectures.




Successful Intelligence


Book Description

Argues that people need three kinds of intelligence to be successful in life: analytical, creative, and practical.




Active Learning


Book Description

The key idea behind active learning is that a machine learning algorithm can perform better with less training if it is allowed to choose the data from which it learns. An active learner may pose "queries," usually in the form of unlabeled data instances to be labeled by an "oracle" (e.g., a human annotator) that already understands the nature of the problem. This sort of approach is well motivated in many modern machine learning and data mining applications, where unlabeled data may be abundant or easy to come by, but training labels are difficult, time-consuming, or expensive to obtain. This book is a general introduction to active learning. It outlines several scenarios in which queries might be formulated, and details many query selection algorithms that have been organized into four broad categories, or "query selection frameworks." The book also touches on some of the theoretical foundations of active learning, and concludes with an overview of the strengths and weaknesses of these approaches in practice, including a summary of ongoing work to address open challenges and opportunities. Table of Contents: Automating Inquiry / Uncertainty Sampling / Searching Through the Hypothesis Space / Minimizing Expected Error and Variance / Exploiting Structure in Data / Theory / Practical Considerations
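The query-to-oracle loop described above can be sketched in a few lines. This is a minimal illustration of uncertainty sampling (one of the book's query selection frameworks), not code from the book; the toy logistic classifier, the threshold, and the pool of instances are all illustrative assumptions.

```python
import math

def predict_proba(threshold, x):
    """Toy probabilistic classifier: P(label=1 | x) via a logistic curve.
    A stand-in for whatever model the active learner is training."""
    return 1.0 / (1.0 + math.exp(-(x - threshold)))

def most_uncertain(pool, threshold):
    """Uncertainty sampling: query the unlabeled instance whose predicted
    probability is closest to 0.5, i.e., nearest the decision boundary."""
    return min(pool, key=lambda x: abs(predict_proba(threshold, x) - 0.5))

pool = [0.1, 2.5, 4.9, 5.2, 9.0]          # unlabeled instances
query = most_uncertain(pool, threshold=5.0)
print(query)  # 4.9 lies closest to the boundary, so it is sent to the oracle
```

In a full loop, the oracle's label for the queried instance would be added to the training set, the model retrained, and the selection repeated — spending the labeling budget only where the model is least sure.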




Lifelong Machine Learning, Second Edition


Book Description

Lifelong Machine Learning, Second Edition is an introduction to an advanced machine learning paradigm that continuously learns by accumulating past knowledge that it then uses in future learning and problem solving. In contrast, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model that is then used in its intended application. It makes no attempt to retain the learned knowledge and use it in subsequent learning. Unlike this isolated system, humans learn effectively with only a few examples precisely because our learning is very knowledge-driven: the knowledge learned in the past helps us learn new things with little data or effort. Lifelong learning aims to emulate this capability, because without it, an AI system cannot be considered truly intelligent. Research in lifelong learning has developed significantly in the relatively short time since the first edition of this book was published. The purpose of this second edition is to expand the definition of lifelong learning, update the content of several chapters, and add a new chapter about continual learning in deep neural networks, which has been actively researched over the past two or three years. A few chapters have also been reorganized to make each of them more coherent for the reader. Moreover, the authors propose a unified framework for the research area. Currently, there are several research topics in machine learning that are closely related to lifelong learning—most notably, multi-task learning, transfer learning, and meta-learning—because they also employ the idea of knowledge sharing and transfer. This book brings all these topics under one roof and discusses their similarities and differences. Its goal is to introduce this emerging machine learning paradigm and present a comprehensive survey and review of the important research results and latest ideas in the area.
This book is thus suitable for students, researchers, and practitioners who are interested in machine learning, data mining, natural language processing, or pattern recognition. Lecturers can readily use the book for courses in any of these related fields.




Return of the God Hypothesis


Book Description

The New York Times bestselling author of Darwin’s Doubt presents groundbreaking scientific evidence of the existence of God, based on breakthroughs in physics, cosmology, and biology. Beginning in the late 19th century, many intellectuals began to insist that scientific knowledge conflicts with traditional theistic belief—that science and belief in God are “at war.” Philosopher of science Stephen Meyer challenges this view by examining three scientific discoveries with decidedly theistic implications. Building on the case for the intelligent design of life that he developed in Signature in the Cell and Darwin’s Doubt, Meyer demonstrates how discoveries in cosmology and physics coupled with those in biology help to establish the identity of the designing intelligence behind life and the universe. Meyer argues that theism—with its affirmation of a transcendent, intelligent and active creator—best explains the evidence we have concerning biological and cosmological origins. Previously Meyer refrained from attempting to answer questions about “who” might have designed life. Now he provides an evidence-based answer to perhaps the ultimate mystery of the universe. In so doing, he reveals a stunning conclusion: the data support not just the existence of an intelligent designer of some kind—but the existence of a personal God.




A Concise Introduction to Multiagent Systems and Distributed Artificial Intelligence


Book Description

Multiagent systems is an expanding field that blends classical fields like game theory and decentralized control with modern fields like computer science and machine learning. This monograph provides a concise introduction to the subject, covering the theoretical foundations as well as more recent developments in a coherent and readable manner. The text is centered on the concept of an agent as decision maker. Chapter 1 is a short introduction to the field of multiagent systems. Chapter 2 covers the basic theory of single-agent decision making under uncertainty. Chapter 3 is a brief introduction to game theory, explaining classical concepts like Nash equilibrium. Chapter 4 deals with the fundamental problem of coordinating a team of collaborative agents. Chapter 5 studies the problem of multiagent reasoning and decision making under partial observability. Chapter 6 focuses on the design of protocols that are stable against manipulations by self-interested agents. Chapter 7 provides a short introduction to the rapidly expanding field of multiagent reinforcement learning. The material can be used for teaching a half-semester course on multiagent systems covering, roughly, one chapter per lecture.
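The Nash equilibrium concept mentioned for Chapter 3 can be made concrete with a tiny worked example. The sketch below uses the classic Prisoner's Dilemma; the payoff numbers and function names are illustrative assumptions, not material from the monograph.

```python
# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
# "C" = cooperate, "D" = defect; standard Prisoner's Dilemma values.
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(row, col):
    """A profile is a Nash equilibrium if neither player can strictly
    gain by deviating unilaterally while the other's action is fixed."""
    row_ok = all(payoffs[(a, col)][0] <= payoffs[(row, col)][0] for a in actions)
    col_ok = all(payoffs[(row, a)][1] <= payoffs[(row, col)][1] for a in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # [('D', 'D')]: mutual defection is the unique equilibrium
```

The example also shows why equilibrium need not mean efficiency: both players would prefer (C, C), but it is not stable against unilateral deviation.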




Two Lectures


Book Description




Markov Logic


Book Description

Most subfields of computer science have an interface layer via which applications communicate with the infrastructure, and this is key to their success (e.g., the Internet in networking, the relational model in databases, etc.). So far this interface layer has been missing in AI. First-order logic and probabilistic graphical models each have some of the necessary features, but a viable interface layer requires combining both. Markov logic is a powerful new language that accomplishes this by attaching weights to first-order formulas and treating them as templates for features of Markov random fields. Most statistical models in wide use are special cases of Markov logic, and first-order logic is its infinite-weight limit. Inference algorithms for Markov logic combine ideas from satisfiability, Markov chain Monte Carlo, belief propagation, and resolution. Learning algorithms make use of conditional likelihood, convex optimization, and inductive logic programming. Markov logic has been successfully applied to problems in information extraction and integration, natural language processing, robot mapping, social networks, computational biology, and others, and is the basis of the open-source Alchemy system. Table of Contents: Introduction / Markov Logic / Inference / Learning / Extensions / Applications / Conclusion
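The semantics sketched above — weighted first-order formulas whose groundings become features of a Markov random field, with P(world) proportional to exp(weight × number of satisfied groundings) — can be illustrated by brute-force enumeration on a tiny domain. The domain, the formula Smokes(x) ⇒ Cancer(x), and its weight are illustrative assumptions (this is not Alchemy code, which handles inference at scale):

```python
import itertools
import math

people = ["Anna", "Bob"]
weight = 1.5  # weight attached to the formula Smokes(x) => Cancer(x)

def n_satisfied(world):
    """Count ground instances of Smokes(x) => Cancer(x) that hold in a world.
    A world is a pair (set of smokers, set of cancer patients)."""
    smokes, cancer = world
    return sum(1 for p in people if (p not in smokes) or (p in cancer))

def subsets(items):
    for k in range(len(items) + 1):
        yield from itertools.combinations(items, k)

# Enumerate all possible worlds (truth assignments to the ground atoms).
worlds = [(frozenset(s), frozenset(c))
          for s in subsets(people) for c in subsets(people)]
Z = sum(math.exp(weight * n_satisfied(w)) for w in worlds)  # partition function

def prob(world):
    return math.exp(weight * n_satisfied(world)) / Z

# Each violated grounding makes a world exp(weight) times less likely.
w_ok  = (frozenset(["Anna"]), frozenset(["Anna"]))  # Anna smokes and has cancer
w_bad = (frozenset(["Anna"]), frozenset())          # Anna smokes, no cancer
print(round(prob(w_ok) / prob(w_bad), 4))  # exp(1.5) ≈ 4.4817
```

This also shows the infinite-weight limit mentioned in the description: as the weight grows, worlds that violate the formula become vanishingly unlikely, recovering hard first-order logic.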