Optimal Learning


Book Description

Learn the science of collecting information to make effective decisions. Everyday decisions are made without the benefit of accurate information. Optimal Learning develops the needed principles for gathering information to make decisions, especially when collecting information is time-consuming and expensive. Designed for readers with an elementary background in probability and statistics, the book presents effective and practical policies illustrated in a wide range of applications, from energy, homeland security, and transportation to engineering, health, and business. This book covers the fundamental dimensions of a learning problem and presents a simple method for testing and comparing policies for learning. Special attention is given to the knowledge gradient policy and its use with a wide range of belief models, including lookup table and parametric models, for both online and offline problems. Three sections develop ideas with increasing levels of sophistication:

Fundamentals explores fundamental topics, including adaptive learning, ranking and selection, the knowledge gradient, and bandit problems.

Extensions and Applications features coverage of linear belief models, subset selection models, scalar function optimization, optimal bidding, and stopping problems.

Advanced Topics explores complex methods including simulation optimization, active learning in mathematical programming, and optimal continuous measurements.

Each chapter identifies a specific learning problem, presents the related, practical algorithms for implementation, and concludes with numerous exercises. A related website features additional applications and downloadable software, including MATLAB code and the Optimal Learning Calculator, a spreadsheet-based package that provides an introduction to learning and a variety of policies for learning.
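As an illustration of the knowledge gradient policy for ranking and selection with independent normal beliefs, here is a minimal sketch (the means, variances, and measurement-noise level are made-up example values; this is not the book's software):

```python
import math

def knowledge_gradient(means, variances, noise_var):
    """Knowledge-gradient scores for independent normal beliefs.

    For each alternative x, sigma_tilde is the predictive standard
    deviation of the change in the posterior mean from one noisy
    measurement, and the score is sigma_tilde * f(zeta), where
    f(z) = z * Phi(z) + phi(z) is the normal loss function.
    """
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    scores = []
    for x, (mu, var) in enumerate(zip(means, variances)):
        post_var = 1.0 / (1.0 / var + 1.0 / noise_var)  # posterior variance after one measurement
        sigma_tilde = math.sqrt(var - post_var)
        best_other = max(m for i, m in enumerate(means) if i != x)
        zeta = -abs(mu - best_other) / sigma_tilde
        scores.append(sigma_tilde * (zeta * Phi(zeta) + phi(zeta)))
    return scores

# The policy measures the alternative with the highest KG score.
means = [1.0, 1.2, 0.8]
variances = [0.5, 0.5, 2.0]
scores = knowledge_gradient(means, variances, noise_var=1.0)
best = max(range(len(scores)), key=scores.__getitem__)
```

The score is always nonnegative, because measuring can never reduce the expected value of the best decision made afterward.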




Optimal Learning Environments to Promote Student Engagement


Book Description

Optimal Learning Environments to Promote Student Engagement analyzes the psychological, social, and academic phenomena comprising engagement, framing it as critical to learning and development. Drawing on positive psychology, flow studies, and theories of motivation, the book conceptualizes engagement as a learning experience, explaining how it occurs (or not) and how schools can adapt to maximize it among adolescents. Examples of empirically supported environments promoting engagement are provided, representing alternative high schools, Montessori schools, and extracurricular programs. The book identifies key innovations including community-school partnerships, technology-supported learning, and the potential for engaging learning opportunities during an expanded school day. Among the topics covered:

Engagement as a primary framework for understanding educational and motivational outcomes.

Measuring the malleability, complexity, multidimensionality, and sources of engagement.

The relationship between engagement and achievement.

Supporting and challenging: the instructor’s role in promoting engagement.

Engagement within and beyond core academic subjects.

Technological innovations on the engagement horizon.

Optimal Learning Environments to Promote Student Engagement is an essential resource for researchers, professionals, and graduate students in child and school psychology; social work; educational psychology; positive psychology; family studies; and teaching/teacher education.




Reinforcement Learning for Optimal Feedback Control


Book Description

Reinforcement Learning for Optimal Feedback Control develops model-based and data-driven reinforcement learning methods for solving optimal control problems in nonlinear deterministic dynamical systems. In order to achieve learning under uncertainty, data-driven methods for identifying system models in real-time are also developed. The book illustrates the advantages gained from the use of a model and the use of previous experience in the form of recorded data through simulations and experiments. The book’s focus on deterministic systems allows for an in-depth Lyapunov-based analysis of the performance of the methods described during the learning phase and during execution. To yield an approximate optimal controller, the authors focus on theories and methods that fall under the umbrella of actor–critic methods for machine learning. They concentrate on establishing stability during both the learning phase and the execution phase, and on adaptive model-based and data-driven reinforcement learning, which typically relies on instantaneous input-output measurements. This monograph provides academic researchers with backgrounds in disciplines ranging from aerospace engineering to computer science, who are interested in optimal control, reinforcement learning, functional analysis, and function approximation theory, with a good introduction to the use of model-based methods. The thorough treatment of this advanced approach to control will also interest practitioners working in the chemical-process and power-supply industries.
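The actor–critic idea can be illustrated, far more simply than in the book's continuous-time nonlinear control setting, with a gradient-bandit actor and a running-average critic baseline; everything below (rewards, step sizes, the two-action setup) is an illustrative assumption, not the authors' algorithm:

```python
import math
import random

random.seed(0)

def softmax(prefs):
    exps = [math.exp(p - max(prefs)) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

# Two actions with deterministic rewards. The actor maintains action
# preferences; the critic maintains a running average reward used as a
# baseline to evaluate each action the actor takes.
rewards = [0.0, 1.0]
prefs = [0.0, 0.0]   # actor parameters
baseline = 0.0       # critic's estimate of average reward
alpha, beta = 0.1, 0.1

for t in range(500):
    pi = softmax(prefs)
    a = random.choices([0, 1], weights=pi)[0]
    r = rewards[a]
    advantage = r - baseline           # critic's evaluation of the action
    for i in range(2):                 # actor: policy-gradient update
        grad = (1 - pi[i]) if i == a else -pi[i]
        prefs[i] += alpha * advantage * grad
    baseline += beta * (r - baseline)  # critic update

pi = softmax(prefs)
```

The separation is the essence of actor–critic: the critic only evaluates (here, a scalar baseline), while the actor adjusts its policy in the direction the critic's advantage signal indicates.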




The Art of Learning


Book Description

An eight-time national chess champion and world champion martial artist shares the lessons he has learned from two very different competitive arenas, identifying key principles about learning and performance that readers can apply to their life goals.




Optimal Parenting


Book Description

This book instructs parents in how to create well-being in all stages of their children's lives. Combining compelling insights with practical applications based on 25 years of experience, Natural Learning Rhythms is poised to be the parenting style for cultural creatives.







Development of Professional Expertise


Book Description

Professionals such as medical doctors, aeroplane pilots, lawyers, and technical specialists find that some of their peers have reached high levels of achievement that are difficult to measure objectively. In order to understand to what extent it is possible to learn from these expert performers for the purpose of helping others improve their performance, we first need to reproduce and measure this performance. This book is designed to provide the first comprehensive overview of research on the acquisition and training of professional performance as measured by objective methods rather than by subjective ratings by supervisors. In this collection of articles, the world's foremost experts discuss methods for assessing the experts' knowledge and review our knowledge on how we can measure professional performance and design training environments that permit beginning and experienced professionals to develop and maintain their high levels of performance, using examples from a wide range of professional domains.




Reinforcement Learning, second edition


Book Description

The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence. Reinforcement learning is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics. Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
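UCB, one of the Part I tabular algorithms mentioned above, can be sketched as a bandit action-selection rule; the arm rewards below are made-up deterministic values, used only to show the bookkeeping:

```python
import math

def ucb1_select(counts, means, t, c=2.0):
    """Pick the arm maximizing mean + sqrt(c * ln t / n); untried arms first."""
    for a, n in enumerate(counts):
        if n == 0:
            return a
    return max(range(len(counts)),
               key=lambda a: means[a] + math.sqrt(c * math.log(t) / counts[a]))

# Three arms with fixed rewards; UCB's exploration bonus shrinks as an
# arm is pulled more often, so play concentrates on the best arm while
# the others are still revisited occasionally.
arm_rewards = [0.1, 0.5, 0.9]
counts = [0, 0, 0]
means = [0.0, 0.0, 0.0]
for t in range(1, 1001):
    a = ucb1_select(counts, means, t)
    r = arm_rewards[a]
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]  # incremental average update
```

Over 1000 plays the highest-reward arm accumulates the large majority of the pulls, while the logarithmic bonus guarantees every arm is still sampled infinitely often in the limit.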




Computational Optimal Transport


Book Description

The goal of Optimal Transport (OT) is to define geometric tools that are useful to compare probability distributions. Their use dates back to 1781. Recent years have witnessed a new revolution in the spread of OT, thanks to the emergence of approximate solvers that can scale to sizes and dimensions that are relevant to data sciences. Thanks to this newfound scalability, OT is being increasingly used to unlock various problems in imaging sciences (such as color or texture processing), computer vision and graphics (for shape manipulation), and machine learning (for regression, classification, and density fitting). This monograph reviews OT with a bias toward numerical methods and their applications in data sciences, and sheds light on the theoretical properties of OT that make it particularly useful for some of these applications. Computational Optimal Transport presents an overview of the main theoretical insights that support the practical effectiveness of OT before explaining how to turn these insights into fast computational schemes. Written for readers at all levels, the authors provide descriptions of foundational theory at two levels: a generally accessible account for all readers, and specially identified, more general mathematical expositions of optimal transport tailored for discrete measures. Furthermore, several chapters deal with the interplay between continuous and discrete measures, and thus target a more mathematically inclined audience. This monograph will be a valuable reference for researchers and students wishing to get a thorough understanding of computational optimal transport, a mathematical gem at the interface of probability, analysis, and optimization.
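A central example of the scalable approximate solvers the monograph covers is Sinkhorn's matrix-scaling algorithm for entropically regularized OT. Here is a minimal pure-Python sketch (the marginals, cost matrix, and regularization strength are illustrative assumptions):

```python
import math

def sinkhorn(a, b, C, eps=0.5, iters=500):
    """Entropically regularized OT via Sinkhorn's matrix scaling.

    a, b: source/target marginals (each summing to 1); C: cost matrix.
    Alternately rescales rows and columns of the Gibbs kernel
    K = exp(-C/eps) until the plan P = diag(u) K diag(v) matches both
    marginals.
    """
    n, m = len(a), len(b)
    K = [[math.exp(-C[i][j] / eps) for j in range(m)] for i in range(n)]
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

a = [0.5, 0.5]
b = [0.25, 0.75]
C = [[0.0, 1.0], [1.0, 0.0]]
P = sinkhorn(a, b, C)
```

Each iteration costs only a pair of matrix-vector products, which is what lets entropic solvers scale to the problem sizes arising in data science; smaller values of eps approximate the unregularized plan more closely but converge more slowly.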