Machine Learning Proceedings 1991






Machine Learning Proceedings 1993






Machine Learning Proceedings 1994






Machine Learning Proceedings 1992






Machine Learning Proceedings 1995






Machine Learning: ECML-93


Book Description

This volume contains the proceedings of the European Conference on Machine Learning (ECML-93), continuing the tradition of the five earlier EWSLs (European Working Sessions on Learning). The aim of these conferences is to provide a platform for presenting the latest results in the area of machine learning. The ECML-93 programme included invited talks, selected papers, and the presentation of ongoing work in poster sessions. The programme was completed by several workshops on specific topics. The volume contains papers related to all these activities. The first chapter of the proceedings contains two invited papers: one by Ross Quinlan and one by Stephen Muggleton on inductive logic programming. The second chapter contains 18 scientific papers accepted for the main sessions of the conference. The third chapter contains 18 shorter position papers. The final chapter includes three overview papers related to the ECML-93 workshops.




Machine Learning


Book Description

Multistrategy learning is one of the newest and most promising research directions in the development of machine learning systems. The objectives of research in this area are to study trade-offs between different learning strategies and to develop learning systems that employ multiple types of inference or computational paradigms in a learning process. Multistrategy systems offer significant advantages over monostrategy systems: they are more flexible in the type of input they can learn from and the type of knowledge they can acquire. As a consequence, multistrategy systems have the potential to be applicable to a wide range of practical problems. This volume is the first book in this fast-growing field. It contains a selection of contributions by leading researchers specializing in this area. See below for earlier volumes in the series.




Reinforcement Learning


Book Description

Reinforcement learning is the learning of a mapping from situations to actions so as to maximize a scalar reward or reinforcement signal. The learner is not told which action to take, as in most forms of machine learning, but instead must discover which actions yield the highest reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation, and through that all subsequent rewards. These two characteristics -- trial-and-error search and delayed reward -- are the most important distinguishing features of reinforcement learning. Reinforcement learning is both a new and a very old topic in AI. The term appears to have been coined by Minsky (1961), and independently in control theory by Waltz and Fu (1965). The earliest machine learning research now viewed as directly relevant was Samuel's (1959) checkers player, which used temporal-difference learning to manage delayed reward much as it is used today. Of course, learning and reinforcement have been studied in psychology for almost a century, and that work has had a very strong impact on the AI/engineering work. One could in fact consider all of reinforcement learning to be simply the reverse engineering of certain psychological learning processes (e.g. operant conditioning and secondary reinforcement). Reinforcement Learning is an edited volume of original research, comprising seven invited contributions by leading researchers.
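The two distinguishing features named above -- trial-and-error search and delayed reward -- can be illustrated with a minimal tabular Q-learning sketch on a hypothetical toy environment (a five-state chain with reward only at the far end; all names and parameters here are illustrative, not taken from the book):

```python
import random

# Toy 5-state chain: actions move left (0) or right (1); the only
# reward, 1.0, arrives on reaching the goal state 4 -- delayed reward.
N_STATES, ACTIONS = 5, (0, 1)

def step(state, action):
    """Deterministic transition; reward is withheld until the goal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Trial and error: with probability eps, explore a random action.
            a = rng.choice(ACTIONS) if rng.random() < eps \
                else max(ACTIONS, key=lambda a: q[s][a])
            nxt, r, done = step(s, a)
            # Temporal-difference update: the delayed reward is propagated
            # backward through the value estimates, as in Samuel's player.
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = q_learning()
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
```

After training, the greedy policy moves right from every non-terminal state, even though only the final step is ever rewarded: the temporal-difference update has propagated the delayed reward back along the chain.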




Computational Theories of Interaction and Agency


Book Description

Over time the field of artificial intelligence has developed an "agent perspective," expanding its focus from thought to action, from search spaces to physical environments, and from problem-solving to long-term activity. Originally published as a special double volume of the journal Artificial Intelligence, this book brings together fundamental work by top researchers in artificial intelligence, neural networks, computer science, robotics, and cognitive science on the themes of interaction and agency. It identifies recurring themes and outlines a methodology centered on the concept of "agency." The seventeen contributions cover the construction of principled characterizations of interactions between agents and their environments, as well as the use of these characterizations to guide analysis of existing agents and the synthesis of artificial agents. Artificial Intelligence series: Special Issues of Artificial Intelligence.




FGCS '92


Book Description