Knowledge Acquisition: Selected Research and Commentary


Book Description

What follows is a sampler of work in knowledge acquisition. It comprises three technical papers and six guest editorials. The technical papers give an in-depth look at some of the important issues and current approaches in knowledge acquisition. The editorials were produced by authors who were, essentially, invited to sound off. I've tried to group and order the contributions somewhat coherently. The following annotations emphasize the connections among the separate pieces. Buchanan's editorial starts on the theme of "Can machine learning offer anything to expert systems?" He emphasizes the practical goals of knowledge acquisition and the challenge of aiming for them. Lenat's editorial briefly describes experience in the development of CYC that straddles both fields. He outlines a two-phase development that relies on an engineering approach early on and aims for a crossover to more automated techniques as the size of the knowledge base increases. Bareiss, Porter, and Murray contribute the first technical paper. It comes from a laboratory of machine learning researchers who have taken an interest in supporting the development of knowledge bases, with an emphasis on how development changes with the growth of the knowledge base. The paper describes two systems. The first, Protos, adjusts the training it expects and the assistance it provides as its knowledge grows. The second, KI, helps integrate knowledge into an already very large knowledge base.




Industrial and Engineering Applications of Artificial Intelligence and Expert Systems


Book Description

In the areas of industry and engineering, AI techniques have become the norm in sectors including computer-aided design, intelligent manufacturing, and control. Papers in this volume represent work by computer scientists and engineers, both separately and together, and reflect a genuine collaboration between the two disciplines. They cover a wide variety of fields related to intelligent systems technology, ranging from neural networks, knowledge acquisition and representation, automated scheduling, machine learning, multimedia, genetic algorithms, fuzzy logic, robotics, automated reasoning, heuristic search, automated problem solving, temporal, spatial and model-based reasoning, clustering, blackboard architectures, automated design, pattern recognition and image processing, automated planning, speech recognition, simulated annealing, and intelligent tutoring, as well as various computer applications of intelligent systems including financial analysis, artificial




New Approaches To Knowledge Acquisition


Book Description

It is well recognized that knowledge acquisition is the critical bottleneck of knowledge engineering. This book presents three major approaches in current research in this field, namely the psychological approach, the artificial intelligence approach, and the software engineering approach. Special attention is paid to the most recent advances in knowledge acquisition research, especially those made by Chinese computer scientists. A special chapter is devoted to applications of knowledge acquisition in other fields, e.g. language analysis, software engineering, and computer-aided instruction, as carried out in China.




Organizational Intelligence and Knowledge Analytics


Book Description

Organizational Intelligence and Knowledge Analytics expands the traditional intelligence life cycle to a new framework - Design-Analyze-Automate-Accelerate - and clearly lays out the alignments between knowledge capital and intelligence strategies.




Proceedings of the Future Technologies Conference (FTC) 2022, Volume 1


Book Description

The seventh Future Technologies Conference 2022 was organized in a hybrid mode. It received a total of 511 submissions from scholars, academicians, engineers, scientists, and students across many countries. The papers spanned a wide arena of studies, including computing, artificial intelligence, machine vision, ambient intelligence, and security, and their striking applications to the real world. After a double-blind peer review process, 177 submissions were selected for inclusion in these proceedings. One of the prominent contributions of this conference is the confluence of distinguished researchers whose work not only enthralled us but also paved the way for future areas of research. The papers provide practical solutions to many vexing problems across diverse fields. They are also a window onto a future world governed by technology and its multiple applications. We hope that readers find this volume interesting and inspiring and lend it their enthusiastic support.




Explanation-Based Neural Network Learning


Book Description

Lifelong learning addresses situations in which a learner faces a series of different learning tasks providing the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge accumulated in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. Explanation-Based Neural Network Learning: A Lifelong Learning Approach describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess. "The paradigm of lifelong learning - using earlier learned knowledge to improve subsequent learning - is a promising direction for a new generation of machine learning algorithms. Given the need for more accurate learning methods, it is difficult to imagine a future for machine learning that does not include this paradigm." From the Foreword by Tom M. Mitchell.




Multistrategy Learning


Book Description

Most machine learning research has been concerned with the development of systems that implement one type of inference within a single representational paradigm. Such systems, which can be called monostrategy learning systems, include those for empirical induction of decision trees or rules, explanation-based generalization, neural net learning from examples, genetic algorithm-based learning, and others. Monostrategy learning systems can be very effective and useful if the learning problems to which they are applied are sufficiently narrowly defined. Many real-world applications, however, pose learning problems that go beyond the capability of monostrategy learning methods. In view of this, recent years have witnessed a growing interest in developing multistrategy systems, which integrate two or more inference types and/or paradigms within one learning system. Such multistrategy systems take advantage of the complementarity of different inference types or representational mechanisms. Therefore, they have the potential to be more versatile and more powerful than monostrategy systems. On the other hand, due to their greater complexity, their development is significantly more difficult and represents a great new challenge to the machine learning community. Multistrategy Learning contains contributions characteristic of the current research in this area.




Investigating Explanation-Based Learning


Book Description

Explanation-Based Learning (EBL) can generally be viewed as substituting background knowledge for the large training set of exemplars needed by conventional or empirical machine learning systems. The background knowledge is used automatically to construct an explanation of a few training exemplars. The learned concept is generalized directly from this explanation. The first EBL systems of the modern era were Mitchell's LEX2, Silver's LP, and De Jong's KIDNAP natural language system. Two of these systems, Mitchell's and De Jong's, have led to extensive follow-up research in EBL. This book outlines the significant steps in EBL research of the Illinois group under De Jong. This volume describes theoretical research and computer systems that use a broad range of formalisms: schemas, production systems, qualitative reasoning models, non-monotonic logic, situation calculus, and some home-grown ad hoc representations. This has been done consciously to avoid sacrificing the ultimate research significance in favor of the expediency of any particular formalism. The ultimate goal, of course, is to adopt (or devise) the right formalism.
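The mechanism the description sketches, proving a training exemplar from background knowledge and reading the learned concept off the explanation, can be illustrated with a small toy. The sketch below (not from the book; the "cup" domain theory is the classic textbook illustration, and all names are assumptions) proves one exemplar with a tiny Horn-clause theory and returns the operational features actually used in the proof, dropping irrelevant ones. Real EBL systems additionally regress variables through the explanation; that step is omitted here.

```python
# Toy EBL sketch: background knowledge explains a single training
# exemplar, and the learned concept is the conjunction of operational
# (directly observable) features used in the proof tree.

# A tiny Horn-clause domain theory: head -> list of subgoals.
DOMAIN_THEORY = {
    "cup":         ["liftable", "stable", "open_vessel"],
    "liftable":    ["light", "has_handle"],
    "stable":      ["flat_bottom"],
    "open_vessel": ["upward_concavity"],
}

def explain(goal, example, leaves=None):
    """Prove `goal` from the exemplar's observable features and collect
    the operational leaves of the proof; return None if no proof exists."""
    if leaves is None:
        leaves = []
    if goal in DOMAIN_THEORY:            # non-operational: expand subgoals
        for subgoal in DOMAIN_THEORY[goal]:
            if explain(subgoal, example, leaves) is None:
                return None
        return leaves
    if example.get(goal):                # operational: check the exemplar
        leaves.append(goal)
        return leaves
    return None

# One training exemplar -- note the irrelevant feature "red".
exemplar = {"light": True, "has_handle": True, "flat_bottom": True,
            "upward_concavity": True, "red": True}

learned = explain("cup", exemplar)
print(learned)  # operational definition of "cup"; "red" is excluded
```

A conventional inductive learner would need many exemplars to discover that "red" is irrelevant; here a single explained exemplar suffices, because the proof never touches that feature.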




Robot Learning


Book Description

Building a robot that learns to perform a task has been acknowledged as one of the major challenges facing artificial intelligence. Self-improving robots would relieve humans from much of the drudgery of programming and would potentially allow operation in environments that were changeable or only partially known. Progress towards this goal would also make fundamental contributions to artificial intelligence by furthering our understanding of how to successfully integrate disparate abilities such as perception, planning, learning, and action. Although its roots can be traced back to the late fifties, the area of robot learning has lately seen a resurgence of interest. The flurry of interest in robot learning has partly been fueled by exciting new work in the areas of reinforcement learning, behavior-based architectures, genetic algorithms, neural networks, and the study of artificial life. Robot Learning gives an overview of some of the current research projects in robot learning being carried out at leading universities and research laboratories in the United States. The main research directions in robot learning covered in this book include: reinforcement learning, behavior-based architectures, neural networks, map learning, action models, navigation, and guided exploration.




Reinforcement Learning


Book Description

Reinforcement learning is the learning of a mapping from situations to actions so as to maximize a scalar reward or reinforcement signal. The learner is not told which action to take, as in most forms of machine learning, but instead must discover which actions yield the highest reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward, but also the next situation, and through that all subsequent rewards. These two characteristics -- trial-and-error search and delayed reward -- are the most important distinguishing features of reinforcement learning. Reinforcement learning is both a new and a very old topic in AI. The term appears to have been coined by Minsky (1961), and independently in control theory by Waltz and Fu (1965). The earliest machine learning research now viewed as directly relevant was Samuel's (1959) checkers player, which used temporal-difference learning to manage delayed reward much as it is used today. Of course learning and reinforcement have been studied in psychology for almost a century, and that work has had a very strong impact on the AI/engineering work. One could in fact consider all of reinforcement learning to be simply the reverse engineering of certain psychological learning processes (e.g. operant conditioning and secondary reinforcement). Reinforcement Learning is an edited volume of original research, comprising seven invited contributions by leading researchers.
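The two distinguishing features named in the description, trial-and-error search and delayed reward, can be made concrete with a minimal tabular Q-learning sketch. The corridor environment and all parameter values below are illustrative assumptions, not taken from the book: the agent starts at one end of a five-state corridor, and only stepping off the far right end is rewarded, so the reward for early moves is delayed until the end of the episode and must be propagated back by the temporal-difference update.

```python
import random

N_STATES = 5
ACTIONS = [-1, +1]               # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic corridor: reward 1 only for stepping off the right end."""
    nxt = state + action
    if nxt >= N_STATES:
        return None, 1.0          # terminal: the delayed reward finally arrives
    return max(nxt, 0), 0.0       # every other move earns nothing

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s is not None:
            # trial-and-error: explore occasionally (and on ties), else exploit
            if rng.random() < EPSILON or Q[(s, -1)] == Q[(s, +1)]:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: Q[(s, b)])
            s2, r = step(s, a)
            target = r if s2 is None else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
            # temporal-difference update: move the estimate toward the target
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2
    return Q

Q = train()
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES)]
print(policy)   # the learned greedy action in each state
```

After training, the greedy policy steps right in every state even though only the final step is rewarded: the discounted value of the delayed reward has been propagated backward through the table, one temporal-difference update at a time.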