New Developments in Parsing Technology


Book Description

Parsing can be defined as the decomposition of complex structures into their constituent parts, and parsing technology as the methods, the tools, and the software to parse automatically. Parsing is a central area of research in the automatic processing of human language. Parsers are being used in many application areas, for example question answering, extraction of information from text, speech recognition and understanding, and machine translation. New developments in parsing technology are thus widely applicable. This book contains contributions from many of today's leading researchers in the area of natural language parsing technology. The contributors describe their most recent work and a diverse range of techniques and results. This collection provides an excellent picture of the current state of affairs in this area. This volume is the third in a series of such collections, and its breadth of coverage should make it suitable both as an overview of the current state of the field for graduate students, and as a reference for established researchers.




Parsing Techniques


Book Description

This second edition of Grune and Jacobs’ brilliant work presents new developments and discoveries that have been made in the field. Parsing, also referred to as syntax analysis, has been and continues to be an essential part of computer science and linguistics. Parsing techniques have grown considerably in importance, both in computer science, where advanced compilers often use general CF parsers, and in computational linguistics, where such parsers are the only option. They are used in a variety of software products, including Web browsers, interpreters in computer devices, and data compression programs; and they are used extensively in linguistics.
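
To make the notion of a general CF parser concrete, here is a minimal sketch of a CYK recognizer for a toy grammar in Chomsky normal form; the grammar and the example sentence are invented for illustration and are not taken from the book.

# Minimal CYK recognizer for a context-free grammar in Chomsky normal form.
# Illustrative sketch only; the toy grammar and sentence are invented examples.

LEXICAL = {("Det", "the"), ("N", "dog"), ("N", "cat"), ("V", "chased")}
BINARY = {("NP", "Det", "N"), ("VP", "V", "NP"), ("S", "NP", "VP")}

def cyk_recognize(words, start="S"):
    n = len(words)
    # chart[i][j] holds the nonterminals that derive words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {a for (a, word) in LEXICAL if word == w}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (a, b, c) in BINARY:
                    if b in chart[i][k] and c in chart[k][j]:
                        chart[i][j].add(a)
    return start in chart[0][n]

print(cyk_recognize("the dog chased the cat".split()))  # True

The cubic-time tabulation over all spans is what allows such general parsers to cope with the ambiguity that rules out simpler deterministic techniques.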




Trends in Parsing Technology


Book Description

Computer parsing technology, which breaks down complex linguistic structures into their constituent parts, is a key research area in the automatic processing of human language. This volume is a collection of contributions from leading researchers in the field of natural language processing technology, each of whom details their recent work, including new techniques as well as results. The book presents an overview of the state of the art in current research into parsing technologies, focusing on three important themes: dependency parsing, domain adaptation, and deep parsing. The technology, which has a variety of practical uses, is especially concerned with the methods, tools and software that can be used to parse automatically. Applications include extracting information from free text or speech, question answering, speech recognition and comprehension, recommender systems, machine translation, and automatic summarization. New developments in the area of parsing technology are thus widely applicable, and researchers and professionals from a number of fields will find the material here required reading. Like the other four volumes on parsing technology in this series, this book has a breadth of coverage that makes it suitable both as an overview of the field for graduate students and as a reference for established researchers in computational linguistics, artificial intelligence, computer science, language engineering, information science, and cognitive science. It will also be of interest to designers, developers, and advanced users of natural language processing systems, including applications such as spoken dialogue, text mining, multimodal human-computer interaction, and semantic web technology.
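
As a small illustration of the dependency-parsing theme, the sketch below shows the arc-standard transition system that many dependency parsers build on; the sentence, the action sequence, and the function name are invented for illustration, and a real parser would choose each action with a trained classifier.

# Arc-standard transition system for dependency parsing: a minimal sketch.
# The sentence and the action sequence are invented examples; in practice
# the actions are predicted by a learned model rather than given in advance.

def arc_standard(words, actions):
    stack, buffer, arcs = [], list(range(len(words))), []
    for action in actions:
        if action == "SHIFT":                    # move next input word onto the stack
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC":               # top of stack heads the word beneath it
            dependent = stack.pop(-2)
            arcs.append((stack[-1], dependent))  # (head, dependent)
        elif action == "RIGHT-ARC":              # word beneath the top heads the top
            dependent = stack.pop()
            arcs.append((stack[-1], dependent))
    return arcs

words = ["the", "dog", "barks"]
actions = ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC"]
print(arc_standard(words, actions))  # [(1, 0), (2, 1)]: "dog" heads "the", "barks" heads "dog"

Keeping the parser's state to a stack, a buffer, and an arc set is what lets transition-based dependency parsers run in linear time for projective trees.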




Recent Advances in Parsing Technology


Book Description

In Marcus (1980), deterministic parsers were introduced. These are parsers which satisfy the conditions of Marcus's determinism hypothesis, i.e., they are strongly deterministic in the sense that they do not simulate nondeterminism in any way. In later work (Marcus et al. 1983) these parsers were modified to construct descriptions of trees rather than the trees themselves. The resulting D-theory parsers, by working with these descriptions, are capable of capturing a certain amount of ambiguity in the structures they build. In this context, it is not clear what it means for a parser to meet the conditions of the determinism hypothesis. The object of this work is to clarify this and other issues pertaining to D-theory parsers and to provide a framework within which these issues can be examined formally. Thus we have a very narrow scope. We make no arguments about the linguistic issues D-theory parsers are meant to address, their relation to other parsing formalisms, or the notion of determinism in general. Rather we focus on issues internal to D-theory parsers themselves.




Advances in Probabilistic and Other Parsing Technologies


Book Description

Parsing technology is concerned with finding syntactic structure in language. In parsing we have to deal with incomplete and not necessarily accurate formal descriptions of natural languages. Robustness and efficiency are among the main issues in parsing. Corpora can be used to obtain frequency information about language use. This allows probabilistic parsing, an approach that aims to increase both robustness and efficiency. Approximation techniques, applied at the level of language description, parsing strategy, and syntactic representation, have the same objective. Approximation at the level of syntactic representation is also known as underspecification, a traditional technique for dealing with syntactic ambiguity. This book collects new parsing technologies that attack the problems of robustness and efficiency with exactly these techniques: the design of probabilistic grammars and efficient probabilistic parsing algorithms, approximation techniques applied to grammars and parsers to increase parsing efficiency, and techniques for underspecification and the integration of semantic information into the syntactic analysis to deal with massive ambiguity. The book gives a state-of-the-art overview of current research and development in parsing technologies. Its chapters show how probabilistic methods have entered the toolbox of computational linguistics and are being applied in both parsing theory and parsing practice. The book is both a unique reference for researchers and an introduction to the field for interested graduate students.
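
To make the probabilistic side concrete, the following sketch extends CYK-style chart parsing with rule probabilities and keeps, for each span and nonterminal, only the highest-scoring analysis (Viterbi); the toy PCFG, its probabilities, and the sentence are invented for illustration and are not drawn from the book.

# Viterbi CYK over a toy probabilistic context-free grammar (PCFG) in CNF.
# Illustrative sketch; grammar, probabilities, and sentence are invented.
from collections import defaultdict

LEXICAL = {("N", "saw"): 0.1, ("V", "saw"): 0.9, ("Det", "the"): 1.0,
           ("N", "man"): 0.5, ("N", "telescope"): 0.4}
BINARY = {("NP", "Det", "N"): 1.0, ("VP", "V", "NP"): 1.0, ("S", "NP", "VP"): 1.0}

def viterbi_cyk(words, start="S"):
    n = len(words)
    best = defaultdict(float)          # (i, j, A) -> best inside probability
    back = {}                          # (i, j, A) -> (k, B, C) backpointer
    for i, w in enumerate(words):
        for (a, word), p in LEXICAL.items():
            if word == w:
                best[i, i + 1, a] = p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (a, b, c), p in BINARY.items():
                    score = p * best[i, k, b] * best[k, j, c]
                    if score > best[i, j, a]:
                        best[i, j, a] = score
                        back[i, j, a] = (k, b, c)
    return best[0, n, start], back

prob, _ = viterbi_cyk("the man saw the telescope".split())
print(prob)  # probability of the best parse under the toy grammar (about 0.18)

Ranking analyses by probability in this way is what lets a robust parser always return its best-scoring structure, even when the input is ambiguous or the grammar over-generates.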







Handbook of Natural Language Processing


Book Description

The Handbook of Natural Language Processing, Second Edition presents practical tools and techniques for implementing natural language processing in computer systems. Along with removing outdated material, this edition updates every chapter and expands the content to include emerging areas, such as sentiment analysis.




Parsing Schemata for Practical Text Analysis


Book Description

The book presents a wide range of recent research results about parsing schemata, introducing formal frameworks and theoretical results while keeping a constant focus on applicability to practical parsing problems. The first part is a general introduction to the parsing schemata formalism and contains the basic notions needed to understand the rest of the book. The compendium can thus be used as an introduction to natural language parsing, allowing postgraduate students not only to get a solid grasp of the fundamental concepts underlying parsing algorithms, but also an understanding of the latest developments and challenges in the field. Researchers in computational linguistics will find novel results where parsing schemata are applied to problems that are being actively researched in the computational linguistics community (such as dependency parsing, robust parsing, or the treatment of non-projective linguistic phenomena). This book not only explains these results in a more detailed, comprehensive and self-contained way, and highlights the relations between them, but also includes new contributions that have not been previously presented.
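
For readers new to the formalism, a parsing schema presents a parser as a deduction system over items. As a standard textbook example (the notation may differ slightly from the book's), the classical CYK algorithm for a grammar in Chomsky normal form can be written as follows.

% CYK as a parsing schema: an item [A, i, j] asserts that nonterminal A
% derives the substring a_{i+1} ... a_j of the input a_1 ... a_n.
\[
\mathcal{I} = \{\, [A, i, j] \mid A \in N,\ 0 \le i < j \le n \,\}
\]
\[
\frac{}{[A,\, i-1,\, i]} \;\; A \rightarrow a_i \in P
\qquad\qquad
\frac{[B, i, k] \qquad [C, k, j]}{[A, i, j]} \;\; A \rightarrow B\,C \in P
\]
\[
\text{Goal item: } [S, 0, n]
\]

Different parsing algorithms then correspond to different item sets and deduction steps, which is what makes the schema a convenient level at which to compare and transform parsers.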




Advances in Natural Multimodal Dialogue Systems


Book Description

The main topic of this volume is natural multimodal interaction. The book is unique in that it brings together contributions on many aspects of natural and multimodal interaction, written by many of the leading figures in the field. Topics addressed include talking heads, conversational agents, tutoring systems, multimodal communication, machine learning, architectures for multimodal dialogue systems, systems evaluation, and data annotation.




The Integration of Phonetic Knowledge in Speech Technology


Book Description

Continued progress in speech technology in the face of ever-increasing demands on the performance levels of applications is a challenge to the whole speech and language science community. Robust recognition and understanding of spontaneous speech in varied environments, and good comprehensibility and naturalness of expressive speech synthesis, are goals that cannot be achieved without a change of paradigm. This book argues for interdisciplinary communication and cooperation in problem solving in general, and discusses the interaction between speech and language engineering and phonetics in particular. With a number of reports on innovative speech technology research as well as more theoretical discussions, it addresses the practical, scientific and sometimes philosophical problems that stand in the way of cross-disciplinary collaboration and illuminates some of the many possible ways forward. Audience: researchers and professionals in speech technology and computational linguistics.