Polysemy


Book Description

This volume of newly commissioned essays examines current theoretical and computational work on polysemy, the term used in semantic analysis to describe words with more than one meaning or function, sometimes perhaps related (as in plain) and sometimes perhaps not (as in bank). Such words present few difficulties in everyday language, but pose central problems for linguists and lexicographers, especially for those involved in lexical semantics and in computational modelling. The contributors to this book, leading researchers in theoretical and computational linguistics, consider the implications of these problems for grammatical theory and how they may be addressed by computational means.

The theoretical essays in the book examine polysemy as an aspect of a broader theory of word meaning. Three theoretical approaches are presented: the Classical (or Aristotelian), the Prototypical, and the Relational. Their authors describe the nature of polysemy, the criteria for detecting it, and its manifestations across languages. They examine the issues arising from the regularity of polysemy and the theoretical principles proposed to account for the interaction of lexical meaning with the semantics and syntax of the context in which it occurs. Finally, they consider the formal representations of meaning in the lexicon, and their implications for dictionary construction.

The computational essays are concerned with the challenge that polysemy poses to automatic sense disambiguation: how the intended meaning of a word occurrence can be identified. The approaches presented include the exploitation of lexical information in machine-readable dictionaries, machine learning based on patterns of word co-occurrence, and hybrid approaches that combine the two. As a whole, the volume shows how, on the one hand, theoretical work provides the motivation and may suggest the basis for computational algorithms, while on the other, computational results may validate, or reveal problems in, the principles set forth by theories.
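
The dictionary-based line of attack mentioned above can be made concrete with the classic Lesk algorithm, which picks the WordNet sense whose gloss overlaps most with the surrounding context. The sketch below is illustrative only, not code from the volume; it assumes NLTK with its punkt and wordnet data already downloaded, and the example sentence is invented.

    # Illustrative sketch (not from the book): dictionary-based word sense
    # disambiguation with the Lesk algorithm over WordNet glosses.
    # Assumes nltk is installed and nltk.download('punkt') and
    # nltk.download('wordnet') have been run.
    from nltk.tokenize import word_tokenize
    from nltk.wsd import lesk

    sentence = "I deposited the check at the bank on the corner."
    tokens = word_tokenize(sentence)

    # lesk() returns the WordNet synset whose definition shares the most
    # words with the context tokens.
    sense = lesk(tokens, "bank", pos="n")
    print(sense, "-", sense.definition())

Simple gloss overlap is brittle, since it fails whenever the context and the gloss use different word forms, which is part of why the hybrid approaches described above combine dictionary information with co-occurrence statistics.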




Python Natural Language Processing


Book Description

Leverage the power of machine learning and deep learning to extract information from text data.

About This Book
- Implement machine learning and deep learning techniques for efficient natural language processing.
- Get started with NLTK and implement NLP in your applications with ease.
- Understand and interpret human languages with the power of text analysis via Python.

Who This Book Is For
This book is intended for Python developers who wish to start with natural language processing and want to make their applications smarter by implementing NLP in them.

What You Will Learn
- Focus on the Python programming paradigms used to develop NLP applications.
- Understand corpus analysis and different types of data attributes.
- Learn NLP using Python libraries such as NLTK, Polyglot, SpaCy, and Stanford CoreNLP.
- Learn about feature extraction and feature selection as part of feature engineering.
- Explore the advantages of vectorization in deep learning.
- Get a better understanding of the architecture of a rule-based system.
- Optimize and fine-tune supervised and unsupervised machine learning algorithms for NLP problems.
- Identify deep learning techniques for natural language processing and natural language generation problems.

In Detail
This book starts off by laying the foundation for natural language processing and explaining why Python is one of the best options for building an NLP-based expert system, with advantages such as community support and the availability of frameworks. It then gives you a better understanding of freely available corpora and the different types of datasets, after which you will know how to choose a dataset for natural language processing applications and find the right NLP techniques to process sentences in datasets and understand their structure. You will also learn how to tokenize different parts of sentences and ways to analyze them.

Over the course of the book, you will explore the semantic as well as syntactic analysis of text. You will learn how to resolve various ambiguities in processing human language and will encounter a range of scenarios while performing text analysis. You will learn the basics of getting the environment ready for natural language processing, move on to the initial setup, and then quickly come to grips with sentences and language parts. You will learn how to harness machine learning and deep learning to extract information from text data. By the end of the book, you will have a clear understanding of natural language processing and will have worked on multiple examples that implement NLP in the real world.

Style and Approach
This book teaches the reader various aspects of natural language processing using NLTK, taking the reader from the basic to the advanced level in a smooth way.
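
As a taste of the tokenization and analysis the early chapters cover, here is a minimal sketch of sentence splitting, word tokenization, and part-of-speech tagging with NLTK. It is not an excerpt from the book; the sample text is invented, and it assumes the relevant NLTK models have been downloaded.

    # Illustrative sketch (not from the book): basic tokenization and
    # part-of-speech tagging with NLTK.
    # Assumes nltk.download('punkt') and
    # nltk.download('averaged_perceptron_tagger') have been run.
    import nltk

    text = "NLP makes applications smarter. It starts with tokenization."

    # Split the text into sentences, then each sentence into word tokens.
    tokens = [nltk.word_tokenize(s) for s in nltk.sent_tokenize(text)]

    # Tag each token with a part-of-speech label (e.g. NN, VBZ).
    tagged = [nltk.pos_tag(ts) for ts in tokens]
    print(tagged)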




The Semantics of Polysemy


Book Description

This book, addressed primarily to students and researchers in semantics, cognitive linguistics, English, and Australian languages, is a comparative study of the polysemy patterns displayed by percussion/impact ('hitting') verbs in English and Warlpiri (Pama-Nyungan, Central Australia). The opening chapters develop a novel theoretical orientation for the study of polysemy via a close examination of two theoretical traditions under the broader cognitivist umbrella: Langackerian and Lakoffian Cognitive Semantics and Wierzbickian Natural Semantic Metalanguage. Arguments are offered which problematize attempts in these traditions to ground the analysis of meaning either in cognitive or neurological reality, or in the existence of universal synonymy relations within the lexicon. Instead, an interpretative rather than a scientific construal of linguistic theorizing is sketched, in the context of a close examination of certain key issues in the contemporary study of polysemy such as sense individuation, the role of reference in linguistic categorization, and the demarcation between metaphor and metonymy. The later chapters present a detailed typology of the polysemous senses of English and Warlpiri percussion/impact (or P/I) verbs based on a diachronically deep corpus of dictionary citations from Middle to contemporary English, and on a large corpus of Warlpiri citations. Limited to the operations of metaphor and of three categories of metonymy, this typology posits just four types of basic relation between extended and core meanings. As a result, the phenomenon of polysemy and semantic extension emerges as amenable to strikingly concise description.




Polysemy


Book Description

About fifty years ago, Stephen Ullmann wrote that polysemy is 'the pivot of semantic analysis'. Fifty years on, polysemy has become one of the hottest topics in linguistics and in the cognitive sciences at large. The book deals with the topic from a wide variety of viewpoints. The cognitive approach is supplemented and supported by diachronic, psycholinguistic, developmental, comparative, and computational perspectives. The chapters, written by some of the most eminent specialists in the field, are all underpinned by detailed discussions of methodology and theory.




Controlled Natural Language


Book Description

This book constitutes the thoroughly refereed post-workshop proceedings of the Workshop on Controlled Natural Language, CNL 2009, held on Marettimo Island, Italy, in June 2009. The 16 revised full papers presented together with 1 invited lecture were carefully reviewed and selected during two rounds of reviewing and improvement from 31 initial submissions. The papers are roughly divided into two groups: language aspects, and tools and applications. Note that some papers actually fall into both groups: using a controlled natural language in an application domain often requires domain-specific language features.




Embeddings in Natural Language Processing


Book Description

Embeddings have undoubtedly been one of the most influential research areas in Natural Language Processing (NLP). Encoding information into a low-dimensional vector representation, which is easily integrable in modern machine learning models, has played a central role in the development of NLP. Embedding techniques initially focused on words, but the attention soon started to shift to other forms: from graph structures, such as knowledge bases, to other types of textual content, such as sentences and documents. This book provides a high-level synthesis of the main embedding techniques in NLP, in the broad sense. The book starts by explaining conventional word vector space models and word embeddings (e.g., Word2Vec and GloVe) and then moves to other types of embeddings, such as word sense, sentence and document, and graph embeddings. The book also provides an overview of recent developments in contextualized representations (e.g., ELMo and BERT) and explains their potential in NLP. Throughout the book, the reader can find both essential information for understanding a certain topic from scratch and a broad overview of the most successful techniques developed in the literature.
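
To make the word-embedding starting point concrete, here is a minimal sketch of training a small Word2Vec model and querying the resulting vector space. The use of the gensim library and the toy corpus are assumptions for illustration, not material from the book; real models are trained on corpora of millions of sentences.

    # Illustrative sketch (not from the book): training word embeddings with
    # gensim's Word2Vec and querying the resulting vector space.
    from gensim.models import Word2Vec

    # Tiny pre-tokenized toy corpus, invented for illustration.
    corpus = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["embeddings", "encode", "words", "as", "dense", "vectors"],
    ]

    # Skip-gram model (sg=1); each word is mapped to a 50-dimensional vector.
    model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)

    # Words used in similar contexts end up with nearby vectors.
    print(model.wv.similarity("king", "queen"))
    print(model.wv.most_similar("king", topn=3))

Contextualized models such as ELMo and BERT, covered later in the book, replace these static per-word vectors with representations that vary with the sentence in which the word appears.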




Lexical Meaning in Context


Book Description

This is a book about the meanings of words and how they can combine to form larger meaningful units, as well as how they can fail to combine when the amalgamation of a predicate and argument would produce what the philosopher Gilbert Ryle called a 'category mistake'. It argues for a theory in which words get assigned both an intension and a type. The book develops a rich system of types and investigates its philosophical and formal implications, for example, the abandonment of the classic Church analysis of types that has been used by linguists since Montague. The author integrates fascinating and puzzling observations about lexical meaning into a compositional semantic framework. Adjustments in types are a feature of the compositional process and account for various phenomena including coercion and copredication. This book will be of interest to semanticists, philosophers, logicians and computer scientists alike.




Computational Lexical Semantics


Book Description

Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, and computer algorithms and architecture. Research programs whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists, and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.




Text, Speech and Dialogue


Book Description

This volume contains the proceedings of the 7th International Conference on Text, Speech and Dialogue (TSD 2004), held in Brno, Czech Republic, in September 2004, under the auspices of Masaryk University. This series of international conferences on text, speech and dialogue has come to constitute a major forum for presentation and discussion, not only of the latest developments in academic research in these fields, but also of practical and industrial applications. Uniquely, these conferences bring together researchers from a very wide area, both intellectually and geographically, including scientists working in speech technology, dialogue systems, text processing, lexicography, and other related fields. In recent years the conference has developed into a primary meeting place for speech and language technologists from many different parts of the world, and in particular it has enabled important and fruitful exchanges of ideas between Western and Eastern Europe.

TSD 2004 offered a rich program of invited talks, tutorials, technical papers, and poster sessions, as well as workshops and system demonstrations. A total of 78 papers were accepted out of 127 submitted, contributed altogether by 190 authors from 26 countries. Our thanks as usual go to the Program Committee members and to the external reviewers for their conscientious and diligent assessment of submissions, and to the authors themselves for their high-quality contributions. We would also like to take this opportunity to express our appreciation to all the members of the Organizing Committee for their tireless efforts in organizing the conference and ensuring its smooth running.




Metasemantics


Book Description

Metasemantics comprises new work on the philosophical foundations of linguistic semantics, by a diverse group of established and emerging experts in the philosophy of language, metaphysics, and the theory of content. The science of semantics aspires to systematically specify the meanings of linguistic expressions in context. The paradigmatic metasemantic question is accordingly: what more basic or fundamental features of the world metaphysically determine these semantic facts? Efforts to answer this question inevitably raise others. Where are the boundaries of semantics? What is the essence of the meaning relation? Which framework should we use for semantic theorizing? What are the intrinsic natures of semantic values? Are the semantic facts metaphysically determinate? What is semantic competence? Metasemantic inquiry has long been recognized as a central part of the philosophy of language, but recent developments in metaphysics and semantics itself now allow us to approach these classic questions with an unprecedented degree of precision. The essays collected here provide promising new perspectives on old problems, pose questions that suggest novel research projects, and taken together, greatly sharpen our understanding of linguistic representation.