Lexical Representations and the Semantics of Complementation


Book Description

First published in 1983, this book represents an effort to lay the groundwork for a general approach to lexical semantics that pays heed to the needs of a theory of discourse interpretation, a theory of compositional semantics, and a theory of lexical rules. The first chapter proposes a basic framework in which to undertake lexical description, together with a lexical semantic analogue to the classical syntactic distinction between subcategorized-for complements and adjuncts. This apparatus for lexical description is expanded in the second chapter. The third chapter presents a theory of the semantics of nuclear terms, along with a proposed implementation. The fourth chapter argues that a number of regular, semantically governed valence alternations can be captured in frame representations that give rise to various kinds of realization options. The final chapter examines the interaction of these phenomena with a general account of predication or control, within the general framework of lexical representation.




Representation of Cognitive Structures


Book Description

Within the framework of Cognitive Grammar, investigates the distribution of infinitival and finite complements (indicative and subjunctive) in French, emphasizing the causation/perception, modal, conceptualizing-subject, and impersonal constructions. Presents a fairly large array of constructions that have received considerable attention in the literature but, rather than attempting comprehensiveness, seeks only to demonstrate that French complementation can be treated in a global fashion. Achard believes the findings can be applied to other languages as well.




Exploring Distributional Semantics in Lexical Representations and Narrative Modeling


Book Description

We are interested in the computational modeling of lexico-conceptual and narrative knowledge: for example, how to represent the meaning of cat so as to reflect facts such as its similarity to a dog and its typically being larger than a mouse; how to characterize a story; and how to identify different narratives on the same topic. On the lexico-conceptual front, we learn lexical representations with strong interpretability, and we integrate commonsense knowledge into lexical representations. For narrative modeling, we study how to identify, extract, and generate narratives/stories acceptable to human intuition.

As a methodological framework we apply the methods of Distributional Semantics (DS), “a subfield of Natural Language Processing that learns meaning from word usages” (Herbelot, 2019), where semantic representations (at any level: word, phrase, sentence, etc.) are learned at scale from data through machine learning models (Erk and Padó, 2008; Baroni and Lenci, 2010; Mikolov et al., 2013; Pennington et al., 2014). To infuse semantic representations (specifically lexical and event representations) with the interpretability and commonsense that are typically lacking in previous work (Doran et al., 2017; Gusmao et al., 2018; Carvalho et al., 2019), we complement data-driven scalability with a minimal amount of human knowledge annotation on a selected set of tasks, and we have obtained empirical evidence in support of our techniques. For narrative modeling, we draw insights from the rich body of work on scripts and narratives, from Schank and Abelson (1977) and Mooney and DeJong (1985) to Chambers and Jurafsky (2008, 2009), and propose distributional models for narrative identification, extraction, and generation that achieve state-of-the-art performance.

Symbolic approaches to lexical semantics (Wierzbicka, 1996; Goddard and Wierzbicka, 2002) and narrative modeling (Schank and Abelson, 1977; Mooney and DeJong, 1985) have been fruitful on the theoretical front. For example, in theoretical linguistics, Wierzbicka defined a small set of lexical semantic primitives from which complex meaning can be built compositionally; in Artificial Intelligence, Schank and Abelson formulated primitive acts which are conceptualized into semantic episodes (i.e. scripts) understandable by humans. Our focus, however, is primarily on computational approaches that need wide lexical coverage, for which DS provides a better toolkit, especially in practical applications. In this thesis, we innovate by building on the “vanilla” DS techniques (Landauer and Dumais, 1997; Mikolov et al., 2013) to address the issues listed above. Specifically, we present empirical evidence that:

• On the building-block level, within the framework of DS, it is possible to learn highly interpretable lexical and event representations at scale and to introduce human commonsense knowledge at low cost.

• On the narrative level, well-designed DS modeling offers a balance of precision and scalability, yielding empirically stronger solutions to complex narrative modeling problems (e.g. narrative identification, extraction, and generation).

Further, through case studies on lexical and narrative modeling, we showcase the viability of integrating DS with traditional methods so that the strengths of both approaches are retained. Concretely, the contributions of this thesis are summarized as follows:

• Evidence from analyzing and modeling a small set of common concepts indicating that interpretable representations can be learned for lexical concepts with minimal human annotation, enabling one/few-shot learning.

• Commonsense integration in lexical semantics: with carefully designed crowdsourcing combined with distributional methods, it is possible to substantially improve inference related to physical knowledge of the world.

• Neural distributional methods perform strongly in complex narrative modeling tasks, where we demonstrate that the following techniques are particularly useful: 1) iterative algorithms inspired by human intuition; 2) integration of graphical and distributional modeling; 3) pre-trained large-scale language models.
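To make the “vanilla” DS idea concrete, the following minimal count-based sketch (toy corpus and window size are illustrative choices of ours, not the thesis's actual models) shows how words that occur in similar contexts receive similar vectors:

    # Minimal count-based Distributional Semantics sketch: build sparse
    # co-occurrence vectors from a toy corpus, then compare words by
    # cosine similarity. Corpus and window size are illustrative only.
    from collections import Counter, defaultdict
    import math

    corpus = [
        "the cat chased the mouse".split(),
        "the dog chased the cat".split(),
        "the mouse ate the cheese".split(),
        "the dog ate the bone".split(),
    ]

    WINDOW = 2  # count neighbors within +/- 2 tokens

    cooc = defaultdict(Counter)
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - WINDOW), min(len(sent), i + WINDOW + 1)):
                if i != j:
                    cooc[w][sent[j]] += 1

    def cosine(u, v):
        """Cosine similarity between two sparse count vectors."""
        dot = sum(c * v[k] for k, c in u.items())
        nu = math.sqrt(sum(c * c for c in u.values()))
        nv = math.sqrt(sum(c * c for c in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    # "cat" and "dog" share contexts ("the", "chased"), so they come out
    # more similar than "cat" and "cheese":
    print(cosine(cooc["cat"], cooc["dog"]))     # higher
    print(cosine(cooc["cat"], cooc["cheese"]))  # lower

Real DS models replace raw counts with dense embeddings (e.g. word2vec, GloVe) learned from billions of tokens, but the underlying distributional hypothesis is the same.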




Semantics - Lexical Structures and Adjectives


Book Description

Discover vital research on the lexical and cognitive meanings of words. In this exciting book from a team of world-class researchers, in-depth articles explain a wide range of topics, including thematic roles, sense relations, ambiguity, and comparison. The authors focus on the cognitive and conceptual structure of words and their meaning extensions, such as coercion, metaphor, and metonymy. The book features highly cited material, available in paperback for the first time since its publication, and is an essential starting point for anyone interested in lexical semantics, especially where it meets other cognitive and conceptual research.




Constructing a Lexicon of English Verbs


Book Description

Gives an account of the English verbal lexicon which not only systematizes the meanings of lexemes within a hierarchical framework, but also demonstrates the principled connections between those meanings, the syntactic complementation patterns of verbs, and patterns of conceptualization in the human mind. Explains lexical patterning and its relationship with meaning, syntax, and cognition.




Semantics - Theories


Book Description

Now in paperback for the first time since its original publication, the material gathered here is perfect for anyone who needs a detailed and accessible introduction to the major semantic theories. Designed for a wide audience, it will be of great value to linguists, cognitive scientists, philosophers, and computer scientists working on natural language. The book covers theories of lexical semantics, cognitively oriented approaches to semantics, compositional theories of sentence semantics, and discourse semantics. This clear, elegant explanation of the key theories in semantics research is essential reading for anyone working in the area.




Question-orientedness and the Semantics of Clausal Complementation


Book Description

This volume explores the compositional semantics of clausal complementation and proposes a theory in which clause-embedding predicates are uniformly “question-oriented”, i.e., they take a set of propositions as their semantic argument. This theory opens up new horizons for the study of embedded questions and clausal complementation, and presents a successful case study of how lexical semantics interacts with syntax and compositional semantics. It offers new perspectives on issues in epistemology and the philosophy of language, such as the relationship between know-wh and know-that and the nature of attitudinal objects in general. Cross-linguistically, attitude predicates such as know, tell, and surprise can embed both declarative and interrogative clauses. Since these clauses are taken to denote different semantic objects, such as propositions and questions, the embedding behavior of these predicates poses puzzles for the compositional semantics of clausal complementation. In addition, the fact that some verbs “select for” a certain complement type poses further challenges for compositional semantics. This volume addresses these issues on the basis of a uniformly question-oriented analysis of attitude predicates, and proposes to derive their variable behavior from their lexical semantics. The book is essential reading for linguists working on the syntax and semantics of clausal complementation, as well as those interested in the role of lexical semantics in compositional semantics. It will also be valuable for philosophers interested in applying linguistic tools to philosophical problems.
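As a rough illustration of the core type assignment (in simplified notation of our own, not the volume's exact lexical entries): interrogative and declarative complements can be given the same semantic type, a set of propositions, with declaratives denoting singleton sets.

    % Simplified sketch of a uniformly question-oriented semantics.
    % Both complement types denote sets of propositions (type <st,t>):
    \[ [\![\text{whether it rains}]\!] = \{\lambda w.\,\mathrm{rain}(w),\ \lambda w.\,\neg\mathrm{rain}(w)\} \]
    \[ [\![\text{that it rains}]\!] = \{\lambda w.\,\mathrm{rain}(w)\} \]
    % One simplified question-oriented entry for "know": x knows Q at w
    % iff some true answer p in Q is entailed by x's beliefs at w
    % (refinements of this idea are the book's subject matter).
    \[ [\![\text{know}]\!]^{w} = \lambda Q_{\langle st,t\rangle}\,\lambda x.\ \exists p\,[\,p \in Q \land p(w) \land \mathrm{Dox}_{x,w} \subseteq p\,] \]

On such a view, declarative-embedding “know that p” falls out as the special case where Q is the singleton {p}.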




English Verb Classes and Alternations


Book Description

In this rich reference work, Beth Levin classifies over 3,000 English verbs according to shared meaning and behavior. Levin starts with the hypothesis that a verb's meaning influences its syntactic behavior and develops it into a powerful tool for studying the English verb lexicon. She shows how identifying verbs with similar syntactic behavior provides an effective means of distinguishing semantically coherent verb classes, and isolates these classes by examining verb behavior with respect to a wide range of syntactic alternations that reflect verb meaning. The first part of the book sets out alternate ways in which verbs can express their arguments. The second presents classes of verbs that share a kernel of meaning and explores in detail the behavior of each class, drawing on the alternations in the first part. Levin's discussion of each class and alternation includes lists of relevant verbs, illustrative examples, comments on noteworthy properties, and bibliographic references. The result is an original, systematic picture of the organization of the verb inventory. Easy to use, English Verb Classes and Alternations sets the stage for further explorations of the interface between lexical semantics and syntax. It will prove indispensable for theoretical and computational linguists, psycholinguists, cognitive scientists, lexicographers, and teachers of English as a second language.
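To suggest how such a classification lends itself to computational use, here is a tiny hypothetical encoding in the spirit of Levin's classes (the class names, member verbs, and alternations below are a simplified sample, not her full inventory):

    # Hypothetical mini-encoding of Levin-style verb classes: each class
    # groups verbs by the diathesis alternations they participate in.
    # Simplified illustration, not Levin's actual classification.
    VERB_CLASSES = {
        "spray/load": {
            "verbs": {"spray", "load", "pack", "stuff"},
            # "spray paint on the wall" / "spray the wall with paint"
            "alternations": {"locative"},
        },
        "give": {
            "verbs": {"give", "lend", "pass", "sell"},
            # "give the book to her" / "give her the book"
            "alternations": {"dative"},
        },
        "break": {
            "verbs": {"break", "crack", "shatter"},
            # "break the vase" / "the vase broke"; "this glass breaks easily"
            "alternations": {"causative/inchoative", "middle"},
        },
    }

    def classes_for(verb):
        """Look up the classes, and hence expected alternations, for a verb."""
        return {name: cls["alternations"]
                for name, cls in VERB_CLASSES.items() if verb in cls["verbs"]}

    print(classes_for("load"))   # {'spray/load': {'locative'}}
    print(classes_for("crack"))  # {'break': {'causative/inchoative', 'middle'}}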




Naive Semantics for Natural Language Understanding


Book Description

This book introduces Naive Semantics (NS), a theory of the knowledge underlying natural language understanding. The basic assumption of NS is that knowing what a word means is not very different from knowing anything else, so that there is no difference in the form of cognitive representation between lexical semantics and encyclopedic knowledge. NS represents word meanings as commonsense knowledge, and builds no special representation language (other than elements of first-order logic). The idea of teaching computers commonsense knowledge originated with McCarthy and Hayes (1969), and has been extended by a number of researchers (Hobbs and Moore, 1985; Lenat et al., 1986). Commonsense knowledge is a set of naive beliefs, at times vague and inaccurate, about the way the world is structured. Traditionally, word meanings have been viewed as criterial, as giving truth conditions for membership in the classes words name. The theory of NS, in identifying word meanings with commonsense knowledge, sees word meanings as typical descriptions of classes of objects, rather than as criterial descriptions. Therefore, reasoning with NS representations is probabilistic rather than monotonic. This book is divided into two parts. Part I elaborates the theory of Naive Semantics. Chapter 1 illustrates and justifies the theory. Chapter 2 details the representation of nouns in the theory, and Chapter 4 the verbs, originally published as "Commonsense Reasoning with Verbs" (McDowell and Dahlgren, 1987). Chapter 3 describes kind types, which are naive constraints on noun representations.
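A minimal sketch of the typical-description idea (an illustration of ours, not the book's formalism): defaults attach to kinds, more specific kinds override inherited defaults, and conclusions may be retracted when more specific knowledge arrives, which is why the resulting inference is non-monotonic.

    # Illustrative default inheritance in the spirit of Naive Semantics:
    # word meanings as typical, defeasible descriptions. A more specific
    # default ("penguins typically don't fly") overrides an inherited one.
    KB = {
        "bird":    {"is_a": None,   "typical": {"flies": True, "feathered": True}},
        "penguin": {"is_a": "bird", "typical": {"flies": False}},
    }

    def typically(kind, feature):
        """Walk up the is_a chain; the most specific default wins."""
        while kind is not None:
            defaults = KB[kind]["typical"]
            if feature in defaults:
                return defaults[feature]
            kind = KB[kind]["is_a"]
        return None  # no default knowledge either way

    print(typically("bird", "flies"))        # True  (default for birds)
    print(typically("penguin", "flies"))     # False (specific overrides)
    print(typically("penguin", "feathered")) # True  (inherited default)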