Analogical classification in formal grammar


Book Description

The organization of the lexicon, and especially the relations between groups of lexemes, is a strongly debated topic in linguistics. Some authors have insisted on the lack of any structure of the lexicon. In this vein, Di Sciullo & Williams (1987: 3) claim that “[t]he lexicon is like a prison – it contains only the lawless, and the only thing that its inmates have in common is lawlessness”. In the alternative view, the lexicon is assumed to have a rich structure that captures all regularities and partial regularities that exist between lexical entries. Two very different schools of linguistics have insisted on the organization of the lexicon. On the one hand, theories like HPSG (Pollard & Sag 1994), but also some versions of construction grammar (Fillmore & Kay 1995), assume that the lexicon has a very rich structure which captures common grammatical properties between its members. In this approach, a type hierarchy organizes the lexicon according to common properties between items. For example, Koenig (1999: 4, among others), working from an HPSG perspective, claims that the lexicon “provides a unified model for partial regularities, medium-size generalizations, and truly productive processes”. On the other hand, from the perspective of usage-based linguistics, several authors have drawn attention to the fact that lexemes which share morphological or syntactic properties tend to be organized in clusters of surface (phonological or semantic) similarity (Bybee & Slobin 1982; Skousen 1989; Eddington 1996). This approach, often called analogical, has developed highly accurate computational and non-computational models that can predict the classes to which lexemes belong. Like the organization of lexemes in type hierarchies, analogical relations between items help speakers to make sense of intricate systems and reduce apparent complexity (Köpcke & Zubin 1984).
Despite this core commonality, and despite the fact that most linguists seem to agree that analogy plays an important role in language, there has been remarkably little work on bringing these two approaches together. Formal grammar traditions have been very successful in capturing grammatical behaviour but, in the process, have downplayed the role analogy plays in linguistics (Anderson 2015). In this work, I aim to change this state of affairs: first, by providing an explicit formalization of how analogy interacts with grammar, and second, by showing that analogical effects and relations closely mirror the structures in the lexicon. I will show that both formal grammar approaches and usage-based analogical models capture mutually compatible relations in the lexicon.




Head-Driven Phrase Structure Grammar


Book Description

Head-Driven Phrase Structure Grammar (HPSG) is a constraint-based or declarative approach to linguistic knowledge, which analyses all descriptive levels (phonology, morphology, syntax, semantics, pragmatics) with feature-value pairs, structure sharing, and relational constraints. In syntax it assumes that expressions have a single, relatively simple constituent structure. This volume provides a state-of-the-art introduction to the framework. Various chapters discuss basic assumptions and formal foundations, describe the evolution of the framework, and go into the details of the main syntactic phenomena. Further chapters are devoted to non-syntactic levels of description. The book also considers related fields and research areas (gesture, sign languages, computational linguistics) and includes chapters comparing HPSG with other frameworks (Lexical Functional Grammar, Categorial Grammar, Construction Grammar, Dependency Grammar, and Minimalism).




The Handbook of Lexical Functional Grammar


Book Description

Lexical Functional Grammar (LFG) is a nontransformational theory of linguistic structure, first developed in the 1970s by Joan Bresnan and Ronald M. Kaplan, which assumes that language is best described and modeled by parallel structures representing different facets of linguistic organization and information, related by means of functional correspondences. This volume has seven parts. Part I, Overview and introduction, provides an introduction to core syntactic concepts and representations. Part II, Grammatical phenomena, reviews LFG work on a range of grammatical phenomena or constructions. Part III, Grammatical modules and interfaces, provides an overview of LFG work on semantics, argument structure, prosody, information structure, and morphology. Part IV, Linguistic disciplines, reviews LFG work in the disciplines of historical linguistics, learnability, psycholinguistics, and second language learning. Part V, Formal and computational issues and applications, provides an overview of computational and formal properties of the theory, implementations, and computational work on parsing, translation, grammar induction, and treebanks. Part VI, Language families and regions, reviews LFG work on languages spoken in particular geographical areas or in particular language families. The final part, Comparing LFG with other linguistic theories, discusses LFG work in relation to other theoretical approaches.




Headedness and/or grammatical anarchy?


Book Description

In most grammatical models, hierarchical structuring and dependencies are considered central features of grammatical structures, an idea which is usually captured by the notion of “head” or “headedness”. While in most models this notion is more or less taken for granted, there is still much disagreement as to the precise properties of grammatical heads and the theoretical implications that arise from these properties. Moreover, there are quite a few linguistic structures that pose considerable challenges to the notion of “headedness”. Building on the seminal discussions in Zwicky (1985) and Corbett, Fraser, & McGlashan (1993), this volume looks more closely at phenomena that are considered problematic for an analysis in terms of grammatical heads. The aim of this book is to approach the concept of “headedness” from its margins. Thus, central questions of the volume relate to the nature of heads and the distinction between headed and non-headed structures, to the process of gaining and losing head status, and to the thought-provoking question as to whether grammar theory could do without heads at all. The contributions in this volume provide new empirical findings bearing on phenomena that challenge the conception of grammatical heads and/or discuss the notion of head/headedness and its consequences for grammatical theory in a more abstract way. The collected papers view the topic from diverse theoretical perspectives (among others HPSG, Generative Syntax, Optimality Theory) and different empirical angles, covering typological and corpus-linguistic accounts, with a focus on data from German.




French subject islands


Book Description

This book examines extractions out of the subject, which is traditionally considered to be an island for extraction. There is a debate among linguists regarding whether the “subject island constraint” is a syntactic phenomenon or an illusion caused by cognitive or pragmatic factors. The book focuses on French, which provides an interesting case study because it allows certain extractions out of the subject despite not being a typical null-subject language. The book takes a discourse-based approach and introduces the “Focus-Background Conflict” constraint, which posits that a focused element cannot be part of a backgrounded constituent due to a pragmatic contradiction. The major novelty of this proposal is that it predicts a distinction between extractions out of the subject in focalizing and non-focalizing constructions. The central contribution of this book is to offer the detailed results of a series of empirical studies (corpus studies and experiments) on extractions out of the subject in French. These studies offer evidence for the possibility of extraction out of the subject in French, but they also reveal a clear distinction between constructions. While extractions out of the subject are common and highly acceptable in relative clauses, this is not the case for interrogatives and clefts. Finally, the book proposes a Head-Driven Phrase Structure Grammar (HPSG) analysis of subject islands. It demonstrates the interaction between information structure and syntax using a representation of information structure based on Minimal Recursion Semantics (MRS).




One-to-many-relations in morphology, syntax, and semantics


Book Description

The standard view of the form-meaning interfaces, as embraced by the great majority of contemporary grammatical frameworks, consists in the assumption that meaning can be associated with grammatical form in a one-to-one correspondence. Under this view, composition is quite straightforward, involving concatenation of form, paired with functional application in meaning. In this book, we discuss linguistic phenomena across several grammatical sub-modules (morphology, syntax, semantics) that apparently pose a problem for the standard view, mapping out the potential for deviation from the ideal of one-to-one correspondences, and develop formal accounts of the range of phenomena. We argue that a constraint-based perspective is particularly apt to accommodate deviations from one-to-one correspondences, as it allows us to impose constraints on full structures (such as a complete word or the interpretation of a full sentence) instead of deriving such structures step by step. Most of the papers in this volume are formulated in a particular constraint-based grammar framework, Head-driven Phrase Structure Grammar. The contributions investigate how the lexical and constructional aspects of this theory can be combined to account for such one-to-many relations across different linguistic sub-theories.




From fieldwork to linguistic theory


Book Description

Dan Everett is a renowned linguist with an unparalleled breadth of contributions, ranging from fieldwork to linguistic theory, including phonology, morphology, syntax, semantics, sociolinguistics, psycholinguistics, historical linguistics, philosophy of language, and philosophy of linguistics. Born on the U.S.–Mexico border, Daniel Everett faced much adversity growing up and was sent as a missionary to convert the Pirahã in the Amazonian jungle, a group of people who speak a language that no outsider had been able to become proficient in. Although no Pirahã person was successfully converted, Everett successfully learned and studied Pirahã, as well as multiple other languages in the Americas. Ever steadfast in pursuing data-driven language science, Everett debunked generativist claims about syntactic recursion, for which he was repeatedly attacked. In addition to conducting fieldwork with many understudied languages and revolutionizing linguistics, Everett has published multiple works for the general public: "Don’t sleep, there are snakes", "Language: The cultural tool", and "How language began". This book is a collection of 15 articles that are related to Everett’s work over the years, released after a tribute event for Dan Everett that was held at MIT on June 8, 2023.




The semantics of English -ment nominalizations


Book Description

It is well-known that derivational affixes can be highly polysemous, producing a range of different, often related, meanings. For example, English deverbal nouns with the suffix -er can denote instruments (opener), agents (writer), locations (diner), or patients (loaner). It is commonly assumed that this polysemy arises through a compositional process in which the affix interacts with the semantics of the base. Yet, despite intensive research in recent years, a workable model for this interaction is still under debate. In order to study and model the semantic contributions of the base and of the affix, a framework is needed in which meanings can be composed and decomposed. In this book, I formalize the semantic input and output of derivation by means of frames, that is, recursive attribute-value structures that serve to model mental representations of concepts. In my approach, the input frame offers an array of semantic elements from which an affix may select to construct the derivative's meaning. The relationship between base and derivative is made explicit by integrating their respective frame-semantic representations into lexical rules and inheritance hierarchies. I apply this approach to a qualitative corpus study of the productive relationship between the English nominalizing suffix -ment and a semantically delimited set of verbal bases. My data set consists of 40 neologisms with base verbs from two semantic classes, namely change-of-state verbs and verbs of psychological state. I analyze 369 attestations which were elicited from various corpora with a purposeful sampling approach, and which were hand-coded using common semantic categories such as event, state, patient, and stimulus. My results show that -ment can target a systematically restricted set of elements in the frame of a given base verb. It thereby produces a range of possible readings in each derivative, each of which becomes fully interpretable only within a specific context.
The derivational process is governed by an interaction of the semantic elements provided by the base on the one hand, with properties of the affix (e.g. -ment's aversion to [+animate] readings) on the other. For instance, a shift from the verb annoy to a result-state reading in annoyment is possible because the input frame of verbs of psychological state offers a RESULT-STATE attribute, which, as is fixed in the inheritance hierarchy, is compatible with -ment. Meanwhile, a shift from annoy to an experiencer reading in annoyment fails because the value range of the attribute EXPERIENCER is fixed to [+animate] entities, so that -ment's animacy constraint blocks the inheritance mechanism. Furthermore, a quantitative exploration of my data set reveals a likely blocking effect for some -ment readings. Thus, while I have found most expected combinations of nominalization and reading attested, there are pronounced gaps for readings like instrument or stimulus. Such readings are likely to be produced by standardly subject-denoting suffixes such as -er or -ant, which may reduce the probability of -ment derivation. The quantitative analysis furthermore shows that, within the subset of attested combinations, ambiguity is widespread, with 43% of all combinations of nominalization and reading being attested only ambiguously. This book shows how a derivational process acts on the semantics of a given verbal base by reporting on an in-depth qualitative study of the semantic contributions of both the base and the affix. Furthermore, it demonstrates that an explicit semantic decomposition of the base is essential for the analysis of the resulting derivative's semantics.




Paradigms regained: Theoretical and empirical arguments for the reassessment of the notion of paradigm


Book Description

The volume discusses the breadth of applications for an extended notion of paradigm. Paradigms in this sense are not only tools of morphological description but constitute the inherent structure of grammar. Grammatical paradigms are structural sets forming holistic, semiotic structures with an informational value of their own. We argue that as such, paradigms are a part of speaker knowledge and provide necessary structuring for grammaticalization processes. The papers discuss theoretical as well as conceptual questions and explore different domains of grammatical phenomena, ranging from grammaticalization, morphology, and cognitive semantics to modality, aiming to illustrate what the concept of grammatical paradigms can and cannot (yet) explain.




Russian verbal prefixation


Book Description

This book addresses the complexity of the Russian verbal prefixation system, which has been extensively studied but not yet fully explained. Traditionally, the different meanings of prefixes have been investigated and listed in dictionaries and grammars, and more recently linguists have attempted to unify various prefix usages under more general descriptions. The existing semantic approaches, however, do not aim to use semantic representations in order to account for the problems of prefix stacking and aspect determination. This task has so far been undertaken by syntactic approaches to prefixation, which divide verbal prefixes into classes and limit complex verb formation by restricting the structural positions available to the members of each class. I show that these approaches have two major drawbacks: the implicit prediction of the non-existence of complex biaspectual verbs and the absence of uniformly accepted formal criteria for the underlying prefix classification. In this book the reader can find an implementable formal semantic approach to prefixation that covers five prefixes: za-, na-, po-, pere-, and do-. It is shown how to predict the existence, semantics, and aspect of a given complex verb by combining LTAG and frame semantics. The task of identifying the possible affix combinations is distributed between three modules: syntax, which is kept simple (only basic structural assumptions); frame semantics, which ensures that the constraints are respected; and pragmatics, which rules out some prefixed verbs and restricts the range of available interpretations. To evaluate the theory, an implementation of the proposed analysis for a grammar fragment using a metagrammar description is provided. It is shown that the proposed analysis delivers more accurate and complete predictions with respect to the existence of complex verbs than the most precise syntactic account.