Author: Alec Marantz

Understanding sentences

Most approaches that try to relate linguistic knowledge to real-time processing of sentences have considered phrase structure rules as a reasonable formalism for the hierarchical constituent structure of sentences. From the perspective of the history of generative linguistics, one can trace the importance of phrase structure rules to Chomsky’s argument in Syntactic Structures (1957) that our knowledge of language involves a hierarchical arrangement of words, phrases, and phrases containing phrases, rather than knowledge of a linear string of words (Marantz 2019). Introductory linguistics textbooks present sentence structure with familiar Sentence → Noun_Phrase Verb_Phrase rules, whose output is illustrated in labelled branching tree structures.

(Image from Encyclopedia Britannica)

Note the function of “node” labels in standard phrase structure rules. First, and quite importantly, a label like NP appears in more than one rule. In a textbook presentation of English grammar, NP would appear as sister to VP as the expansion of the S node, but also as sister to the Verb in the expansion of VP. The important generalization captured here is that English includes phrases whose internal structure doesn’t uniquely determine their position in a sentence. Inside an NP, we don’t know if we’re inside a subject or an object – the potentially infinite list of NPs generated by the grammar could appear in either position.

It’s true that some languages will distinguish noun phrases using case. In a canonical tensed transitive sentence, the subject might be obligatorily marked with nominative case and the object with accusative case. In languages like Russian, this case marking appears on the head noun of the noun phrase as well as on agreeing adjectives and other constituents. Importantly, however, case-marked noun phrases don’t appear in unique positions in sentences. If you’re inside a dative-marked noun phrase in Icelandic, for example, you don’t know whether you’re in the subject position, the object position, or some other hierarchical position in a sentence. Furthermore, case marking in general appears as if it’s been added “on top of” a noun phrase. That is, the internal structure of a noun phrase (the distribution of determiners, adjectives, etc.) is generally the same within, say, a dative and a nominative noun phrase. As far as phrase structure generalizations are concerned, then, a noun phrase is a noun phrase, no matter what position it is in and what case marking it has.

In phrase structure rules, a node label that appears on the right side of one rule, such as VP in (1), can appear on the left side of another rule (2) that describes the internal structure of that node. That is, a node label serves to connect a phrase’s external distribution and its internal structure.

(1) S → NP VP

(2) VP → V (NP) (NP) PP* (S)

where parentheses indicate optionality and * indicates any number of the category, including zero
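To make these conventions concrete, here is a small Python sketch (my own illustration, not anything from the linguistic literature) that enumerates the category strings generated by a rule like (2), encoding parentheses as optional positions and bounding the Kleene-star position for display:

```python
from itertools import product

# Rule (2), VP -> V (NP) (NP) PP* (S), encoded as (category, kind) pairs,
# where kind is "req" (obligatory), "opt" (parenthesized), or "star" (Kleene star).
VP_RULE = [("V", "req"), ("NP", "opt"), ("NP", "opt"), ("PP", "star"), ("S", "opt")]

def expansions(rule, max_star=2):
    """Enumerate the category strings the rule generates.
    Kleene-star positions are bounded at max_star repetitions for illustration."""
    choices = []
    for cat, kind in rule:
        if kind == "req":
            choices.append([[cat]])            # exactly one copy
        elif kind == "opt":
            choices.append([[], [cat]])        # zero or one copy
        else:                                  # "star": zero up to max_star copies
            choices.append([[cat] * n for n in range(max_star + 1)])
    return [sum(combo, []) for combo in product(*choices)]

vps = expansions(VP_RULE)
# Every expansion begins with the head V, e.g. ["V"], ["V", "NP"], ["V", "NP", "NP"].
```

With the star bounded at two, the rule yields 24 distinct frames, from the bare intransitive ["V"] to ["V", "NP", "NP", "PP", "PP", "S"].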

The development of the “X-bar” theory of phrase structure captured the important insight that node labels themselves are not arbitrary with respect to syntactic theory. Nodes are labeled according to their internal structure, and the labels themselves consist of sets of features derived from a universal list. So a noun phrase is phrase with a noun as its “head.” More generally, there is some computation over the features of the constituents within a phrase that determines the features of the node at the top. And it’s these top node features that determine the distribution of the phrase within other phrases, since it is via these features that the node will be identified by the right side of phrase structure rules, which describe the distribution of phrases inside phrases. X-bar theory therefore provided a constrained template for phrase structure rules as well as a built-in relationship between the label of a node and its internal structure: phrases are labeled by their heads, so an XP has an X as its head.
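A toy Python encoding (my own, purely for illustration) of the central X-bar idea: a phrase’s label is projected from its head, so the same label both reflects internal structure (what the head is) and governs external distribution (where the XP may appear):

```python
# A minimal sketch: the label of a phrase is computed from its head's category,
# so an NP is by definition a phrase headed by an N, a VP by a V, and so on.
class Phrase:
    def __init__(self, head, complements=()):
        self.head = head                      # e.g. "N", "V"
        self.complements = list(complements)
        self.label = head + "P"               # phrases are projections of their heads

np = Phrase("N")
vp = Phrase("V", complements=[np])            # a VP with an NP complement
```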

We can describe linguists’ evolving understanding of phrase structure by reviewing a simplified history of the syntactic literature. In Aspects of the Theory of Syntax (1965), Chomsky observed an apparent difference between phrases that appear as the sister to a verb in a verb phrase and phrases that appear as the sister to the verb phrase. Individual verbs seemed to care about the category of their complements within the verb phrase, but they did not seem to specify the category of the sister to the verb phrase, the “subject” of the sentence. For example, some verbs like hit might be transitive, requiring a noun phrase, while verbs like give seem to require either two noun phrases or a noun phrase and a prepositional phrase. Chomsky suggested that the category “verb” could be further “subcategorized” into smaller categories according to their complements. Verbs like hit would carry the subcategorization feature +[__NP], putting them in the subcategory of transitive verbs and indicating that they (must) appear as a sister to a noun phrase within the verb phrase. On the other hand, verbs did not seem to specify the category of the “subject” of the sentence, which could be a prepositional phrase or an entire sentence, for example. Instead, verbs might care about whether the subject is, say, animate – a semantic feature. Verbs, then, could “select” for the semantic category of their “specifier” (the sister to the verb phrase), while they would be “subcategorized” by (or “for”) the syntactic categories of the complements they take.

It was then observed that the phrase structure rule for a verb phrase, as in (2), could be generated as the combination of (i) a union of the subcategorization features of English verbs, and (ii) some ordering statements that could follow from general principles (e.g., noun phrases want to be next to the verb and sentential complements want to be at the end of the verb phrase). From this observation came the claim that phrase structure rules themselves do not specify the categories of constituents other than their heads. That is, the distribution of non-heads within phrases, as well as their order, would follow from principles independent of the phrase structure rules.
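The observation that the VP rule can be read off the lexicon can be sketched in Python. The lexicon entries below are toy examples of my own, not a claim about the full English verb inventory:

```python
# Subcategorization frames as lexical entries: each verb lists the category
# sequences it can take as complements (an empty tuple = intransitive).
LEXICON = {
    "hit":   [("NP",)],                     # +[__NP], transitive
    "give":  [("NP", "NP"), ("NP", "PP")],  # ditransitive alternation
    "sleep": [()],                          # intransitive
    "say":   [("S",)],                      # sentential complement
}

def complement_frames(lexicon):
    """The union of subcategorization frames across all verbs: the complement
    options that a VP phrase structure rule would have to make available."""
    return {frame for frames in lexicon.values() for frame in frames}
```

On this view, the phrase structure rule for VP is redundant: its complement options are just the union of the frames, with linear order supplied by independent principles.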

At this point, we give a shout out to Jane Grimshaw, who contributed foundational papers to the next two developments we’ll discuss. First, Grimshaw (1979, 1981) noted that verbs like ask seem to semantically take a “question” complement, but that this complement can take the syntactic form of either a sentence (ask what the time is) or a noun phrase “concealed question” (ask the time). Other verbs, like wonder, allow the sentence complement but not the noun phrase (wonder what the time is, but not *wonder the time). Grimshaw suggested that semantic selection, like the selection for animate subjects Chomsky described in Aspects, must be distinct from selection for a syntactic category, i.e., subcategorization. She dubbed syntactic category selection “c-selection” and suggested an independence of c-selection and “s-selection” (selection for semantic category or features).

In responding to Grimshaw, David Pesetsky (1982) noted a missing cell in the table one gets by crossing the c-selection possibilities for verbs that s-select for questions. Although there are verbs like wonder that c-select for a sentence and not an NP, there are no verbs that c-select for an NP and not a sentence.

verb     s-selects question   c-selects sentence   c-selects NP
ask      yes                  yes                  yes
wonder   yes                  yes                  no
*        yes                  no                   yes

Simplifying somewhat, based on this asymmetry, Pesetsky asks whether c-selection is necessary at all. Suppose verbs are only specified for s-selection. What, then, explains the distribution of concealed (NP) questions? Pesetsky notes that the distribution of noun phrases is constrained by what is called “case theory” – the need for noun phrases to be “licensed” by (abstract) case. So the ability to take a noun phrase complement is a special ability, case assignment, which is associated with the theory of case and the special status of noun phrases. By contrast, there is no parallel theory governing the syntactic distribution of embedded sentences. According to Pesetsky, then, verbs can be classified according to whether they s-select for questions. If they do, they will automatically be able to take sentential question complements, since complement sentences don’t require any extra special grammatical sauce. However, to take a concealed question, a verb must also be marked to assign case. The verb ask has this case-assigning property but the verb wonder doesn’t.
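Pesetsky’s logic can be rendered as a small Python sketch. The feature values below are toy lexical entries for illustration; the point is that the missing cell in the table falls out of the system:

```python
# Verbs are specified only for s-selection and for case assignment;
# the c-selection facts are derived rather than stipulated.
VERBS = {
    "ask":    {"s_selects_question": True, "assigns_case": True},
    "wonder": {"s_selects_question": True, "assigns_case": False},
}

def allowed_question_complements(verb, verbs=VERBS):
    """Sentential questions come for free with s-selection; concealed (NP)
    questions additionally require the verb to assign abstract case."""
    entry = verbs[verb]
    if not entry["s_selects_question"]:
        return set()
    comps = {"S"}                 # embedded sentences need no special licensing
    if entry["assigns_case"]:
        comps.add("NP")           # concealed questions need case
    return comps
```

Note that no setting of these two features yields a verb taking an NP question but not a sentential one: the unattested cell is underivable, which is exactly the asymmetry Pesetsky used against primitive c-selection.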

Perhaps, then, there is no c-selection – no “subcategorization features” – at all in grammar.  Rather, the range of complements to verbs (along with nouns, adjectives, and prepositions) and their order and distribution would be explained by other factors, such as “case theory.”

But is it really true that no syntactic elements are specified to want to combine with nouns, verbs, or adjectives? While it seems possible to maintain that Ns, Vs, Adjs, and Ps don’t c-select, it would seem that other heads, like “Tense” or “Determiner”, want particular categories as their complements. And here’s where Grimshaw’s second important contribution to phrase structure theory comes in, a concept we’ll take up next time: the “extended projection” of a lexical item, like a verb (Grimshaw 1991).

 

References

Chomsky, N. (1957). Syntactic Structures. Walter de Gruyter.

Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.

Grimshaw, J. (1979). Complement selection and the lexicon. Linguistic Inquiry 10(2): 279-326.

Grimshaw, J. (1981). Form, function and the language acquisition device. In C. L. Baker and J. J. McCarthy (eds.), The Logical Problem of Language Acquisition, 165-182. MIT Press.

Grimshaw, J. (1991). Extended projection. Brandeis University: Ms. (Also appeared in Grimshaw, J. (2005). Words and Structure. Stanford: CSLI).

Marantz, A. (2019). What do linguists do? The Silver Dialogues.

Pesetsky, D. (1982). Paths and Categories. MIT: PhD dissertation.

What does it mean to recognize a morphologically complex word?

Lexical access has been formalized in various bottom-up models of word recognition as the process of deciding, among all the words of a language, which word is being presented (orally or visually). In the auditory modality, thinking of the word as unfolding from beginning to end, phoneme by phoneme, the models imagine a search among all lexical items for the items beginning with the encountered phonemes. This “cohort” of lexical competitors consistent with the observed phonemes is winnowed down as more phonemes are heard, until the “uniqueness point” of the presented word, the point at which only a single item is left in the cohort. This final item is recognized as the word being heard.

/k/      /kl/     /klæ/    /klæʃ/
clash    clash    clash    clash
clan     clan     clan
cleave   cleave
car
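The winnowing of the cohort can be sketched in a few lines of Python, using the toy lexicon from the table above (letters stand in for phonemes here, purely for convenience):

```python
# A minimal sketch of cohort winnowing: at each successive segment, the cohort
# is the set of lexical items consistent with the input so far.
LEXICON = ["clash", "clan", "cleave", "car"]

def cohorts(word, lexicon=LEXICON):
    """Return the cohort remaining after each successive segment of the word."""
    return [[w for w in lexicon if w.startswith(word[:i])]
            for i in range(1, len(word) + 1)]

def uniqueness_point(word, lexicon=LEXICON):
    """1-based index of the first segment at which only one candidate remains."""
    for i, cohort in enumerate(cohorts(word, lexicon), start=1):
        if len(cohort) == 1:
            return i
    return None
```

For clash the uniqueness point is the fourth segment, matching the table; for car it is already the second, since no other item in this toy lexicon begins with “ca”.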

So, for apparently morphologically simple words, like cat, word recognition in the cohort-based view involves deciding which word, from a finite list provided by the grammar of the language, is being presented. For psycholinguistic processing models, we can assign a probability distribution over the finite list, perhaps based on corpus frequency, and, for auditory word recognition, we can compute the probability of each successive phoneme based on its probability distribution over the members of the cohort compatible with the input string of phonemes encountered at each point.
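The probability computation described here can be sketched as follows; the frequencies are invented numbers standing in for corpus counts:

```python
import math

# Toy corpus frequencies over the cohort lexicon (invented for illustration).
FREQ = {"clash": 20, "clan": 10, "cleave": 5, "car": 65}

def next_segment_prob(prefix, segment, freq=FREQ):
    """P(next segment | prefix): probability mass of cohort members consistent
    with prefix + segment, relative to the cohort consistent with prefix."""
    cohort = {w: f for w, f in freq.items() if w.startswith(prefix)}
    total = sum(cohort.values())
    match = sum(f for w, f in cohort.items() if w.startswith(prefix + segment))
    return match / total if total else 0.0

def surprisal(prefix, segment, freq=FREQ):
    """-log2 P(segment | prefix); higher values mean a less expected segment."""
    p = next_segment_prob(prefix, segment, freq)
    return -math.log2(p) if p > 0 else float("inf")
```

With these toy counts, hearing /l/ after /k/ carries surprisal of about 1.5 bits, since only 35% of the frequency mass of the /k/-cohort continues with /l/.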

But what about multimorphemic words, either derived, like teacher, or inflected, like teaches? One approach to modelling the recognition of morphologically complex words would be to assume that the grammar provides structured representations of these words as consisting of multiple parts, “morphemes,” but that these structured representations join the list of monomorphemic words as potential members of the cohorts entertained by the listener/reader when confronted with a token of a word. For psycholinguistic models, the probability of these morphologically complex units can be estimated from their corpus frequency, as with apparently monomorphemic words like cat.

An immediate issue arises, at least for inflection, that we can recognize words that we haven’t heard before. (Here, we can delay the question of how the productivity, or lack thereof, of derivational morphology might figure into an approach to morphological processing that separates derivation and inflection. The relevant issues at this point can be illustrated with inflection, like tense, agreement, case and number morphology.) Erwin Chan from UPenn has quantified this aspect of inflectional morphology (Chan 2008). As you encounter more inflected forms as a learner and fill out the “paradigms” of noun and verb stems in your language, you also encounter more new stems with incomplete paradigms. Figure 4.5 from Chan’s dissertation shows that many of the expected inflected forms of Spanish verb lemmas are frequently unattested. This is known as the sparse data problem. For any amount of input data, some inflected forms of exemplified stems will be missing, requiring one to use one’s grammar to create these inflected forms when they are needed.

The sparse data problem shows that people must be able to process (produce and understand) words they haven’t heard or read before. But this might not be a real issue for word recognition if the list of words consistent with a grammar is finite. Speakers could use their grammars to pre-generate all the (potential) words of the language and place them in a list from which the cohort of potential candidates for recognition can be derived.

The immediate problem with this approach involves the psycholinguistic processing models alluded to earlier. These models require a probability distribution over the members of a cohort, and this distribution is estimated on the basis of corpus statistics. But what is the probability associated with a novel word, one generated by the grammar but not yet encountered? If one follows this approach to word recognition, one can estimate the expected corpus frequency of a morphologically complex word generated by the grammar based on the frequency of the stem and other factors. Fruchter & Marantz (2015), for instance, estimate the surface frequency of a complex word composed of stem X and suffix Y, F(X+Y), as a function of stem frequency (F(X)), biphone transition probability (the probability of encountering the first two phonemes of the suffix, given the preceding two phonemes of the stem, BTP(Y|X)), and semantic coherence (a measure of semantic well-formedness for a complex word, SC(X,Y)).

On a “whole word” approach to lexical access, where morphologically complex words join morphologically simple words on a list of candidates for recognition from which cohorts are computed, a single measure of word expectancy related to corpus frequencies of words and stems is used to derive frequency distributions over candidate words as wholes and, in the case of auditory word recognition, upcoming phonemes. The expectation is that recognition of a morphologically complex word will be modulated by whole word corpus frequency in the same way as a monomorphemic word.

The experimental work from my lab over the past 20 years, as well as related research from other labs, has shown, however, that this whole word approach to morphologically complex word recognition makes the wrong predictions, both for early visual neural responses in the processing of orthographically presented morphologically complex words and in the phoneme surprisal responses in the processing of auditorily presented complex words. That is, being morphologically complex matters for processing at the earliest stages of recognition, a fact that is incompatible with at least existing whole word cohort models of word recognition. For example, the presentation of an orthographic stimulus (a letter string) elicits a neural response from what has been called the Visual Word Form Area (VWFA) at about 170ms. This response is not directly modulated by the corpus frequency of a morphologically complex word, as might be expected if these words were on a stored list with monomorphemic words, but by a variable that reflects the relative frequency of the stem and the affixes. Our experiments have found that the transition probability between the stem and the affixes is the best predictor of the 170ms response (Solomyak & Marantz 2010; Lewis, Solomyak & Marantz 2011; Fruchter, Stockall & Marantz 2013; Fruchter & Marantz 2015; Gwilliams & Marantz 2018).

For auditory word recognition, a neural response in the superior temporal auditory processing regions peaking between 100 and 150ms after the onset of a phoneme is modulated by “phoneme surprisal,” a measure of the expectancy of the phoneme given the prior phonemes and the probability distribution over the cohort of words compatible with the prior phonemes. Work from my lab has shown that putting whole morphologically complex words into the list over which cohorts are defined does not yield good predictors for phoneme surprisal for phonemes in morphologically complex words (Ettinger, Linzen & Marantz 2014; Gwilliams & Marantz 2015). Gwilliams & Marantz (2015), for instance, observe a main effect of phoneme surprisal based on morphological decomposition in the superior temporal gyrus but no effect based on whole word, or linear, phoneme expectancy (Figure 3).

Figure 3. Correlation between morphological and linear measures of surprisal and neural activity in superior temporal gyrus

It is clear, then, that morphological structure feeds into the probability distribution over candidates for recognition to yield accurate measures of phoneme surprisal. However, we do not have a motivated model of how exactly this occurs. That is, although we have shown that morphological complexity matters in word recognition, we do not have a good model of how it matters.

In his dissertation, Yohei Oseki proposes to attack the issue of processing multimorphemic words by eliminating the distinction between word and sentence processing (Oseki 2018) – which is in any case a dubious categorical contrast given the insights of Distributed Morphology and related linguistic theories.

If morphologically complex words are “parsed” from beginning to end, even if presented all at once visually, then the mechanisms of recognizing and assigning structure to a morphologically complex word would be the same as the mechanisms of recognizing and assigning structure to sentences. The estimate of the “surprisal” of a complex word, then, would not be an estimate of the corpus frequency of the word but would be computed over the frequencies of the individual morphemes and over the “rules” or generalizations that define the syntactic structure of the word (see also Gwilliams & Marantz 2018). Work in sentence processing has been able to assign “surprisal” values to sentences in this way, and Oseki provides evidence that this approach yields surprisal estimates for visually presented words that correlate with the 170ms response from VWFA.
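The contrast between the two surprisal computations can be sketched as follows. All frequencies and rule probabilities here are invented numbers; the sketch only illustrates the difference in what the probability distribution is defined over:

```python
import math

# Whole-word approach: surprisal from a frequency list of words as wholes.
WORD_FREQ = {"teacher": 50, "teaches": 120, "cat": 300}

# Decomposed approach: surprisal computed from morpheme frequencies plus the
# probability of the word-internal "rule" combining stem and affix.
MORPHEME_FREQ = {"teach": 200, "cat": 300}
RULE_PROB = {("V", "-er"): 0.1, ("V", "-es"): 0.4}

def whole_word_surprisal(word, freq=WORD_FREQ):
    """-log2 of the word's relative corpus frequency."""
    total = sum(freq.values())
    return -math.log2(freq[word] / total)

def decomposed_surprisal(stem, rule, freq=MORPHEME_FREQ, rules=RULE_PROB):
    """Stem surprisal plus the surprisal of the rule attaching the affix."""
    total = sum(freq.values())
    return -math.log2(freq[stem] / total) - math.log2(rules[rule])
```

On the decomposed view, a never-encountered inflected form is no problem: its surprisal is defined as long as its stem and the relevant rule are attested, which is exactly what the sparse data problem requires.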

Oseki’s work, while promising, does raise a number of problems and issues to which we will return. However, it serves to connect word recognition directly with sentence processing, leading us to examine what we know and don’t know about the latter. In the essays to follow, we’ll detour through considerations of sentence recognition and of syntactic theory, with some side trips through aspects of phonology and of meaning, before returning to our initial question of identifying the best models of morphologically complex word recognition to pit against experimental data.

 

References

Chan, E. (2008). Structures and distributions in morphology learning. University of Pennsylvania: PhD dissertation.

Ettinger, A., Linzen, T., & Marantz, A. (2014). The role of morphology in phoneme prediction: Evidence from MEG. Brain and Language, 129, 14-23. 

Fruchter, J. & Marantz, A. (2015). Decomposition, Lookup, and Recombination: MEG evidence for the Full Decomposition model of complex visual word recognition. Brain and Language, 143, 81-96. 

Fruchter, J., Stockall, L., & Marantz, A. (2013). MEG masked priming evidence for form-based decomposition of irregular verbs. Frontiers in Human Neuroscience, 7, 798.

Gwilliams, L., & Marantz, A. (2015). Tracking non-linear prediction in a linear speech stream: Influence of morphological structure on spoken word recognition. Brain and Language, 147, 1-13.

Gwilliams, L., & Marantz, A. (2018). Morphological representations are extrapolated from morpho-syntactic rules. Neuropsychologia, 114, 77-87.

Lewis, G., Solomyak, O., & Marantz, A. (2011). The neural basis of obligatory decomposition of suffixed words. Brain and Language, 118(3), 118-127. 

Oseki, Y. (2018). Syntactic structures in morphological processing. New York University: PhD dissertation.

Solomyak, O., & Marantz, A. (2010). Evidence for early morphological decomposition in visual word recognition: A single-trial correlational MEG study. Journal of Cognitive Neuroscience, 22(9), 2042-2057.

Blogging Again

So, last semester I ran into some sleeping issues that ate up the time I might have spent blogging my morphology course. Since these issues are finally resolving, I hope to catch up and review some of what I learned last semester teaching the class. However, morphology continues this semester in a seminar I’m co-teaching with Stephanie Harves. Topic: Argument Structure and Morphology. While, inevitably, the seminar will end up discussing the biggies – causatives, applicatives, and nominalizations – we do intend to focus on particular issues to frame our reading of some current literature, particularly research of past and present students. Let me outline some of these issues in the interest of drumming up comments and suggestions from any blog-readers (and, yes, synthetic compounds should come up as well).

A key topic for the connections among syntax, morphology and argument structure is whether roots take arguments/complements, the alternative being that roots must be categorized as nouns, verbs, adjectives, or prepositions prior to the resulting head merging with complements. Here I see three major proposals, with some mixing and matching of subproposals across the three general lines of research. The approach associated with the Hale/Keyser position places roots in the position of arguments (and/or predicates) below a v/V head, into which they incorporate or amalgamate/coalesce. The second approach (H. Harley as a proponent) sees roots as argument-takers such that a vP would consist of a v and a rootP containing the internal arguments of the verb. The third approach, which I have been investigating lately, assumes that roots must be first-merged with a categorizing head (or another root) as an adjunct to that head such that any effects of a root on argument structure must be mediated by the categorizing head.

A related topic, one that we actually began the seminar with, although obliquely, is that of the nature of the categorizing heads n, v, and adj. We revisited a paper by Paul Kiparsky (Kiparsky, P. (2017). Nominal verbs and transitive nouns: Vindicating lexicalism. In On Looking into Words (and Beyond), 311) that argues against a view of the distinction between gerunds and complex event nominalizations that ties the difference simply to the height of attachment of a categorizing n head. On this view, the gerunds have the n head high, at least above voice and perhaps above aspect, while the complex event nominalizations have it low, say attached to the vP. Kiparsky’s main argument is that there is a sense of “noun” in which the event nominalizations are nouns but the gerunds aren’t. Although in English gerunds can take subjects with the possessive clitic (John’s running the race), and although in many languages the gerund itself may be a case-marked word that, morphophonologically, looks like a noun, Kiparsky argues that in English and other languages, the gerunds (which are part of the inflectional paradigm of a verb) don’t take adjectives or, for that matter, anything else associated with DP structure. He suggests, covertly following the analysis of Reuland (Reuland, E. J. (1983). Governing -ing. Linguistic Inquiry, 14(1), 101-136), that the gerund head has case-attracting features but is not of category N/n.

We’ll be discussing Jim Wood’s recent book ms. on Icelandic nominalizations soon. Jim argues that Icelandic nominalizes verbs, not vPs, for complex event nominalizations. I think, given Kiparsky’s cross-linguistic insights about gerunds, this has to be true in general. That is, the category heads n, v and adj attach to categorized stems and/or to roots and begin the extended projection associated with their heads. Of course i*, then, must be treated as “derivational” in some sense, not part of the extended projection of a category head, since one can nominalize or verbalize or adjectivalize a structure with an i* (voice, etc.) head.

I’ll write more about these ideas soon, but in this context we must tip our hats to Hagit Borer, who argued that the category heads like n and v are really associated with the higher functional structure and not substantive morphemes. So, nouns are in some sense induced by higher DP type heads in a way that points to Kiparsky’s generalization about the distinction between gerunds (no DP structure) and complex event nominalizations (yes DP structure).


© 2024 NYU MorphLab
