Phonology, phonology, phonology

The last post re-examining Pinker’s Words and Rules framework left open the question of the linguistic representation of “irregularity.” What in fact do speakers know about, e.g., the distribution of suffixes like the phonologically zero past tense ending (on, say, the past tense of fit) or the restricted nominalizer –ity? Tied to this question is whether “regular” affixes (like the English /d/ past tense or /z/ plural) enjoy any special representational status. Within Distributed Morphology (DM), this last question involves the potential existence and status of “default” phonological forms (Vocabulary Items) for morphemes, forms that are inserted to realize feature bundles without contextual restrictions.

At their core, irregular affixes involve lists – in particular, lists of environments in which they occur. For example, we have suggested that –ity attaches to adjectives formed with the suffixes –al and –able, as well as to a list of stems that includes sane and vain. Similarly, the irregular past tense forms of English implicate lists of verbs that exhibit one or another of several irregular patterns.

A crucial insight about lists, made explicit in work by Maria Gouskova (e.g. Gouskova et al. 2015), is that lists of forms are sublexicons (subsets of the lexicon) and as such are amenable to phonological analysis. That is, to the extent that the phonotactics (generalizations about possible sounds and sequences of sounds) of a language is computed over the lexicon, each list involved in “irregular” morphology should generate a phonotactic grammar summarizing phonological generalizations over members of the list. When new forms are presented to the learner, they can be evaluated with respect to the phonology of the “irregular” lists to determine the likelihood that they will also be subject to the relevant irregularities (should the nominalized form of a new adjective prane be pranity?).
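This sublexicon idea can be sketched computationally: train a simple phonotactic model on the stems of the –ity list and score novel stems against it. Everything below (the toy word list, orthographic rather than phonemic representations, bigram counts, add-one smoothing) is an illustrative assumption, not Gouskova et al.'s actual implementation:

```python
# Illustrative sketch only: a smoothed bigram model over a toy -ity sublexicon,
# used to score how well a novel stem fits the list's phonotactics.
import math
from collections import Counter

sublexicon = ["sane", "vain", "chaste", "obese", "serene"]  # toy -ity stem list

def bigrams(word):
    padded = f"#{word}#"  # '#' marks word boundaries
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

counts = Counter(bg for w in sublexicon for bg in bigrams(w))
total = sum(counts.values())
vocab = len(counts) + 1  # add-one smoothing mass for unseen bigrams

def log_score(word):
    """Mean smoothed log-probability of the word's bigrams under the sublexicon."""
    bgs = bigrams(word)
    return sum(math.log((counts[bg] + 1) / (total + vocab)) for bg in bgs) / len(bgs)

# Higher (less negative) = better phonotactic fit to the -ity list.
print(f"prane:  {log_score('prane'):.2f}")
print(f"splorx: {log_score('splorx'):.2f}")
```

On this toy grammar, prane fits the list better than a phonotactically alien stem like splorx, so a learner reasoning this way should find pranity a likelier nominalization than splorxity.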

Although not explained or implemented in precisely these terms, the work of Albright and Hayes (2003) on the acquisition of the English past tense also involves the learner considering the phonological form of stems in creating possible (irregular and regular) rules of past tense formation. Their research argues that any learning mechanism that is searching for phonological grounding of morphological rules (for them, the phonology of the input to the rule) will learn generalizations about the distribution of apparent regular (or default) morphology alongside that of the irregulars. For the English past tense, this means that speakers will learn that certain stem endings will reliably take the regular /d/ past tense, while for other phonological forms of stems, the use of the regular is less certain. They present behavioral evidence in favor of speakers’ representation of phonological “islands of reliability” for both regular and irregular past tense endings. This brings into question the relevance of the notion of a “default” ending, since regulars and irregulars are learned and implemented by the same phonologically sensitive mechanisms. In other words, regulars may be no more default than irregulars. A nice aphasia study supporting this conclusion about regulars can be found in Rimikis and Buchwald (2019).
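The rule-scoring idea at the heart of Albright and Hayes's model can be sketched briefly: a rule's raw reliability is its hits over its scope (forms it derives correctly over forms it could apply to), adjusted downward toward a lower confidence limit so that rules attested on only a few forms are penalized. The verb counts below are invented, and the smoothing and confidence adjustment simplify their actual procedure:

```python
# Sketch of Albright & Hayes-style rule scoring; the counts are invented and
# the confidence adjustment is a simplification of their procedure.
import math

def reliability(hits, scope):
    """Raw reliability: forms a rule derives correctly / forms it could apply to."""
    return hits / scope

def adjusted(hits, scope, z=0.675):
    """Lower confidence limit on reliability (z=0.675, roughly a 75% limit),
    penalizing rules supported by only a few forms."""
    p = (hits + 0.5) / (scope + 1)  # smoothed reliability estimate
    return p - z * math.sqrt(p * (1 - p) / scope)

# A narrow rule covering an "island of reliability" vs. the general regular rule:
print(round(adjusted(18, 19), 3), round(adjusted(4000, 4300), 3))
```

Crucially, nothing in this scoring mechanism distinguishes "regular" from "irregular" rules: a narrow rule with near-perfect hits can outscore the general rule in its own phonological neighborhood, which is the sense in which regulars may be no more default than irregulars.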

I will return in a later post to some of the many important implications of Gouskova’s work for morphological processing. Here I would just like to emphasize the importance of the phonology of lists for generating predictions about the processing of morphologically complex words that go beyond the frequency of stems and of transition probabilities. In recognizing the word sanity, for example, it might matter how well sane obeys the phonotactics of the –ity list. In an information theoretic sense, the worse the fit of the stem to the list, the more information is being conveyed by the word. In addition, Gouskova proposes that the outputs of affixation are also evaluated by the phonology, meaning all the –ity nouns are also on a list governed by its own phonological grammar. So, in recognizing sanity, the well-formedness of sanity with respect to the list of –ity nouns might also matter.

As a last plea for researchers interested in morphological processing to pay crucial attention to what the phonologists are saying, I would like to point you to a recent paper on German plurals by Jochen Trommer (2020). Trommer presents a phonological analysis of the various German plural formations. In the Words and Rules approach, the –s plural is considered “regular” (in the sense of default), while the various other plural endings with or without stem umlaut are considered learned irregulars. Although his phonological assumptions are a bit funky, Trommer shows how to analyze all but the –s plural as regular, based on the phonology (and gender) of the stem. Here are the main generalizations captured by his analysis, taken from his example (20):

Generalization I: Umlaut and plural -n are in complementary distribution

Generalization II: Feminine nouns strongly prefer plural -n in contexts where non-feminine nouns do not

Generalization III: Noun roots ending in …ə always take plural -n (and consequently never show umlaut)

Generalization IV: Nouns with plural -er always umlaut

The plural –s, by contrast, goes on a list of borrowed stems. In a sense, then, the plural –s is an irregular plural. One can imagine a phonological grammar for these stems in Gouskova’s terms, so that there is no real “default” –s. Or perhaps there is room in linguistic theory for defaults, properly described. At the moment, though, morpho-phonology is better equipped to handle morphology without defaults than with them.

In sum, the representation of so-called “irregularity” thoroughly implicates the phonology, and phonology may provide the variables that explain modulation in brain responses during the processing of morphologically complex words that go beyond the stem and transition probability variables we’ve been considering.

References

Albright, A., & Hayes, B. (2003). Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2), 119–161.

Gouskova, M., Newlin-Łukowicz, L., & Kasyanenko, S. (2015). Selectional restrictions as phonotactics over sublexicons. Lingua, 167, 41–81.

Rimikis, S., & Buchwald, A. (2019). The impact of morphophonological patterns on verb production: Evidence from acquired morphological impairment. Clinical Linguistics & Phonetics, 33(1–2), 68–94.

Trommer, J. (2020). The subsegmental structure of German plural allomorphy. Natural Language & Linguistic Theory, 1–56.

Comments

  1. Omer Preminger

    Two questions:

    1. Would you endorse similar skepticism of the idea that there are default (=context-free) meanings associated with terminals / bundles of features? From where I sit, there is almost nothing about the insertion of phonological material that doesn’t also apply to the insertion of semantic material, and vice versa. (One possible exception: the sizes of the relevant domains, especially as it concerns so-called “idioms” vs. so-called “irregulars”.) If what’s good for the phonological goose is good for the semantic gander, how comfortable would you then be with the claim that there is no notion of a default meaning for a terminal / feature-bundle, either?

    2. Granting that one can get experimental subjects to produce “plung” as the past tense of “pling”, it’s not clear to me how the inference to competence (e.g. both “regulars” and “irregulars” are rule/generalization-based) then proceeds. After all, one can also get experimental subjects to rate 4 as “a more even number” than 806…

    Full disclosure: I’d be happy with the conclusion that context-free spellouts (to either interface) enjoy no special status. It would, for example, furnish very straightforward explanations for why “cahoot”, “fangle”, “whelm”, “drib” and “shrift” lack any context-free meaning (at least in contemporary English) yet participate in meaningful larger expressions whose syntactic structure is entirely unremarkable. Nevertheless, I’d like to understand the argument against ‘defaults’ from “regulars” vs. “irregulars” a little better.

  2. Alec Marantz

    Thanks for the questions! Keep them coming!

    1. Yes, similar skepticism about default meanings. Note, though, that we’re talking about skepticism — I’m entirely open to the possibility that there are defaults in both domains. And the topic of this coming Tuesday’s post is contextual allosemy and idioms, so I’ll be highlighting this question of different locality domains. (Isn’t it “sauce for the goose”?)

    2. I agree that it’s difficult to draw any straightforward conclusions from Wug experiments. Note in particular that subjects rarely produce irregulars for Wug words, and behavior across subjects is quite variable. The experimental indications that regulars are like irregulars come from the behavior of subjects with regulars, e.g. evidence for “islands of reliability” for regulars. The Rimikis and Buchwald (2019) paper I cite provides additional data from a morphological aphasic leading to the same conclusion, and also gives a really nice summary of the issues.

    To rephrase the broader point: “Irregular vs. Regular” is a descriptive division understood in a particular way when applied to English past tense and plural forms. The broader theoretical issue is the knowledge of speakers about forms where the inflection or derivation depends in some way on the features or identity of the stem. As Trommer and Gouskova make clear, speakers’ knowledge about affixation depends on phonological knowledge about sets of stems. Within the context of this improved understanding of allomorphy, we can ask again, do morphemes have default phonological realizations? Along with investigation of “islands of reliability,” this question also hinges on the proper treatment of apparent “paradigm gaps,” which should not exist on the simplest theory of defaults.

    • Omer Preminger

      Thanks for the responses! I very much agree about “paradigm gaps”, and here too I think the parallels between PF and LF are very suggestive. (The LF counterpart being “cahoot” et alia.)

    • Luke Adamson

      Reactions to this exchange, but also more general reactions:

      Sorry for the self-reference, but: there’s some discussion of English participles and past tense forms in my dissertation relevant to the question of defaults, paradigm gaps, and morphophonology (pp. 202–213). In brief, I don’t think paradigm gaps have to result from the non-existence of underspecified exponents; a crashing morphophonological analysis fares better when independent evidence suggests that the morphemes in question have underspecified realizations. The way I analyze the stride gap is with implicational morphophonology from shared feature representation between past tense and participles, which results in a gap when the morphophonology comes up empty; similar formulations are conceivable. This happens in spite of the existence of a default realization of both the root (stride) and the participial node (-ed). Charles has his own analysis of the stride gap in his book.

      There’s still the possibility that there can be a lack of a “default” exponent or interpretation (caboodle — sure), and this might be a better analysis for other gaps. For what it’s worth, this seems like a natural extension of DM’s treatment of allomorphy, anyway (though maybe I think this just because I’ve read e.g. Harley 2014, Arregi and Nevins 2014). In my own interpretation of the DM literature, “default” exponents weren’t a privileged type of object anyway, at least for researchers who adopted the Halle 1997 version of the Subset Principle as the way of doing competition for insertion. As far as I understand it, “default” is merely a convenient way of talking about things. It’s really a matter of what the contextual specifications are: a “default” past tense exponent is specified for something like T[+past], even if you want to say that the distribution is heterogeneous with respect to other properties. The Subset Principle formulation in Halle 1997:128 makes no mention of defaults — the least specified exponent may be called the “elsewhere” item in the paper, but according to the principle, an exponent still has to have a subset of the features of the morpheme in order to be inserted. (Admittedly, some statements of the Elsewhere Condition in the literature have privileged the elsewhere item, though maybe not always with strong commitment.)

      In any case, it is definitely worth more seriously considering how to tease apart the two options of i) complementary specifications of two or more exponents vs. ii) exponents with specified contexts and exponents with less specified contexts. For many cases, I still think heterogeneity of distribution makes (ii) a better option. But just because (ii) exists doesn’t imply that all contexts are always covered, so you can still get gaps.

      I’ll have to look at the Albright and Hayes work — and think more about Maria Gouskova’s — but from the description, I’m not sure why, in terms of representation, it should matter that one learns that certain verbs take what ends up being (in the adult grammar) the default past tense form rather than an irregular form. (I get how this would matter for acquisition if you list items until a productive pattern emerges.) I’m fine with the idea that speakers make morphophonological generalizations about lists, but it seems to me that this isn’t fundamentally at odds with the idea that speakers can have exponents with little/no contextual restrictions that apply in the absence of either lexically specific context or MP context. If speakers make generalizations about sublists of items that take the “default”, I guess it’s an issue of how MP generalizations are encoded. But distributional tendencies don’t necessarily have to be part of the contexts for insertion of Vocabulary Items.

      • Omer Preminger

        @Luke: It’s not clear to me that one could have assimilated the treatment of the ‘stride’ PF-gap to the ‘cahoot’ LF-gap anyway, since what’s “missing” in the ‘cahoot’ case is the elsewhere/context-free insertion rule, whereas it’s not usually thought that the participle is the elsewhere/context-free member of the simple past vs. participle opposition. I was thinking more like, e.g., how to distinguish modal ‘have to’ (which has both finite and nonfinite forms) from modal ‘must’ (which lacks a nonfinite form). It’s rather easy to argue that, in English, nonfinite is what you call a “heterogeneous distribution”. And so this lends itself to an analysis as the absence of an elsewhere (or underspecified) PF rule for ‘must’, akin to the absence of an elsewhere (or underspecified) LF rule for ‘cahoot’.

        • Luke Adamson

          Sure — I didn’t mean to suggest that we should try to attribute “cahoot” gaps to an MP parallel (though now that you mention it, it’s interesting to think about what the MP parallel would look like). I just meant to express that I don’t think a theory that has underspecified realization is incompatible with paradigm gaps in places where we expect the categories in question to have default exponents. (I guess this would be unexpected for a theory in which defaults are supposed to “rescue” derivations no matter what.)

          To be clear, I think there are real examples of e.g. defective root distributions that are amenable to the PF no-default analysis. It’s less clear to me that the incompatibility of English modals with nonfinite environments should be attributed to the Vocabulary (“must” and “have to” have different syntactic properties, too), but that’s another issue

          • Omer Preminger

            @Luke: I wonder what you have in mind in terms of “different syntactic properties” of _must_ vs. _have to_ that would account for the inability of _must_ to occur in nonfinite environments. I’ve encountered the claim that _must_ is “very high” in the clause and therefore, e.g., the relevant infinitives are too small to include the projection it’s hosted in. But that always struck me as a failed argument: prepositional-complementizer _for_ displays a set of extraction restrictions paralleling the that-trace effect, and so whether or not it’s truly a preposition, it is definitely a complementizer. And yet _must_ can’t occur in _for_-infinitives, either:

            (1)
            a. * It’s possible for Kim to must leave early.
            b. * It’s possible for Kim must to leave early.

            In finite clauses, _must_ is still lower than the surface position of the subject, which is *somewhere* in the TP field, regardless of where exactly one thinks that is. Since the embedded clauses in (1a-b) are necessarily CPs (see above), there is no explanation based on clause-size that would account for why _must_ can’t appear in (1a-b).

            The only thing left that I can imagine saying is that _must_ literally competes with _to_ for the same structural position; but that then leaves unexplained why you can’t say _Kim will must leave early_, since it’s obviously not competing for position with a _to_ there.

            Of course, any two of these can have different explanations (e.g. _must_ does compete with _to_ for position, but also, the complement domain of _will_ is too small to host _must_). But I think the explanation whereby _must_ just lacks an elsewhere(=nonfinite) form provides a single, unified way to derive all of these…

          • Luke Adamson

            @Omer I don’t see “must” and “have to” as the same type of syntactic object even if they both have modal semantics. Call “must” a realization of a Mod head; it does seem like it’s got to be higher up than “has to”, which behaves more like main verbs with respect to e.g. do-support and the position of negation:

            Must she leave?
            *Does she must leave?
            *Has she to leave?
            *Has-to she leave?
            Does she have to leave?

            She must not leave.
            *She (does) not must leave.
            *She has not to leave.
            #She has to not leave.
            She doesn’t have to leave.

            One problem with the Vocabulary approach for “must” and other modals is the evidence from ellipsis. I say some stuff about ellipsis and defectivity with respect to “stride” in my dissertation (as well as the “abolir” gap in Spanish) – basically, the gapped objects are licit under ellipsis, which is expected under a PF-based account of their defectivity.

            Jane strode into the room even though she shouldn’t have.

            Observe that if we thought the problem with “must” and other modals was the Vocabulary, we would expect “must” to be okay in an elided nonfinite constituent, but it isn’t – this is actually one of the topics in this paper from Mendes and Nevins on LingBuzz (https://ling.auf.net/lingbuzz/004843).

            *John must leave, but I don’t.
            John has to leave, but I don’t. (Mendes and Nevins ms., p. 13)

            M&N suggest that the restricted distribution comes from a “deeper property” which says that there’s no syntactic formative of a modal + [+fin] (or something like that). This is what I had in mind

          • Omer Preminger

            @Luke: I’m familiar with the M&N story, and I don’t think it works. As far as I can tell, for every case they show where the putative explanation is absence of a narrow-syntactic entry for non-finite _can_ or _must_, a parallel case involving _have_ or _be_ can be constructed which similarly fails. And since the claim that _have_ and _be_ lack non-finite entries is a nonstarter, I conclude that the form of the argument must be incorrect.

            Specifically, compare:

            (1)
            a. *John must leave, but I don’t.
            b. John has to leave, but I don’t. (M&N)

            (2) *John is working, but I don’t.

            (3) *John has been working, but I don’t.

            In this light, all that (1a) shows is that _must_ is higher than the constituent that VPE targets. And indeed, as you show based on subj-aux inversion, _must_ *is* higher than lexical verbs. And modal _have to_ does indeed behave like a main verb in that respect. My point from _for_-infinitives, earlier, is that this fact about the height of _must_ cannot provide a unified account for the impossibility of nonfinite _must_, whereas the PF gap account does. It’s true that if M&N had a real argument that _must_ behaved unlike other, “established” PF gaps like _stride_, that would be a problem for my story. But I don’t think they do.

          • Luke Adamson

            @Omer As I see it, the “no syntactic formative” account actually provides unification that the PF gap account doesn’t. We’ve been discussing “must” — but the same restrictions apply to the other high modals (that can raise to C, etc.); “can”, “would”, “may”, and the rest aren’t acceptable in nonfinite environments, either. A PF gap account treats this generality as accidental: all of the Vocabulary Items for modals would happen to lack elsewhere items. The “no syntactic formative” account can say instead that the combination of Mod + [+fin] is syntactically illicit, regardless of what other features Mod may have that distinguish e.g. “must” from “may”.

            The question (or one of the questions) is why “must” and other modals can’t appear in nonfinite environments, and the two main options we’re considering as I understand it are as follows. Option (i) says that there’s nothing wrong with inserting modals into low positions in the syntax, but they don’t have a PF realization if they don’t combine with T[+fin]. Option (ii) says these modals can’t be constructed in low positions in the syntax at all, because they have to combine with T[+fin] in the syntax (e.g. via adjunction from head movement to T). If (i), we’d expect it to behave like “have to” with respect to VPE but not other “low” diagnostics. VPE doesn’t care about whether there is a recoverable PF realization or not — hence the compatibility of VPE with genuine PF gaps. Non-elliptical low environments do care about PF realization (because the modals have to be realized). However, in addition to not working in low environments generally, “must” and other modals don’t work in VPE, supporting Option (ii) over Option (i).

            You’re saying that we know that “have” and “be” can’t always be elided, so maybe the VPE evidence is not so conclusive for “must”. It’s certainly the case that a more complex theory is required (and has been developed/is being developed) for capturing the elliptical behavior of “have” and “be”, which are indeed elidable in some but not all grammatical circumstances — sure. But nonfinite “must” is easier for the theory of ellipsis than “have” and “be”, because, as far as I can tell, “must” (and the other modals) can’t ever belong to a nonfinite ellipsis site (i.e. it’s not just bad in the M&N examples). The “no syntactic formative” account says that this is because it can only be constructed in the syntax as a high modal, above the possible target of VPE. If we were to assume that Mod can only merge in the syntax with T[+fin], then we can have a coherent explanation for why a modal can’t be in a VPE site, it can’t appear in nonfinite clauses, and it can’t be within a finite clause under other finite material, and this holds across modals that can go up to C.

            (With respect to the complementizer “for” point — I don’t think the syntactic formative account has to say anything about clause size. We can still recognize some clauses with complementizers as being nonfinite, so “must” isn’t syntactically licit with the nonfinite T heads of nonfinite CPs.)

          • Omer Preminger

            @Luke: Sorry, but you lost me here. We have a diagnostic (ability to be elided under VPE as an indication of having a syntactic [-fin] entry) that we know is flawed. This is not up for debate, as far as I can tell: the diagnostic delivers incorrect verdicts for elements we know to *have* a [-fin] entry. Yes, it’s possible that a more sophisticated story could be crafted to get out of the trouble with _have_ and _be_; but if such a story can be crafted, then, logically speaking, one could also be crafted for _must_ and its friends. In sum, the one piece of evidence that existed indicating that _must_ is any different from _stride_ has unraveled. (More on that below.)

            You’re right that on the PF gap account, it comes out as an accident that _can_, _must_, _may_, and _would_ all have this [-fin] PF gap. The way your story achieves generality here is by assuming that the elements in question are all instances of Mod, and it is Mod (rather than the individual instantiations thereof) that exhibits the relevant restriction (for you, the need to merge syntactically with T[+fin]). So perhaps one can decompose _can_, _must_, _may_, and _would_ into X+c, X+m1, X+m2, and X+w, such that X is the verbal piece, and it is X that has a [-fin] PF gap. I’d be happy with that.

            Let me put my cards on the table, though, and tell you where I think the story I’m telling really has the advantage over the M&N story: language acquisition. We know that this property of modals lacking a nonfinite form is not crosslinguistically stable. (It’s not even stable across all dialects of English.) And so there is language acquisition involved. Now if you have a story for how a learner, without negative evidence, can tell apart a _stride_-type gap from the kind of gap M&N hypothesize to exist for _must_, I’m all ears. This strikes me as challenging, to put it mildly. So any story that reduces the two kinds of gaps assumed by M&N to one (i.e., the only kind of gap is a PF gap) is on much, much better footing, as far as I can tell.

          • Luke Adamson

            @Omer Re: acquisition – I think a learner could get positive evidence for the grammatical statuses of “must” and “stride” that capture the distributional restrictions at different levels of representation. For “stride”, a learner could be picking up on an implicational relation of irregularity between participial and past tense forms — Charles’ book says that speakers who observe an irregular form of the past tense have to get the participial form in the input because there’s no productive generalization for deriving one irregular form from another. For “must” and other modals, a learner might pick up on how high they are from e.g. subj-aux inversion, the position of sentential negation, etc. Then if “Mod” comes to constitute a higher syntactic category that is distinct from that of lexical verbs, a learner might need evidence of lexical verb uses of these modals in the input in order to use them that way (=there’s no implicational relation between being usable as Mod and being able to merge lower as a lexical verb); ditto for finite uses of Mod. That allows for some variation – some learners might have evidence for these elements being more syntactically flexible — so learners of other varieties who hear double modals have evidence that these elements do different things.

            There’s a wonderful dissertation by Ava Irani from 2019, who shows how learners use positive evidence to identify the grammatical status of verbal stuff. She pursues a Sufficiency Principle account for restrictions in syntactic distributions and retreats from overgeneralization (e.g. wrt causativity alternations). She doesn’t discuss modals specifically, but I think her work offers a way of understanding what the scope/limitations of syntactic generalizations are for a learner who relies on positive evidence.

            Thanks for the stimulating discussion — happy to continue reading follow up, but I think I’m going to refrain from replying at this point.

          • Omer Preminger

            @Luke: thanks for the reference to Irani’s thesis. I hadn’t seen it. But I think it’s fair to say that at this point, you’ve sketched a much more complicated story, whose one virtue is that the gap that’s shared across _can_, _must_, _would_, etc. is no longer accidental, and whose veracity (in the domain of the acquisition of modals, that is) is promissory. So, speaking for myself, I remain wholly unconvinced that there’s anything different between the gap in participial _stride_ and the gap in nonfinite _must_.

            Thank you, as well, for the stimulating discussion. I’m not quite sure why you felt the need to preemptively declare you’re not going to reply, but I for one would be glad to keep reading what you have to say.

          • Luke Adamson

            @Omer Oh, that wasn’t intended as a slight or anything! I just need to attend to other things, and that was my way of offering some form of closure rather than ghosting.

          • Omer Preminger

            @Luke: Ah, that makes sense 👍

  3. Charles Yang

    The four generalizations from Trommer (2020) are all distributionally and numerically derivable from an appropriate vocabulary of young children, as I showed in Section 4.4 of the 2016 book: I explicitly targeted Generalizations II and III, as most of the acquisition work focuses on the selection of the suffix; the umlaut patterns are simple to handle.

    If that’s correct, then additional apparatus in the grammatical framework would be superfluous, as far as the generalizations are concerned. Ditto for the status of -s here, and the issue of paradigm gaps more generally.

    • Alec Marantz

      I was perhaps misleading with respect to the point of bringing up Trommer when I suggested that his phonological assumptions might be “funky.” In fact, Trommer is not suggesting that any additional apparatus is needed to account for the German plurals. However, by claiming that the plurals follow from a single plural affix and regular phonology, he is making claims about the representation of “irregularity” (variation in surface appearance of the plurals) that differ from what Charles or Maria Gouskova might assume.

      If Trommer is on the right track, then (with the possible exception of the borrowed stems that take the -s plural), there are no sub-lexicons or sub-phonologies relevant to the German plural, and no lists over which phonological generalizations might be made. Rather, the correct storage form of German noun stems paired with the single, correct storage form of the plural affix yield all the various plurals. A fully worked out story for Trommer would probably make different predictions about the course of acquisition of the plural by German children than Charles’s account, but we’d have to see.

      To evaluate Trommer’s claims about the structure of phonology, one would need to examine the parameter space of languages that are compatible with his assumptions. The German plurals are a fairly dramatic example of surface variation that follows, for Trommer, from regular phonology and unique underlying phonological forms for stems and suffixes. What other patterns would his theory predict we should see across languages? Maybe phonology really is funky, as Trommer would have it.

      • Charles Yang

        A couple of follow-up points.

        More narrowly, the phonological generalizations in Trommer 2020 are not strictly true. Regarding Generalization III, for instance: Bartke et al. (2005, J. Neurolinguistics) give CELEX data for nouns ending in a schwa. The feminine nouns among these do add -n overwhelmingly, but one exception (out of 903) takes the null suffix. Among the non-feminine nouns, a majority also take -n, but there are 11 (out of 97) that take the null suffix followed by umlaut. So the generalizations, as stated, are too strong. They are nevertheless productive, which is supported not only numerically (à la Tolerance) but also by an assortment of independent evidence from acquisition and processing, but they are not absolute.
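
        Yang's appeal to the Tolerance Principle here can be made concrete. On that principle, a rule over N items remains productive so long as its exceptions number at most N/ln N. Plugging in the CELEX counts just cited (a quick sketch of the arithmetic only, not Yang's full model):

```python
# Yang's (2016) Tolerance Principle: a rule over n items tolerates at most
# n / ln(n) exceptions while remaining productive.
import math

def tolerance_threshold(n):
    """Maximum number of exceptions a productive rule over n items tolerates."""
    return n / math.log(n)

# CELEX counts for schwa-final nouns cited above:
# feminine: 903 items, 1 exception; non-feminine: 97 items, 11 exceptions.
for n, exceptions in [(903, 1), (97, 11)]:
    theta = tolerance_threshold(n)
    print(f"N={n}: threshold {theta:.1f}, exceptions {exceptions}, "
          f"productive: {exceptions <= theta}")
```

        Both exception counts fall under the threshold (about 132.7 for N=903 and 21.2 for N=97), consistent with the claim that the generalizations are productive despite not being absolute.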

        The larger point is: how much help can phonology lend to morphology? Can we parcel out morphological idiosyncrasies to phonology? I doubt it. Phonology has exceptions too. They are often morphologically conditioned (e.g., trisyllabic shortening under -ity, as in serene-serenity, with obese-obesity an exception) but needn't be. Take the much-studied Philadelphia short-a system: /ae/ exceptionally tenses in "mad" and "bad," though /ae/ before voiced stops does not generally tense. Phonology is also funky, at least with respect to exceptions. (For the moment, let's rule out contortionist analyses: one can always postulate wilder underlying phonological and morphological forms for "mad" and "bad" to restore regularity.)

        I look forward to clear evidence of phonological generalizations among English irregular verbs. Much of the past work relies on rating, a gradient task that can disguise underlying categoricity, as Armstrong, Gleitman, and Gleitman (1983) showed with even and odd numbers. More empirically, while adults often rate pseudo-irregulars such as gling-glang/glung quite high in experimental tasks (though on average not as high as the -ed form), they appear not to do so in real language use. Children almost never extend irregular forms, and I doubt adults do either. The past 20 or so years have given us two most favorable opportunities (i.e., new verbs that end in -ing), but both "bing" and "bling" are regular. I blang/blung out for the Oscar party?

  4. Maria Gouskova

    My thinking on at least certain kinds of defaults is that they appear when no phonotactic generalizations can be made over the sublexicon that are distinct from the phonotactic generalizations over the language as a whole. This is not the most satisfying characterization of a default compared to a context-free morpheme realization rule, but it is one suggested by the theory. I think some of the Australian syllable-counting examples have that character: there will be an allomorph A that selects for a closed list of disyllabic stems (ss1, ss2, ss3, ss5, …), but then there is a second allomorph B that attaches to everything else, including some other disyllabic stems:

    i. X → A / {ss1, ss2, ss3, ss5, …} ___
    ii. X → B / {ss4, sss1, ssss32, plus the rest of the language} ___

    So the question is whether (ii) is equivalent to something like this:

    iii. X → B

    The setup I laid out in the 2015 paper does not automatically give us (iii), but there is a way to get to it. It would hinge on whether the learner discards sublexicons that can be described without a new grammar. Once such a sublexical grammar has been evaluated and discarded as uninformative, the rule can be reframed as context-free. (There is a mechanism for discarding contexts in Albright and Hayes’s model, but it is implemented very differently from what I just described).
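    The difference between (ii) and (iii) can be pictured with a toy example. The stems and allomorph labels below are hypothetical, and this is only a sketch of the extensional question, not an implementation of the sublexicon model: when both contexts are closed lists, the rules say nothing about novel stems, whereas a context-free (iii) generalizes automatically.

```python
# Hypothetical lexicon and sublexicons for rules (i)-(iii) above.
LEXICON = {"ss1", "ss2", "ss3", "ss4", "ss5", "sss1"}
SUBLEXICON_A = {"ss1", "ss2", "ss3", "ss5"}      # closed list for allomorph A
SUBLEXICON_B = LEXICON - SUBLEXICON_A            # rule (ii): a listed complement

def realize_listed(stem):
    """Rules (i) + (ii): both contexts are lists, so a novel stem
    matches neither rule and gets no allomorph at all."""
    if stem in SUBLEXICON_A:
        return "A"
    if stem in SUBLEXICON_B:
        return "B"
    return None  # novel stem: the grammar is silent

def realize_default(stem):
    """Rules (i) + (iii): B is context-free, so novel stems
    automatically fall through to the default."""
    return "A" if stem in SUBLEXICON_A else "B"
```

    On the attested lexicon the two grammars agree; they come apart only on novel stems, which is exactly where the question of discarding an uninformative sublexical grammar has empirical bite.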

    There may be some valid reasons for having some separation between the learning procedure (along with the knowledge of the lexicon) on the one hand and the formal statements of rules on the other. Defaults can be very useful, after all. There are some cases where the sublexicon model cannot replicate an analysis with defaults, though. For example, you sometimes see the English indefinite article characterized like this:

    Indef → ən / ___ V
    Indef → ə / elsewhere

    For speakers whose “elsewhere” in this example is invariably consonant-final words, the phonotactic generalizations over “elsewhere” are definitely distinct from those over English as a whole. So cases of this type cannot be characterized as defaults.
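    The two-line rule above is easy to state procedurally. A minimal sketch, using word-initial vowel letters as a crude stand-in for vowel-initial phonological forms (so orthographic wrinkles like "hour" or "university" are deliberately ignored):

```python
VOWELS = set("aeiou")  # crude proxy for a vowel-initial phonological form

def indefinite(next_word):
    """Indef -> ən / ___ V ; Indef -> ə / elsewhere."""
    return "ən" if next_word[0].lower() in VOWELS else "ə"
```

    The point in the paragraph above is about the "elsewhere" branch: for the relevant speakers, every word that reaches it begins with a consonant, which is a phonotactic generalization that does not hold of English as a whole.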

    I have to think some more about what all of this means for defaults in other, non-phonological domains. Zooming out just slightly from phonological selection, the idea of “default” has been implicated in many discussions of the relationship between gender and declension class, which sometimes has a bidirectional character that I find it hard to wrap my head around. I think adding a mechanism for rule abstraction to the idea of sublexicons could help with getting a handle on these things.

    • Omer Preminger

      Clarification question, Maria: can you help me understand what you mean by “For speakers whose ‘elsewhere’ in this example is invariably consonant-[initial] words, the phonotactic generalizations over ‘elsewhere’ are definitely distinct from those over English as a whole”? Do you mean simply that the ‘elsewhere’ class implicated in this formulation obeys a generalization (“words start with a consonant”) that is not true of English as a whole? If so, then I don’t quite understand how this is an argument against an account with defaults. Whenever the environment for the non-‘elsewhere’ rule is characterized by a generalization G (rather than a list of items), the elsewhere will consist of the entire language minus G-obeyers. (For simplicity let’s assume there’s only one non-‘elsewhere’ rule.) Now, if the language as a whole had no G-obeyers, then the non-‘elsewhere’ rule would be vacuous, so there must be some. Therefore the statement “there are no G-obeyers” will be true for the ‘elsewhere’ class and false for the language as a whole. This seems, then, to be a property of any defaults-based setup where the set of undergoers of the non-default case is statable as something other than a list. And so I don’t quite see how this could be an argument against the defaults-based treatment. Which leads me to suspect I’ve misunderstood something… Thanks in advance.
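      The set-theoretic point can be made concrete with a toy example (hypothetical words and a hypothetical generalization G, standing in for "vowel-initial"):

```python
def obeys_G(word):
    """Hypothetical generalization G characterizing the non-elsewhere rule."""
    return word.startswith(("a", "e", "i", "o", "u"))

language = {"apple", "echo", "pear", "dog", "tree"}
elsewhere = {w for w in language if not obeys_G(w)}

# G must be non-vacuous over the language as a whole...
assert any(obeys_G(w) for w in language)
# ...so "there are no G-obeyers" is true of the elsewhere class
# but false of the language, by construction.
assert not any(obeys_G(w) for w in elsewhere)
```

      As the construction shows, the asymmetry holds for any non-vacuous G, which is the sense in which it is a property of the setup rather than an argument against defaults.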

      • Maria Gouskova

        Ah, I wasn’t arguing against defaults; I was just saying that it is hard to characterize the indefinite allomorph “a” as a default in a framework where there is a really clear context for the rule that the learner would notice. I think we might actually be talking about different things. From the point of view of pure theory, rules that have clean default contexts are preferred, and easy to write. But in a theory that is concerned with learning and generalization to novel items, it is really easy to write rules with very specific contexts, and making them clean takes an extra step that is often non-trivial. Is that clearer?


© 2024 NYU MorphLab

Theme by Anders NorenUp ↑