Author: Alec Marantz

Phonology, phonology, phonology

The last post re-examining Pinker’s Words and Rules framework left open the question of the linguistic representation of “irregularity.” What in fact do speakers know about, e.g., the distribution of suffixes like the phonologically zero past tense ending (on, say, the past tense of fit) or the restricted nominalizer –ity? Tied to this question is whether “regular” affixes (like the English /d/ past tense or /z/ plural) enjoy any special representational status. Within Distributed Morphology (DM), this last question involves the potential existence and status of “default” phonological forms (Vocabulary Items) for morphemes, forms that are inserted to realize feature bundles without contextual restrictions.

At their core, irregular affixes involve lists – in particular, lists of environments in which they occur. For example, we have suggested that –ity attaches to adjective suffixes –al and –able, as well as a list of stems that includes sane and vain. Similarly, the irregular past tense forms of English implicate lists of verbs that exhibit one or another of several irregular patterns.

A crucial insight about lists, made explicit in work by Maria Gouskova (e.g. Gouskova et al. 2015), is that lists of forms are sublexicons (subsets of the lexicon) and as such are amenable to phonological analysis. That is, to the extent that the phonotactics (generalizations about possible sounds and sequences of sounds) of a language is computed over the lexicon, each list involved in “irregular” morphology should generate a phonotactic grammar summarizing phonological generalizations over members of the list. When new forms are presented to the learner, they can be evaluated with respect to the phonology of the “irregular” lists to determine the likelihood that they will also be subject to the relevant irregularities (should the nominalized form of a new adjective prane be pranity?).
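
To make the idea concrete, here is a toy sketch of evaluating a nonce stem against the phonotactics of a sublexicon. The bigram model, the smoothing, and the miniature orthographic stem list are all my own illustrative assumptions, not Gouskova et al.'s actual model (which uses weighted constraints learned over phonological representations):

```python
import math
from collections import Counter

def train_bigram_phonotactics(sublexicon):
    """Fit a toy bigram model over the stems on a list (a 'sublexicon')."""
    bigrams = Counter()
    unigrams = Counter()
    for stem in sublexicon:
        padded = "#" + stem + "#"  # mark word boundaries
        for a, b in zip(padded, padded[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    return bigrams, unigrams

def score(stem, bigrams, unigrams, alpha=0.1):
    """Smoothed log-probability of a stem under the sublexicon's phonotactics."""
    padded = "#" + stem + "#"
    vocab = len(unigrams) + 1
    logp = 0.0
    for a, b in zip(padded, padded[1:]):
        # add-alpha smoothing so unseen bigrams get small, nonzero probability
        p = (bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * vocab)
        logp += math.log(p)
    return logp

# Hypothetical miniature -ity sublexicon (orthographic stand-ins for
# phonological forms; the real list would be phonologically transcribed)
ity_stems = ["sane", "vain", "banal", "real", "formal", "able", "stable"]
bi, uni = train_bigram_phonotactics(ity_stems)
# A nonce stem like 'prane' can now be scored against the list's phonotactics:
score("prane", bi, uni)
```

The better a new stem's score under the sublexicon grammar, the more plausible it is that the stem joins the list (should *prane* yield *pranity*?).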

Although not explained or implemented in precisely these terms, the work of Albright and Hayes (2003) on the acquisition of the English past tense also involves the learner considering the phonological form of stems in creating possible (irregular and regular) rules of past tense formation. Their research argues that any learning mechanism that is searching for phonological grounding of morphological rules (for them, the phonology of the input to the rule) will learn generalizations about the distribution of apparent regular (or default) morphology alongside that of the irregulars. For the English past tense, this means that speakers will learn that certain stem endings will reliably take the regular /d/ past tense, while for other phonological forms of stems, the use of the regular is less certain. They present behavioral evidence in favor of speakers’ representation of phonological “islands of reliability” for both regular and irregular past tense endings. This brings into question the relevance of the notion of a “default” ending, since regulars and irregulars are learned and implemented by the same phonologically sensitive mechanisms. In other words, regulars may be no more default than irregulars. A nice aphasia study supporting this conclusion about regulars can be found in Rimikis and Buchwald (2019).
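
The core quantity in Albright and Hayes's model can be sketched in a few lines. Raw reliability is hits over scope: of the stems a candidate rule could apply to, how many does it get right? (Their model further adjusts this with confidence statistics and learns the rules themselves by minimal generalization; both refinements are omitted here, and the mini-lexicon is hypothetical.)

```python
def reliability(rule_applies, rule_output, lexicon):
    """Raw reliability of a candidate rule: hits / scope.
    scope  = stems meeting the rule's structural description
    hits   = stems in scope whose attested past tense matches the rule."""
    scope = [stem for stem in lexicon if rule_applies(stem)]
    hits = [stem for stem in scope if rule_output(stem) == lexicon[stem]]
    return len(hits) / len(scope) if scope else 0.0

# Hypothetical mini-lexicon of stem -> past tense pairs
lexicon = {"sing": "sang", "ring": "rang", "spring": "sprang", "bring": "brought"}
# Candidate irregular rule: final 'ing' -> 'ang'
r = reliability(lambda s: s.endswith("ing"),
                lambda s: s[:-3] + "ang",
                lexicon)
# r == 0.75: the rule covers 4 stems and is right about 3 (bring blocks it)
```

Rules with high reliability over a phonologically defined scope are the "islands of reliability," and nothing in the computation privileges the regular /d/ rule over the irregular ones.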

I will return in a later post to some of the many important implications of Gouskova’s work for morphological processing. Here I would just like to emphasize the importance of the phonology of lists for generating predictions about the processing of morphologically complex words that go beyond the frequency of stems and of transition probabilities. In recognizing the word sanity, for example, it might matter how well sane obeys the phonotactics of the –ity list. In an information theoretic sense, the worse the fit of the stem to the list, the more information is being conveyed by the word. In addition, Gouskova proposes that the outputs of affixation are also evaluated by the phonology, meaning all the –ity nouns are also on a list governed by its own phonological grammar. So, in recognizing sanity, the well-formedness of sanity with respect to the list of –ity nouns might also matter.
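
The information-theoretic point is just surprisal: an event of probability p conveys -log2(p) bits, so a stem that fits its list's phonotactics poorly (low probability under the sublexicon grammar) carries more information. A minimal illustration:

```python
import math

def surprisal(p):
    """Information (in bits) conveyed by an event of probability p."""
    return -math.log2(p)

# Lower probability under the sublexicon grammar -> higher surprisal:
surprisal(0.5)   # 1.0 bit
surprisal(0.05)  # ~4.32 bits
```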

As a last plea for researchers interested in morphological processing to pay crucial attention to what the phonologists are saying, I would like to point you to a recent paper on German plurals by Jochen Trommer (2020). Trommer presents a phonological analysis of the various German plural formations. In the Words and Rules approach, the –s plural is considered “regular” (in the sense of default), while the various other plural endings with or without stem umlaut are considered learned irregulars. Although his phonological assumptions are a bit funky, Trommer shows how to analyze all but the –s plural as regular, based on the phonology (and gender) of the stem. Here are the main generalizations captured by his analysis, taken from his example (20):

Generalization I: Umlaut and plural -n are in complementary distribution

Generalization II: Feminine nouns strongly prefer plural -n in contexts where non-feminine nouns do not

Generalization III: Noun roots ending in …ə always take plural -n (and consequently never show umlaut)

Generalization IV: Nouns with plural -er always umlaut
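
Read as a decision procedure, the four generalizations interlock: III and II feed -n, IV forces umlaut, and I then holds by construction. The following toy function is my own schematic rendering of that logic, not Trommer's subsegmental analysis, and the "elsewhere" case is a deliberately crude stand-in for his fuller system:

```python
def predict_plural(root, gender, takes_er=False):
    """Toy decision procedure over Trommer's Generalizations I-IV.
    Returns (plural suffix, whether the stem umlauts)."""
    if root.endswith("ə"):        # Gen. III: schwa-final roots always take -n,
        return ("n", False)       # and consequently never umlaut (Gen. I)
    if takes_er:                  # Gen. IV: -er plurals always umlaut
        return ("er", True)
    if gender == "fem":           # Gen. II: feminines strongly prefer -n,
        return ("n", False)       # which excludes umlaut (Gen. I)
    return ("e", True)            # crude elsewhere case: umlauting -e plural

# Gen. I (umlaut and -n in complementary distribution) holds by construction:
# a form umlauts only when its suffix is not -n.
predict_plural("blumə", "fem")                  # ('n', False), cf. Blume(n)
predict_plural("kind", "neut", takes_er=True)   # ('er', True), cf. Kind(er)
```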

The plural –s, by contrast, goes on a list of borrowed stems. In a sense, then, the plural –s is an irregular plural. One can imagine a phonological grammar for these stems in Gouskova’s terms, so that there is no real “default” –s. Or, perhaps there is room in linguistic theory for defaults, properly described. But at the moment, morpho-phonology is better equipped to handle morphology without defaults than a theory with them.

In sum, the representation of so-called “irregularity” thoroughly implicates the phonology, and phonology may provide the variables that explain modulation in brain responses during the processing of morphologically complex words that go beyond the stem and transition probability variables we’ve been considering.

 

References

Albright, A., & Hayes, B. (2003). Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2), 119-161.

Gouskova, M., Newlin-Łukowicz, L., & Kasyanenko, S. (2015). Selectional restrictions as phonotactics over sublexicons. Lingua, 167, 41-81.

Rimikis, S., & Buchwald, A. (2019). The impact of morphophonological patterns on verb production: Evidence from acquired morphological impairment. Clinical Linguistics & Phonetics, 33(1-2), 68-94.

Trommer, J. (2020). The subsegmental structure of German plural allomorphy. Natural Language & Linguistic Theory, 1-56.

Words and Rules redux

On the general line of thinking in these blog posts, a word like walked is morphologically complex because it consists of at least some representation of a stem plus a past tense feature (more specifically, a head in the extended projection of a verb). This is also true of the “irregular” word taught. Thus there is an important linguistic angle from which walked and taught are equally morphologically complex, whatever one thinks about how many phonological or syntactic pieces there are in either form.

Steven Pinker, in his (1999/2015) Words and Rules work, proposes a sharp dichotomy between morphologically complex words that are constructed by a rule of grammar and thus are not stored as wholes vs. complex words that are not constructed by a rule and thus are stored as wholes. For Pinker, the E. coli of psycho-morphology is the English past tense, which he began to study (in 1988, with Alan Prince) when preparing a response to Rumelhart and McClelland’s (1987) connectionist model. The idea was that the relationship between teach and taught is a relationship between whole words (like that between cat and dog), while the relationship between walk and walked is rule-governed, such that walked is not stored as a word and must be generated by the grammar when the word is used.

In point of fact, the English past tense is not a particularly good test animal for theories of morphological processing. The type of allomorphy illustrated by English inflection is limited and it confuses two potentially separable issues: stem allomorphy and affix allomorphy. For example, in the “irregular” past tense felt, we see the special fel- stem, where feel- would be expected, and the irregular –t affix, where –d would be expected (compare peeled). In canonical Indo-European languages with rich inflectional morphology, “irregular” (not completely predictable) forms of a stem can combine with regular suffixes, and unpredictable forms of suffixes can combine with regular stems. From a linguistic point of view, taught could be either a special stem form taught- with a phonologically zero past tense ending, a special stem form taugh- with a special ending –t (a widespread allomorph of the English past tense, but not generally predicted after stems ending in a vowel, where –d is the default), or a “portmanteau” form covering both the stem and the past tense – this last option seems to be what Pinker had in mind. Even mildly complex inflectional systems, then, exhibit a variety of types of “irregularity.” The very notion of “irregularity,” that a pattern is not predictable from the general facts of a language, implies that something needs to be learned and memorized about irregular forms. But the conflation of irregularity with stored whole forms, as in Pinker’s analysis of the English irregular past tense, obscures important issues and questions for morphology and morphological processing.

A textbook case of irregular stems with regular endings occurs in the Latin verb ‘to carry.’ As canonically presented in Latin dictionaries, the three “principal parts” of ‘to carry’ are ferō ‘I carry,’ tulī ‘I carried’ and the participle lātum ‘carried’, with three “suppletive” stems. Crucially, each of these stems occurs with endings appropriate for the inflectional class of the stem (for contrasts like indicative vs. subjunctive and for person and number of the subject). It’s not at all obvious what the general Words and Rules approach would say about such cases, but memorization of whole words here doesn’t seem to be a plausible option. Once we sketch out a general theory of “irregularity,” the proper analysis of the English past tense should fall into line with what’s demanded by the general theory.

Pinker invokes an “add –ed” past tense rule when explaining his approach in general terms, but in his work he sometimes presents a more explicit account of how a grammar might generate the past tense forms in a Words and Rules universe. Here, the important concept is that a stored, special form blocks the application of a general rule.

The implementation follows the linguistic lead of Paul Kiparsky’s version of Lexical Phonology and Morphology from around 1982. At this point in time, Kiparsky’s notion was that a verb would enter into the (lexical) morphology with a tense feature. At “Level 1” in a multi-level morpho-phonology, an irregular form would spell out the verb plus the past tense feature, blocking the affixation of the regular /d/ suffix at a later level. The pièce de résistance of this theory was an account of an interesting contrast between regular and irregular plurals in compound formation. Famously, children and adults find mice-eater to be more well-formed than rats-eater. Kiparsky’s account put compound formation at a morpho-phonological “level” between irregular morphology and regular inflection, allowing irregular inflected forms to feed compound formation, but having compound formation bleed regular inflection (on an internal constituent). The phenomenon here is fascinating and worthy of the enormous literature devoted to it. However, Kiparsky’s analysis was a kind of non-starter from the beginning. The problem was that if irregular inflection occurs at Level 1 in the morpho-phonology, it should not only feed compound formation but also derivational morphology. So, someone who used to teach could be a taughter on this analysis.
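
The architectural problem can be made vivid with a toy pipeline. The three “levels” below are my own schematic rendering of the level-ordering idea (not Kiparsky’s actual system, and the feature bookkeeping is invented for illustration); the point is simply that whatever Level 1 outputs is available as input to later derivation:

```python
# Toy level-ordered morphology in the spirit of Kiparsky (1982).
IRREGULAR_PAST = {"teach": "taught", "feel": "felt"}

def level1_irregular(form, features):
    # Level 1: an irregular form spells out stem + [past] as a stored whole
    if "past" in features and form in IRREGULAR_PAST:
        return IRREGULAR_PAST[form], features - {"past"}
    return form, features

def level2_derivation(form, features):
    # Level 2: derivation (and compounding) applies to Level-1 outputs
    if "agent" in features:
        return form + "er", features - {"agent"}
    return form, features

def level3_regular(form, features):
    # Level 3: regular inflection applies last
    if "past" in features:
        return form + "ed", features - {"past"}
    return form, features

def derive(form, features):
    for level in (level1_irregular, level2_derivation, level3_regular):
        form, features = level(form, set(features))
    return form

derive("walk", {"past"})            # 'walked' (regular, via Level 3)
derive("teach", {"past"})           # 'taught' (irregular, via Level 1)
# The problem: Level-1 output feeds Level-2 derivation, so the architecture
# wrongly generates 'taughter' for the agentive of an irregular past:
derive("teach", {"past", "agent"})  # 'taughter'
```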

To cut to the chase, the Words and Rules approach to morphology isn’t compatible with any (even mildly current) linguistic theory, and as noted above, it’s difficult to apply beyond specific examples that closely resemble the English past tense, where the irregular form may appear to be a portmanteau covering a stem and affixal features. However, Pinker has always claimed that experimental data support his approach, so it’s important to investigate whether his particular proposal about stored vs. computed words makes interesting and correct predictions about experimental outcomes. Here it’s important to distinguish data from production studies and data from word recognition.

For production, the Words and Rules framework was supposed to make two types of prediction, one for non-impaired populations and another for impaired populations. In terms of reaction time, non-impaired speakers were supposed to produce the past tense of a presented verb stem in a time that, for irregulars, correlated with the surface frequency of the past tense verb and, for regulars, correlated with the frequency of the stem. For impaired speakers, the prediction was a double dissociation: impairment to the memory systems would differentially impair irregulars over regulars, while impairment to the motor sequencing system would differentially impair regulars over irregulars. Michael Ullman took over this research project, using the English past tense as an assay for the type of impairment a particular population might be suffering (see, e.g., Ullman et al. 2005). In his declarative/procedural model, irregulars are produced as independent words, while regulars are produced by the procedural system, which is involved in motor planning and execution. However, for Ullman the story is clearly one about the specifics of production, and not about the grammatical system, as it is for Pinker. For example, his studies find that women are more likely than men to produce regular past tense forms at a speed correlated with surface frequency, which suggests that women memorize these forms, while (most/many) men do not. If Ullman were connecting his studies to the grammatical system, he would predict that women more than men would like rats-eater, for example. But his theory is about online performance rather than grammatical knowledge or use. By sticking with systems like the English past tense, which confounds morphological affixation with phonological concatenation, Ullman can’t distinguish whether the declarative/procedural divide is about the phonological sequencing of the phonological forms of morphemes or about the concatenation of morphemes themselves.

A nice study by Sahin et al. (2009, which includes Pinker as co-author) does explore the neural mechanisms of the production of inflected forms with an eye to distinguishing phonological and morphological processing. Sahin et al. find stages in processing in the frontal lobe that are differentially sensitive to morphological structure (reflecting, say, the process “add inflection”) and phonological structure (reflecting, say, the process “add an overt suffix”), with the former preceding the latter. Interestingly, Sahin et al. found no difference between regular and irregular inflection.

In short, the conclusion from the production studies, no matter how charitable one is to Ullman’s experiments (see Embick and Marantz 2005 for a less charitable view), is that although phonological concatenation in production may distinguish between forms with overt suffixes and forms with phonologically zero affixes, no data from these studies support the Words and Rules theory when interpreted to be about morphological processing.

But what about processing in word recognition or perception? Here, it’s unclear whether there was ever any convincing support for the Words and Rules approach. Pinker and others cite a paper by Alegre and Gordon (1999) as providing evidence for the memorization vs. rule application distinction in lexical decision paradigms. However, Alegre and Gordon’s experiments and their interpretation, even if taken at face value, would hardly be the type of evidence one would want for Words and Rules. Their initial experiment finds no frequency effects for reaction time in lexical decision for regular verbs and nouns (expanding well beyond the past tense to other verb forms and to noun plurals) – neither “surface” frequency of the inflected form nor a type of base frequency (frequency of the stem across all inflections, which they call “cluster frequency”). In subsequent experiments reported in the paper, and in a reanalysis of their data from the first experiment, Alegre and Gordon claim that regularly inflected forms show surface frequency effects in lexical decision if they occur above a certain threshold frequency. If that were true (and subsequent work has shown that the generalization is incorrect), it would severely undermine Pinker’s theory. We’re not just talking about peas and toes here; “high frequency” and putatively memorized inflected forms include deputies, assessing, pretending and monuments. If the Words and Rules approach were the correct explanation of the data, we’d expect monuments-destroying to be as well-formed as monument-destroying. If we are indeed memorizing these not really so frequent inflected forms as wholes, the notion of “memorization” here must be divorced from any connection to grammatical knowledge.

However, Lignos and Gorman (2012) show that Alegre and Gordon’s results and interpretation can’t be taken at face value, pointing out a number of problems in the paper, including the reliance on frequency counts inappropriate for their study. The more robust finding is that the surface frequency effect is stronger, not weaker, in the low surface frequency range for morphologically complex words. Recent work in this area paints a complex picture of the variables modulating reaction time in lexical decision, which include both some measure related to base frequency and some measure related to surface frequency, but no current research in morphologically complex word recognition supports the key predictions of the Words and Rules framework, at least as laid out by Pinker and colleagues.

Recall that if you know the grammar of a language – if you’ve “memorized” or “learned” the rules – you have, in an important sense, memorized all the words (and all the sentences) that are analyzable or generatable by the grammar, even the ones you haven’t heard or spoken yet. That is, the “memorized” grammar generates words that you have already encountered or used in the same way it generates words that you haven’t (yet) encountered or used. In other words, when you’ve “memorized” the grammar, you’ve “memorized” both sets of words. From the standpoint of contemporary research in morphological processing, this understanding of “memorization” should replace the thinking of the Words and Rules framework, which makes speakers’ prior experience with words a crucial component of their internal representation.

However, it should be noted that Pinker’s main concern in Words and Rules was with language acquisition and the generalization of “rules” to novel forms. Recent work by Charles Yang (2016), Tim O’Donnell (2015) and others recasts the Words and Rules dichotomy between memorized and constructed forms as an issue of words following unproductive rules or generalizations, for which you have to memorize for each word that the rule or generalization applies (or memorize the output without reference to the generalization), vs. words following productive rules or generalizations, for which the output is predicted. Key data for these investigations come from wug tests of rule application to novel forms. An issue to which we will return soon is how these theories of the productivity of morphological rules tie into models of morphological processing in word recognition.

 

References

Alegre, M., & Gordon, P. (1999). Frequency effects and the representational status of regular inflections. Journal of Memory and Language, 40(1), 41-61.

Embick, D., & Marantz, A. (2005). Cognitive neuroscience and the English past tense: Comments on the paper by Ullman et al. Brain and Language, 93(2), 243-247.

Kiparsky, P. (1982). From cyclic phonology to lexical phonology. The Structure of Phonological Representations, 1, 131-175.

Lignos, C., & Gorman, K. (2012). Revisiting frequency and storage in morphological processing. In Proceedings from the Annual Meeting of the Chicago Linguistic Society, 48(1), 447-461. Chicago Linguistic Society.

O’Donnell, T. J. (2015). Productivity and reuse in language: A theory of linguistic computation and storage. MIT Press.

Pinker, S. (1999/2015). Words and rules: The ingredients of language. Basic Books.

Pinker, S., & Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28(1-2), 73-193.

Rumelhart, D. E., & McClelland, J. L. (1987). Learning the past tenses of English verbs: Implicit rules or parallel distributed processing?. In B. MacWhinney (ed.), Mechanisms of language acquisition, 195-248. Lawrence Erlbaum Associates, Inc.

Sahin, N. T., Pinker, S., Cash, S. S., Schomer, D., & Halgren, E. (2009). Sequential processing of lexical, grammatical, and phonological information within Broca’s area. Science, 326(5951), 445-449.

Ullman, M. T., Pancheva, R., Love, T., Yee, E., Swinney, D., & Hickok, G. (2005). Neural correlates of lexicon and grammar: Evidence from the production, reading, and judgment of inflection in aphasia. Brain and Language, 93(2), 185-238.

Yang, C. (2016). The price of linguistic productivity: How children learn to break the rules of language. MIT Press.

What Marantz 1981 (probably) got wrong

A popular account of what Grimshaw called “complex event nominalizations” (cf. John’s frequent destruction of my ego) involves postulating that these nominalizations involve a nominalizing head taking a VP complement. When the head V of the VP moves to merge with the nominalizing head, the resulting structure has the internal syntactic structure of an NP, not a VP. For example, there’s no accusative case assignment to a direct object, and certain VP-only complements like double object constructions (give John a book) and small clauses (consider John intelligent) are prohibited (*the gift of John of a book, *the consideration of John intelligent).

Note that this analysis relies on the assumption that head movement (of V to N) has an impact on syntax. Before head movement applies, the verb phrase has verb phrase syntax, with the possibility of accusative case, small clauses and double object complements. After head movement applies, there is no VP syntax and the internal structure of the NP is that of any NP.

Within the development of Distributed Morphology, these consequences of head movement fit within the general approach of Marantz (1981, 1984) in which the operation of “morphological merger” (later equated with head movement and adjunction) causes structure collapsing. That is, when the verb merges with the nominal head, the VP effectively disappears (in Baker’s 1985 version, the structure doesn’t disappear but rather becomes “transparent”).

In a recent book manuscript (read this book!), Jim Wood (2020) argues that the VP account is not appropriate for Icelandic complex event nominalizations, and probably not right for English either. Among the pieces of evidence that Wood brings to the argument, perhaps the most striking is the observation that verbs in these nominalizations do not assign the idiosyncratic “quirky” cases to their objects that they do in VPs. If the VP analysis of complex event nominalizations is indeed wrong, then one might conclude that morphological merger-driven clause collapsing is simply not part of syntactic theory. It’s worth asking, however, what the motivation was for these consequences of morphological merger (or, head movement and adjunction) in the first place, and where we stand today with respect to the initial motivation for these mechanisms.

Allow me a bit of autobiography and let’s take a trip down memory lane to the fall of 1976. That fall I’m a visiting student at MIT, and I sit in on David Perlmutter’s seminar on Relational Grammar (RG). I meet Alice Harris and Georgian, my notebook fills with stratal diagrams, and I’m introduced to the even then rather vast set of RG analyses of causative constructions and of advancements to 2 (which include the “applicative” constructions of Bantu). Mind-blowing stuff. One aspect of RG that particularly stuck in my mind and that I would return to later was the role of morphology like causative and applicative affixes in the grammar. In RG, morphemes were reflective rather than causal; they “flagged” structures. So an affix on a verb was a signal of a certain syntactic structure rather than a morpheme that created or forced the structure.

In an important sense my dissertation involved the importing of major insights of RG into a more mainstream grammatical theory. (In linguist years, the fall of 1976 and the summer of 1981, when I filed my dissertation, are not that far apart.) Consider the RG analysis of causative constructions involving “causative clause union.” In this analysis, a bi-clausal structure, with “cause” as the head (the predicate, P) of the upper clause, becomes mono-clausal. Since the upper clause has a subject (a 1) and the lower clause has a subject (another 1), and there can be only one 1 per clause (the Stratal Uniqueness Law), something has to give when the clauses collapse. In the very general case, if the lower clause is intransitive, the lower subject becomes an object (a 2), now the highest available relation in the collapsed clause.


Stratal diagram for Korean causative of intransitive ‘Teacher made me return’ (Gerdts 1990: 206)

If the lower clause is transitive, its object (a 2) becomes the object of the collapsed clause, and the lower subject becomes an indirect object (a 3), the highest relation available.


Stratal diagram for Korean causative of transitive ‘John made me eat the rice cake’ (Gerdts 1990: 206)

In languages with double object constructions like those of the Bantu family, after clause union with a lower transitive clause, the lower subject, now a 3, “advances” to 2, putting the lower object “en chômage” and creating a syntax that looks grammatically like that of John gave Mary a book in English, which also involves 3 to 2 advancement.


Stratal diagram for Korean ditransitive ‘I taught the students English’ (Gerdts 1990: 210)
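
The revaluation calculus described above can be sketched as a small function. This is an illustration only (relation labels as strings, dictionaries for strata), not an implementation of RG; the Stratal Uniqueness Law is respected by construction, since each output relation is assigned once:

```python
def clause_union(lower_args, advance_3_to_2=False):
    """Toy RG-style causative clause union. lower_args maps the lower
    clause's relations ('1' = subject, '2' = object) to NPs; returns the
    relations those NPs bear in the collapsed clause (the causer stays 1)."""
    out = {}
    if "2" in lower_args:              # transitive lower clause:
        out["2"] = lower_args["2"]     # lower 2 stays 2
        out["3"] = lower_args["1"]     # lower 1 becomes 3, the highest
    else:                              #   relation still available
        out["2"] = lower_args["1"]     # intransitive: lower 1 becomes 2
    if advance_3_to_2 and "3" in out:  # Bantu-type 3-to-2 advancement:
        out["chômeur"] = out.pop("2")  # old 2 goes en chômage
        out["2"] = out.pop("3")        # the 3 (causee) advances to 2
    return out

clause_union({"1": "me"})                    # {'2': 'me'}
clause_union({"1": "me", "2": "rice cake"})  # {'2': 'rice cake', '3': 'me'}
clause_union({"1": "me", "2": "rice cake"},
             advance_3_to_2=True)            # {'2': 'me', 'chômeur': 'rice cake'}
```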

Within the Marantz (1981, 1984) framework, applicative constructions involve a PP complement to a verb, with the applicative morpheme as the head P of the PP. Morphological merger of the P head with the verb collapses the VP and PP together and puts the object of the P, the “applied object,” in a position to be the direct object of the derived applicative verb.

My general take in 1981 was that affixation (e.g., of a causative suffix to a verb) was itself responsible for the type of clause collapsing one sees in causative clause union. The lower verb, in a sentence that is the complement to the higher causative verb, would “merge” with the causative verb, with the automatic consequence of clause collapsing. I argued that a general calculus determined what grammatical roles the constituents of the lower clause would bear after collapsing, as in RG. There are many interesting details swirling around this analysis, and I proposed a particular account of the distinction between Turkish-type languages, in which the “causee” in a causative construction built on a transitive verb is oblique (dative, usually), and Bantu-type languages, in which this causee is a direct object in a double object construction.  Read the book (particularly those of you habituated to citing it without reading – you know who you are).

At this point in time, nearly 40 years later, the analysis seems to have been wrong-headed. Already by the late 1980s, inspired by a deep dive into Alice Harris’s dissertation-turned-book on Georgian (1976, 1981), I had concluded that my 1981 analysis of causatives and applicatives was on the wrong track. Instead of bi-clausal (or bi-domain in the case of applicatives) structures collapsing as the result of morphological merger, a more explanatory account could be formulated if the causative and applicative heads were, in effect, heads on the extended projection of the lower verb. Affixation/merger of the verb with these causative and applicative heads would have no effect on the grammatical relations held by the different nominal arguments in these constructions. This general approach was developed by a number of linguists in subsequent decades, notably Pylkkänen (2002, 2008), Wood & Marantz (2017) and, for the latest and bestest, Nie (2020), which I’ll discuss in a later post.

The crucial point here is that the type of theory that underlies the N + VP analysis of complex event nominalizations has lost its raison d’être, thereby leaving the analysis orphaned. If morphological merger has no effect on the syntax, at least in terms of the collapsing of domains, then a nominalization formed by an N head and a VP complement could easily have the internal VP syntax of a VP under Tense. This does not describe complex event nominalizations, which are purely nominal in structure, but the discussion so far does not rule out a possible class of nominalizations that would show a VP syntax internally and NP syntax externally. As we discussed in an earlier post, English gerunds are not examples of such a construction, since they are not nominal in any respect (see Reuland 1983 and Kiparsky 2017). However, maybe such constructions do exist. If they don’t, it would be important to understand whether something in the general theory rules them out. We will return to this issue in a subsequent post.

 

References

Baker, M.C. (1985). Incorporation, a theory of grammatical function changing. MIT: PhD dissertation.

Gerdts, D.B. (1990). Revaluation and Inheritance in Korean Causative Union, in B. Joseph and P. Postal (eds.), Studies in Relational Grammar 3, 203-246. Chicago: University of Chicago Press.

Harris, A.C. (1976). Grammatical relations in Modern Georgian. Harvard: PhD dissertation.

Harris, A.C. (1981). Georgian syntax: A study in Relational Grammar. Cambridge: CUP.

Kiparsky, P. (2017). Nominal verbs and transitive nouns: Vindicating lexicalism. In C. Bowern, L. Horn & R. Zanuttini (eds.), On looking into words (and beyond), 311-346. Berlin: Language Science Press.

Marantz, A. (1981). On the nature of grammatical relations. MIT: PhD dissertation.

Marantz, A. (1984). On the nature of grammatical relations. Cambridge, MA: MIT Press.

Nie, Y. (2020). Licensing arguments. NYU: PhD dissertation. https://ling.auf.net/lingbuzz/005283

Pylkkänen, L. (2002). Introducing arguments. MIT: PhD dissertation.

Pylkkänen, L. (2008). Introducing arguments. Cambridge, MA: MIT Press.

Reuland, E.J. (1983). Governing –ing. Linguistic Inquiry, 14(1), 101-136.

Wood, J., & Marantz, A. (2017). The interpretation of external arguments. In D’Alessandro, R., Franco, I., & Gallego, Á.J. (eds.), The verbal domain, 255-278. Oxford: OUP.

Wood, J. (2020). Icelandic nominalizations and allosemy. Yale University: ms. https://ling.auf.net/lingbuzz/005004

 


© 2024 NYU MorphLab
