Friday, May 15, 2020

The Beginning of Syntax (Version 3)

In this post, I argue that we, as a field, are at the beginning of the scientific study of natural language syntax. There are two aspects to this claim. First, work in generative syntax, even on a well-studied language like English, has just uncovered the tip of the iceberg in terms of documenting the relevant facts and generalizations. Second, we are also at the very beginning of understanding the mechanisms of UG that account for these generalizations.

The vastness of English grammar results from its combinatorial nature.

Consider the work reported in Collins and Postal (2012) on imposters: third person singular DPs that refer to the speaker or the hearer. Common examples are DPs like yours truly and the undersigned. One can ask whether an imposter DP can bind a first or a third person pronoun. Going through the kinds of DPs that can bind pronouns with different phi-feature combinations led to some of the discoveries in our monograph. But there are many such combinations. The antecedent can be first, second or third person, singular or plural, masculine or feminine (3 x 2 x 2 = 12 feature combinations). The pronoun can have all these features as well, so that makes 12 x 12 = 144 binding possibilities. Then on top of all these combinations, there are various constructions to look at. Not all combinations yield interesting results, but a surprising number do. I refer the reader to Collins and Postal (2012) for more information.
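The combinatorics are easy to make concrete. Here is a minimal sketch in Python, assuming only the feature inventory listed above (a fuller model of English phi-features would add more features, and the counts would grow accordingly):

```python
# Enumerate the antecedent-pronoun feature pairings discussed above.
# The feature inventory is just the one listed in the text, not an
# exhaustive model of English phi-features.
from itertools import product

persons = ["1st", "2nd", "3rd"]
numbers = ["singular", "plural"]
genders = ["masculine", "feminine"]

antecedent_features = list(product(persons, numbers, genders))
pronoun_features = list(product(persons, numbers, genders))

print(len(antecedent_features))                          # 12 combinations
print(len(antecedent_features) * len(pronoun_features))  # 144 binding possibilities
```

Each of these pairings then has to be checked construction by construction, which is where the real empirical work lies.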

Before Collins and Postal (2012), imposters had not been studied at all. One might conclude that only a few small overlooked areas of English syntax remain to be worked through. I suspect that there are not just a few overlooked areas, but rather many (see below on canalization). Furthermore, even for areas where hundreds of papers and books have been written, such as negation, passive and ellipsis in English, there are no doubt vast amounts of data, generalizations, insights and connections between phenomena that remain to be uncovered.

One reason that it is difficult to move past the tip of the iceberg is what one might call syntactic tunnel vision, data blindness or canalization. A very small number of topics absorbs much of the effort in the syntax and semantics literature. It is hard to notice facts off the beaten path, and it is hard to study facts that you do not know about in the first place. If you do not get off that track, at most you are likely to discover details about what has already been discovered.

In a way, these details are the root of linguistic progress. In reading a recent overview article on ellipsis, I was pleasantly surprised to see the progress that has been made in that domain, essentially through small incremental additions to the sum total of what is already known. A similar process has led to advances in knowledge about locality of movement, binding, argument structure, etc. However, this incremental process makes it easy to overlook uninvestigated areas. The impressive achievements of generative syntax (see, for example, D’Alessandro 2019) should not lull us into complacency.

My perspective on English syntax differs radically from that of Pullum (2009: 18):

"Over the period from about 1989 to 2001, a team of linguists worked on and completed a truly comprehensive informal grammar of the English language. It was published as Huddleston and Pullum et al. (2002), henceforth CGEL. It is an informal grammar, intended for serious academic users but not limited to those with a linguistics background. And it comes close to being fully exhaustive in its coverage of Standard English grammatical constructions and morphology."

As Collins and Postal (2012: 258) note, most of what is covered in their book is not even mentioned in CGEL. Furthermore, there are topics (such as sluicing) that have been the subject of intensive investigation in the generative syntax literature but are mentioned only briefly in CGEL. I believe that in the above quote, Pullum has underestimated the vastness of English syntax. What would it take to get a solid overview of the kinds of syntactic generalizations that characterize English syntax? My guess, shooting in the dark, would be a volume at least 1,000 times larger than the current CGEL. Since CGEL has 1,860 pages, the proposed volume would have at least 1,860,000 pages. I suspect that even a volume this large would not be adequate to the task of giving ‘coverage of Standard English grammatical constructions and morphology.’

My sentiments rather echo those of Ross (2011), who writes: “At the very bottom of all the squibbing I have done is another unpopular conviction: that despite the immense and brilliant efforts of all of us OWG’s, the extent to which we have succeeded in staking out the basic lay of the land in syntax (or anywhere else), the degree with which we have ‘covered’ syntax is less than vanishingly small.”

Going beyond English and the very limited number of syntactically well-studied languages, there may be wildly interesting empirical phenomena around every corner (e.g., indexical shifting, intervention effects, imposters, the verbal linker in the Khoisan languages) that necessitate new mechanisms, or new ways of looking at the interfaces.

It is reasonable to believe that investigating other languages (e.g., Ewe, Setswana, Sasi, N|uu, Kua, Ju|’hoansi) to the same depth that English has been investigated would have important implications for Universal Grammar. Since there are (depending on how one counts) at least 6,000 languages now spoken on earth, the task takes on enormous proportions. Above, I speculated that an adequate grammar of English would have to be 1,000 times the size of the current CGEL. So, a similar investigation of the languages on earth would have to yield at least 6,000 x 1,000 x 1,860 (= 11,160,000,000) pages of text.

Comparing these languages to each other gives rise to another layer of combinatorial complexity. Languages can be studied typologically, looking at large sets of languages and trying to uncover generalizations and gaps that might reveal syntactic properties of UG. But also, each pair of languages could be studied in combination, systematically comparing some phenomenon in one language to a similar phenomenon in the other. Even for work on a single construction, the number of pairwise combinations of languages is 6,000 choose 2 (= 6,000 x 5,999 / 2, which is approximately 18,000,000). Each such comparison could bring to light new kinds of facts and generalizations about individual languages, about correlations between phenomena, and about the limits of cross-linguistic variation.
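These back-of-the-envelope figures are easy to check. Here is a minimal sketch in Python; the 1,000 multiplier and the 6,000-language count are, as stated above, speculative inputs rather than established facts:

```python
# Check the page estimates and the pairwise-comparison count from the
# preceding paragraphs. The multiplier and the language count are the
# post's own speculative figures.
import math

CGEL_PAGES = 1860   # length of the current CGEL
MULTIPLIER = 1000   # guessed scale factor for an adequate grammar of English
LANGUAGES = 6000    # rough count of languages spoken today

pages_per_language = CGEL_PAGES * MULTIPLIER   # 1,860,000 pages
total_pages = LANGUAGES * pages_per_language   # 11,160,000,000 pages

# Pairwise comparisons of languages for a single construction.
pairs = math.comb(LANGUAGES, 2)                # 17,997,000, roughly 18 million

print(f"{pages_per_language:,} pages per language")
print(f"{total_pages:,} pages in total")
print(f"{pairs:,} pairwise comparisons")
```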

To give a simple example, in ongoing work with Nikos Angelopoulos and Arhonto Terzi, we have been investigating passive by-phrases and anaphora in Greek and English. In making a close comparison between the two languages, we have dug up interesting facts about Greek and English passive constructions and the differences between the two.

How many constructions are there to compare cross-linguistically?

To get a bare-bones listing of the relevant constructions, see the Blackwell Companion to Syntax (e.g. across-the-board phenomena, auxiliary selection, complementizer-trace effects, Condition C violations and strong crossover, derived nominals, double object constructions, extraposition, free relatives, gapping, implicit arguments, logophoricity, middles, the person case constraint, quantifier scope ambiguities, quantifier float, right node raising, secondary predication, tough-movement, VP-ellipsis, weak crossover, wh-in-situ). The Blackwell Companion lists 123 topics, but clearly there are many topics that do not appear on that list (for example, I see no entry for imposters), and many of the topics are quite broad and can be broken down into many significant sub-topics.

Another way to think about the issue of the number of relevant constructions is this: the list of constructions that are investigated cross-linguistically should be such that if they were investigated for English alone, the result would yield a comprehensive grammar of English.

Some of the pairwise comparisons based on particular constructions might be easy to do. For example, English does not have auxiliary selection, and Ewe has neither auxiliary selection nor a passive construction. But even in these cases one can ask the difficult question: What does the absence of X (e.g., a passive construction or auxiliary selection) in language L tell us about L? What does the absence of X correlate with? What deeper principles of UG are involved in accounting for the absence of a construction? Putting aside such cases, a significant fraction of the comparisons would be quite rich, yielding important information about the structure of UG (as I have already indicated with the example of by-phrases in the Greek passive above).

So far, I have tried to show that we are just at the beginning of examining the syntactic phenomena of English and other languages. But if we had the right theory, generalizations that are now unexplained would yield under further examination: for a significant range of cases, the theory would successfully account for them. So, when talking about the beginning of syntax, it is not so much the scope of the phenomena that is relevant as the creation of a theory that explains those phenomena.

I am also claiming that we are at the very beginning of constructing a successful theory of natural language syntax. That is, we are at the very beginning of understanding the mechanisms of UG, including structures, operations, principles, the way all of these fit together and how syntax relates to phonology and semantics. Why are we at the beginning of syntax with respect to understanding mechanisms?

First, we do not have a good grasp of the relevant facts and generalizations, either for English alone or cross-linguistically (as discussed in the preceding paragraphs). Given this, it is a safe bet that we are not very close to understanding the mechanisms underlying these facts and generalizations. Time and again, interesting constructions have been revealed by cross-linguistic work, and investigating these constructions gives us a direct window on UG (the kinds of operations, principles and structures that define it).

These remarks are consistent with van Riemsdijk (2018), who, commenting on the work in the Blackwell Companion to Syntax, writes: “What is important here is that constructions have properties, and these properties constitute direct links to theoretical issues. Furthermore, these properties are recurrent properties in the sense that most of these properties are likely to be found in other constructions in the same language as well as in constructions in other languages. In that sense such properties are part of a dense network. And in this network certain properties seem to cluster, that is, they seem to co-occur frequently, linking together certain constructions previously not considered to be related. Thereby such networks of properties result in numerous questions that together make up major challenges for theoretical accounts. That’s the way things often tend to go: as we learn more about a construction or a cluster of constructions and as we associate a set of properties to that construction, we formulate a number of serious challenges to some of the core principles of the theory of syntax.”

Second, for each syntactic generalization, the question is how to analyze it within a given framework of syntactic assumptions. There are many generalizations in our field that have solid, widely accepted explanations (e.g., the empirical differences between unergative and unaccusative verbs, and the structural analysis in terms of the underlying position of the subject). But I would say it is much more frequent to encounter generalizations that either have no satisfactory theoretical explanation at all, or that have two or more competing explanations. To take a simple example, the Dative Alternation (relating give Mary the money to give the money to Mary) has been studied for several decades, but to date there is no widely accepted analysis of it, and in fact there are advocates of completely different analyses. Finding solid explanations will certainly lead us to a deeper understanding of the mechanisms of UG.

My remarks are supported by the observation that the list of achievements in D’Alessandro (2019) is in large part focused on generalizations, not on theoretical explanations of those generalizations. Here is one example from the paper (p. 15):

Parasitic gaps [An A-bar chain can license an otherwise illicit gap in an adjunct]: Ross (1967).

Certainly, it is a major scientific achievement that Ross noted the existence of parasitic gaps as an interesting syntactic phenomenon. It is also a scientific achievement that so many of their properties have been uncovered over the years (e.g., the fact that parasitic gaps can be used to diagnose A’-movement). But it would be more significant, as far as the development of a scientific discipline is concerned, to be able to list a successful analysis. The existence of such an analysis would demonstrate understanding of UG, and would serve as a kind of fixed point for other syntactic investigations into the structure of UG. Many other achievements listed in the paper fall under the same rubric (empirical generalization listed, no theoretical explanation given).

The point I am trying to make is that presenting the achievements of generative grammar in terms of generalizations, rather than in terms of the mechanisms of UG underlying those generalizations, shows that we are a very young field. I am in no way criticizing D’Alessandro’s very useful paper, or the work in generative grammar that she summarizes.

Third, the field of natural language syntax has not really come to grips with alternative frameworks yet. It is striking that there are different frameworks that deal with the same phenomena, but in seemingly very different ways. For example, Relational Grammar takes grammatical relations to be primitives, but Principles and Parameters takes them to be defined (e.g., the subject is defined as Spec TP). More abstractly, there are both proof-theoretic (generative) approaches to syntax and model-theoretic ones (e.g., Arc Pair Grammar). In both of these cases, there might be a right and a wrong answer. However, we must also be open to the possibility that there is some deeper and more abstract way to understand these different frameworks so that in particular cases (with respect to particular mechanisms) they reveal themselves as two sides of the same coin. Other questions include: To what extent can two analyses be understood to be basically the same, even if they are formulated in two completely different frameworks? How do we argue about analyses cross-theoretically?

Fourth, a related issue comes up when considering syntactic versus semantic explanations for phenomena (and similar remarks hold for syntactic versus morphological explanations). Should a particular phenomenon (e.g., NEG Raising) be understood as primarily a syntactic phenomenon or a semantic one? I believe that such questions have answers, but they are not easy to settle. It takes years of careful work to know, for any given phenomenon, whether it has a syntactic explanation or a semantic explanation.

Fifth, even focusing very narrowly on the core syntactic mechanisms of minimalist syntax (see Collins and Stabler 2016 for a formalization), it is clear that we are far from a deep understanding. Some difficult foundational questions include the following: How is it possible to distinguish copies and repetitions (see Collins and Groat 2018)? Are chains necessary? How can informal ideas about spelling out occurrences, remnant movement, linear order and reconstruction be incorporated into the formal definition of Transfer? How does Transfer apply in the course of a derivation? Are labels needed in syntactic representations? How are spelled-out chunks of structure related to one another (the Assembly Problem)? How is Merge defined? How do workspaces enter into the definition of Merge? What are the empirical consequences of the various theoretical choices for the above mechanisms? These questions are all areas of current theoretical discussion.
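To give a feel for what is at stake in the last two questions, here is a deliberately naive sketch, in Python, of set-Merge applying to a workspace. This is my own toy illustration, not the Collins and Stabler (2016) formalization; even so, it forces choices about workspaces, labels and copies of exactly the kind listed above:

```python
# A toy, set-based Merge over a workspace (illustrative only; see Collins
# and Stabler 2016 for a serious formalization). Syntactic objects are
# lexical items (strings) or frozensets of syntactic objects.

def merge(x, y):
    # The simplest possible Merge: form the unordered set {X, Y}.
    # Note what the toy omits: labels, copies vs. repetitions, chains.
    return frozenset({x, y})

def merge_in_workspace(ws, x, y):
    # External Merge: remove X and Y from the workspace and add {X, Y}.
    # Whether and how Merge should manipulate the workspace is itself
    # one of the open questions mentioned above.
    assert x in ws and y in ws
    return (ws - {x, y}) | {merge(x, y)}

ws = frozenset({"the", "dog", "barked"})
ws = merge_in_workspace(ws, "the", "dog")             # build {the, dog}
dp = next(s for s in ws if isinstance(s, frozenset))  # retrieve it
ws = merge_in_workspace(ws, dp, "barked")             # build {{the, dog}, barked}
print(ws)
```

Internal Merge is where the toy breaks down: merging a subpart of an object with that object immediately raises the copy/repetition and workspace questions listed above.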

These observations about the beginning of syntax are not meant in any way to diminish the accomplishments of current syntactic theories. Such theories have yielded deep insights into UG, but we should not thereby reach the conclusion that we are at the end of the study of natural language syntax.

In this, I disagree strongly with Marantz (1995), who envisions an end to syntax (see the appendix for the full quote):

“A vision of the end of syntax – the end of the sub-field of linguistics that takes the computational system, between the interfaces, as its primary object of study – this vision encompasses the completion rather than the disappearance of syntax.”

What are the consequences of these remarks for the field of natural language syntax? I leave this to another post.

Acknowledgments: I thank Nikos Angelopoulos, Noam Chomsky, Erich Groat, Richie Kayne and Paul Postal for remarks on an earlier version of this blog post.

Appendix: Marantz 1995
“In closing, I would like to discuss a certain radical flavor to the MP in Chomsky (1992) and in his Bare Phrase Structure theory of chapter 8. In contrast to the wide-ranging discussion of somewhat intricate data from a number of languages found in Chomsky (1981), for example, Chomsky's latest papers (1992, this volume) treat very little data, and the discussion of data itself is somewhat programmatic. We should not interpret this move to minimalist syntax as a rejection of the enormous volume of extraordinary work within the P&P approach since the early 1980s. On the contrary, this detailed and highly successful work on a wide range of languages has inspired Chomsky to envisage the end of syntax per se. From one point of view, explanations in current syntactic work are emerging at the interfaces with phonology, and, perhaps more extensively, with semantic interpretation (as this is commonly understood). The syntactic engine itself – the autonomous principles of composition and manipulation Chomsky now labels ‘the computational system’ – has begun to fade into the background. Syntax reduces to a simple description of how constituents drawn from the lexicon can be combined and how movement is possible (i.e. how something other than the simple combination of independent constituents is possible). The computational system, this simple system of composition, is constrained by a small set of economy principles, which Chomsky claims enforce the general requirement: ‘do the most economical things to create structures that pass the interface conditions (converge at the interfaces).’ The end of syntax has no immediate consequences for the majority of syntacticians, since most of us have been investigating the interfaces whether we acknowledge this or not. After all, word order is phonology and we have always investigated ‘sentences’ (strings or structures of terminal nodes) under particular interpretations, i.e. with particular assumptions about their LF interface. Chomsky's vision of the end of syntax should have the positive consequence of forcing syntacticians to renew their interface credentials by paying serious attention to the relevant work in phonology and semantics. We should not interpret the diminished role of the computational system within the MP grammar as somehow an abandonment of a previously ‘autonomous’ syntax. The question of the autonomy of syntax has had different content at different times, but whatever the meaning of ‘autonomous,’ syntax in the MP is as autonomous, or non-autonomous, as it ever was. As always, syntax – here the computational system – stands between the interfaces and is neither a phonological nor a semantic component. And, as always, syntax trades in representations that are themselves neither phonological nor semantic. A vision of the end of syntax – the end of the sub-field of linguistics that takes the computational system, between the interfaces, as its primary object of study – this vision encompasses the completion rather than the disappearance of syntax.”

References:

Collins, Chris and Erich Groat. 2018. Copies and Repetitions. Ms., NYU.

Collins, Chris and Paul Postal. 2012. Imposters. MIT Press, Cambridge.

Collins, Chris and Edward Stabler. 2016. A Formalization of Minimalist Syntax. Syntax 19, 43-78.

D’Alessandro, Roberta. 2019. The Achievements of Generative Syntax: A Time Chart and some Reflections. Catalan Journal of Linguistics, Special Issue, 7-26.

Everaert, Martin and Henk van Riemsdijk (eds.). 2017. The Wiley Blackwell Companion to Syntax, Second Edition. Wiley-Blackwell, Oxford.

Huddleston, Rodney and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press, Cambridge.

Marantz, Alec. 1995. The Minimalist Program. In Gert Webelhuth (ed.), Government and Binding Theory and the Minimalist Program. Blackwell, Oxford.

Pullum, Geoffrey K. 2009. Computational linguistics and generative linguistics: The triumph of hope over experience. In Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics, 12-21. Athens, Greece. Association for Computational Linguistics.

Ross, Haj. 2011. Alumni Reply for 50 Years of Linguistics at MIT.

van Riemsdijk, Henk. 2018. Constructions. In Norbert Hornstein, Howard Lasnik, Pritty Patel-Grosz, and Charles Yang (eds.), Syntactic Structures after 60 Years: The Impact of the Chomskyan Revolution in Linguistics, 317-329. Mouton de Gruyter, Berlin.


1 comment:

  1. Reply by Omer Preminger:

    I agree with much of what you write here. I'd like to point out something that I think is often lost in this shuffle, though:

    When it comes to what you call the "core syntactic mechanisms of minimalist syntax" (e.g. phases, Transfer, etc.), there is often an implicit assumption that the function relating our mid-level generalizations to these mechanisms will be a relatively "smooth" one. The belief seems to be that even if we don't have all the mid-level generalizations perfectly right yet, the step of reasoning from these tentative mid-level generalizations to abstract computational principles (of the kind Chomsky has been preoccupied with in the last 25-30 years) is a safe one, because minor perturbations in the mid-level generalizations (e.g. as the result of further discoveries) will result in only minor perturbations in the abstract computational principles proposed to underlie these generalizations. Minimalists seem to think that the answers to "Why is language like this?" will not skew wildly as the result of minor changes to what we think language is like.

    Personally, I've never understood what justifies this belief. Consider the Strong Minimalist Thesis (SMT), for example – the claim that beyond its recursive combinatorial property (viz. Merge), syntax has no sui generis properties, only the requirements imposed by Interface Conditions (and third-factor considerations pertaining to efficient computation in general). Most careful work on the syntax of phi-feature agreement now assumes, whether tacitly or explicitly, a model incompatible with the SMT: the reason probes probe cannot, it turns out, be reduced to the assumption that if they didn't probe, something would go amiss with respect to Interface Conditions. Examples include: Bejar's (2003) Cyclic Agree proposal (where any representational lacunae not fixed upon the first cycle of probing could be remedied on the second cycle, leaving no Interface-based reason for the first cycle not to be purely optional), my own 2014 work (using omnivorous agreement to show that even single-cycle probing cannot be driven by, e.g., "uninterpretability" of features), and Deal's (2015) Interaction & Satisfaction framework (same, but stronger).

    This upends minimalists' understanding of syntax as "Merge plus interface conditions." So it sure looks to me like the abstract function relating our generalizations to the explanations thereof may be more like y=1/x circa zero (minor changes in x skew y wildly) than it is like y=1/x when x approaches infinity (minor changes in x skew y almost not at all).
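    A quick numerical gloss on the 1/x analogy (illustrative arithmetic only):

    ```python
    # Sensitivity of y = 1/x to the same small change in x, near zero
    # versus far from zero.
    def y(x: float) -> float:
        return 1.0 / x

    delta = 0.001
    print(y(0.010), y(0.010 + delta))     # 100.0 vs ~90.9: a wild swing
    print(y(1000.0), y(1000.0 + delta))   # ~0.001 vs ~0.00099999: barely moves
    ```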

