Wednesday, June 14, 2023

Guest Blog Post by Pauline Jacobson on Constituent Structure

Losing sight of the forest through the trees:  

Remarks on constituent structure and constituent structure tests 

Pauline Jacobson

Program in Linguistics

Brown University

Original .pdf Version

This is a guest blog which I was invited to write by Chris Collins. I am very grateful to Chris for the opportunity to write up and post these thoughts.  Needless to say, he doesn't necessarily agree with or endorse anything in these remarks. 

1.  Introduction

It has become common - seemingly almost mandatory - for both Introductory Linguistics books and Introductory Syntax books to lay out a battery of tests for constituency [1] - usually some combination of 'movement' (or: ability to appear at the front); clefts; ability to appear alone as a fragment (usually as an answer to a question); 'replacement by a proform' [2]; deletion; and conjunction. I have not done an exhaustive survey of Introductory books, but am basing this on three fairly current common Introductory Linguistics books: Fromkin et al. 2000 (reprinted 2009); Ohio State Dept. of Linguistics Language Files 12th edition (Dawson and Phelan eds.) 2016; and O'Grady et al. 2017, plus five current introductory syntax books: Carnie 4th edition 2021; Adger 2003 (reprinted 2012); Larson 2010; Koeneman and Zeijlstra 2017; and Kim and Sells 2008. I take this to be a reasonable sample.

The purposes of these remarks are as follows. Sec. 2 points out that the majority of these books commit - sometimes indirectly and by implicature (often via the exercises) - the fallacy of concluding that if something does not pass the test it is not a constituent. Sec. 3 discusses a bigger picture goal. This is to argue that putting such an emphasis on trees (where the tests for constituency are a part of that emphasis) obscures the more important project of trying to model speakers' knowledge of the 'grammar'. (In the case of all of these books, the primary language being studied is English, although some have more cross-linguistic coverage as well.) And if that knowledge has something to do with 'constructing trees', that is only as a byproduct of speakers knowing the combinatory rule system. It is true that under a certain set of assumptions trees are a reasonable representation of (part of) that knowledge when applied to a given sentence, but in fact it is only under a particular set of assumptions about what grammars are that it even makes sense to talk about 'the tree' for a given (unambiguous) sentence. [3] And this is often not the assumption made by the end of these books (particularly Introductory Syntax texts). Of course initial oversimplifications are almost necessary, but the inconsistencies (or the need to revise) are generally never pointed out.

Sec. 4 suggests that there are more general ways to discover the rule system - even at an elementary level. Moreover, while the standard 'tests' themselves are not without interest, it would be better to explore just what they are tests for (and why most of them are weak tests).

Sec. 5 discusses my own reason for being interested in the way this is presented. As a proponent of a particular version of Categorial Grammar, I believe that a given (unambiguous) sentence can be put together in many different ways and so has many different 'constituent structures' (as noted in fn. 3, most theories actually are committed to that in a few cases, but Categorial Grammar allows for far more ways to put together an unambiguous sentence). And this flexibility is supported by what I think is the most reliable of the 'tests', which is conjunction. For example, this view means that in "Lee thought that someone had stolen the peanut butter", there is an analysis by which "thought that someone" is a 'constituent' - as witnessed by conjunctions such as "Lee thought that someone and was determined to find out just who had stolen the peanut butter". But many linguists gasp at the claim that "thought that someone" could possibly be thought of as a constituent: after all, it passes none of the other tests! It seems to me that that automatic 'gasp reaction' is the result of overinterpreting the other tests - as will be shown, lots of other standard constituents pass only the coordination test. The overemphasis on trees has another unhappy consequence for an idea explored in some of the Categorial Grammar and related literature. This is the idea that strings can combine by infixation rather than just being adjacent (see, e.g., Bach 1979 for a CG view, and Pollard 1984 for an HPSG view). If so, then trees in any case are not the appropriate representation for some sentences. Yet the mere idea of an infixation operation is generally dismissed out of hand - purely because of the axiom that we need to represent things as trees!

Sec. 6 concludes with a brief note on the interaction of syntax and semantics and the unstated assumptions which underlie its role in 'figuring out trees' - assumptions that I actually agree with, but with which the model of semantics presented in some of these books is actually inconsistent.

A major disclaimer before continuing. While the remarks below are critical of parts of the books noted above, I do not intend these remarks to be critical assessments of any of these books overall. It might sound at times in this blog that I am almost doing a review of Syntax (and Introductory) textbooks. But that is not my intent - and if I make critical comments about a small part of one book that should not be taken as a 'review' of the rest of the book. In fact, I myself have at times used all three of the introductory linguistics books cited here (in various editions) in Intro classes and have used or will use parts of some of the syntax texts as supplements to my own notes. I am just focusing on the way constituent structure is taught and the non-negotiable and sometimes mistaken assumptions that ensue from that.

2.  Beware the Persistent Fallacy

I begin by documenting the surprising fallacy which persists - sometimes outright, sometimes only by implicature (especially in exercises) - in most of the books I consulted.  As noted above, this is the fallacy of taking the constituent structure tests not just as sufficient tests for constituency but also as necessary ones.  Thus one often sees arguments to the effect that if some string shows a certain behavior it is a constituent.  Fair enough, but from there some of these books either directly or indirectly conclude/suggest that if the string doesn't pass that test it is not a constituent. That conclusion is not warranted: my favorite example to illustrate this fallacy is the relative clause  "which Sally recommended"  in (1): 

(1) I read the book which Sally recommended.

This passes none of the tests except coordination and perhaps deletion [4], but is uncontroversially a constituent. The same holds for "ate an apple" in (2a) and "Sandy ate an apple" in (2b); these pass none of the tests except coordination (note: untensed VPs do pass some of these tests but tensed VPs do not, and CPs pass some but Ss without the "that" do not):

(2) a.  Lee thinks that Sandy ate an apple.  (e.g. *Ate an apple, Lee thinks that Sandy)

b.  Lee thinks that Sandy ate an apple.  (e.g., *Sandy ate an apple, Lee thinks that) 

But these facts are rarely noted. For example, while O'Grady et al. 2017 do not come right out and present the tests as necessary and sufficient (they give pronoun replacement, movement and coordination - pp. 178-9), the failure to point out that many constituents fail these tests would - I would think - certainly suggest this to a student. After all, it is very common for speakers to pragmatically enrich 'if' statements to 'if and only if' statements, and so with no disclaimers to the contrary a student is likely to draw the conclusion that anything which fails a given test is not a constituent. And indeed, exercises (6) and (7) (p. 212) in O'Grady et al. - while not literally committing to the view that these are 'if and only if' tests - would, in my opinion, suggest this to most students, especially given the lack of any cautionary comments. (The reader can judge for themselves; their language is: "Apply the substitution test to determine which of the bracketed sequences in the following form constituents" (and similarly in the next exercise for movement).) We find the same problem in the introductory syntax text by Kim and Sells 2008 on pp. 19-22, where four of the usual tests are featured: clefts, substitution by a proform, fragment answers, and coordination. While the examples they give that fail the test are standardly not considered constituents (but see my remark above and in Sec. 5), Kim and Sells make no mention of the fact that sometimes standard constituents can also fail the tests - for other reasons. And their exercise 3 on p. 33 again at least suggests that these tests are 'if and only if' tests for constituency. The discussion and exercises in Carnie 2021 are similar. On pp. 91-93 he presents four tests (treating the cleft test as a subcase of movement) but nowhere warns students that these are only sufficient and not necessary tests. And the relevant exercise on p. 102 will, I suspect, lead students to conclude that certain things are not constituents on the basis of these tests; the language again asks students to use the tests to determine "whether the bracketed strings in each of the following sentences is a constituent". The discussion in Fromkin et al. 2000 also conveys the idea that if something cannot, e.g., move by Topicalization then it is not a constituent (see especially p. 153).

Some of the books are more careful to point out that these tests are only sufficient conditions. But ironically, some of the very same books go right on to give an exercise which at best implicates and at worst downright presupposes that anything failing the test is not a constituent. A particularly blatant example of this is found in Language Files 2016. The seemingly obligatory parade of tests happens on pp. 217-219. In discussing the 'answers to questions' test, the book makes the outright mistake of taking the inability of something to answer a question to show that it is not a constituent (see Sec. 4 below for discussion of this test). But at least in the Clefting section the book is careful to warn students against the fallacy: "So if a cleft is ungrammatical, it doesn't necessarily imply that the displaced expression does not form a constituent" (this is accompanied by an example of a bad cleft with something that is generally acknowledged to be a constituent). But then - exactly 20 pages later (p. 238) - we find the following exercise: "Use the cleft test to show that "a highly motivated student" is not a constituent in this sentence". (The sentence in question contains "a highly motivated student of mine".) Whoops!! The rest of the questions in that exercise also seem to presuppose that the tests are necessary and sufficient conditions for constituency.

The discussion in Koeneman and Zeijlstra is refreshing in that they not only are careful to point out (pp. 48-49) that the tests are just one-way tests, but also go on to explicitly demonstrate the fallacy in general: "If it rains, you get soaked. But if you are soaked, it does not automatically follow that it is raining. Other factors may have caused this ...." (p. 48). But what happens in the exercises? On p. 51 we find the following: "Determine with the use of tests whether the italicized strings form a constituent." Now maybe my criticism is nitpicky; it may well be that this is simply poorly worded (note that this is similar to the crucial wording in Carnie 2021). Given the warning earlier, I assume that what the book intends to ask (or should be asking) is something more like: "Determine with the use of tests whether there is evidence that the italicized strings form a constituent". That wording would make clear that the answer in some cases will be "no - there is no evidence". But the actual wording at least suggests that for each string one is supposed to give an answer of 'yes' or 'no' to the question of whether something is a constituent; I certainly suspect that many students would take it that way. Here's a potential way to gauge the intent of the books' exercises. Imagine substituting in a clear constituent for the italicized string in any of these exercises - i.e., imagine an exercise that also included in the list a sentence like "I saw the student who graduated last year", where "who graduated last year" is what is italicized in the exercise. If one really wanted to stress to a student that failure on the relevant test just means there is no evidence, it would be useful to have such a case in the exercises. But I have seen no book that includes something like that in the usual constituent structure tests exercises. I have a similar quibble with a small section of the discussion in Larson on pp. 109-110, although it needs to be pointed out that he does correct the misleading inference later. But initially he presents the tests as necessary and sufficient conditions: this is explicit on p. 109 where he gives Principle 4 (P4): "If a string of words can be dislocated, then it is a constituent". He then gives "Bart gave Maggie to Lisa" and follows this with the bad dislocation *"Maggie to Lisa Bart gave", suggesting that this and another ill-formed case are bad because they "violate Principle 4". Of course these sentences can't possibly violate Principle 4, as it is clearly stated only as an 'if' condition. A violation is possible only if one mentally strengthens it to 'if and only if', as I suspect most students would do when told that there is a violation. In fact, the book goes on to say "it follows under P4 that ... "Maggie to Lisa" cannot [be dislocated]". Of course no such thing follows from P4, since it is (correctly) simply an 'if' condition. The subsequent exercise further reinforces the invitation for a student to commit the standard fallacy. To be fair, the misleading parts here are temporary; Larson later (pp. 136-139) carefully points out that the tests are only 'if' tests and not 'if and only if' tests, giving a detailed exposition of the very fallacy. And, refreshingly, he even takes one of them ('replacement by a proform'), shows a case where it fails for a clear constituent, and discusses why it fails (something rarely discussed - see Sec. 4 below).
Moreover, Larson (personal communication) says that this method is a design feature of the book as a whole: hypotheses are presented and then revised. So we might just have a disagreement about effective exposition, but this case does not boil down to presenting a hypothesis that is later revised - that, indeed, is a normal way to do science. Rather, here the book simply presents mistaken reasoning, and then later undoes that (without direct reference to the earlier discussion). The only book of those I consulted that was both careful to point out that the tests are only sufficient conditions and did not have misleading exercises is Adger 2003. His exercises do not ask students to use the tests to determine whether something is a constituent (which invites a yes/no answer), but rather to use the tests to determine that two sentences have different constituencies.

3.  The broader issue:  Losing  sight of the forest through the trees

There is a bigger picture consequence to the heavy emphasis in beginning textbooks (and other places) on trees - where the seemingly obligatory section on constituent structure tests is a symptom of this. And this is that it makes it easy to lose sight of the broader project: that of modeling speakers' (unconscious) knowledge of the 'grammar' of their language, and - pedagogically - how we discover that. (In the bulk of the tree discussions in these books the particular language at issue is English, though some books contain tree discussions for other languages too.) That knowledge consists of knowledge of the combinatory rules/principles; a tree for a given sentence is nothing more than the representation of a proof of its well-formedness according to a particular system of rules. This is in fact conveyed to some extent in some of the discussions - especially those that begin with phrase structure rules - but my impression is that this point is lost in some of the other discussions. (Note, incidentally, that a tree can also be the representation of the compositional semantics, but see the discussion in Sec. 6; I will basically ignore the role of the semantics until then.) To the extent that speakers in some sense 'intuitively know' what is the right tree for a given sentence, that is the result of knowledge of the rules/combinatorics/principles defining each local bit of the tree as well formed. And crucially, a tree (as opposed to a sequence of trees or some other object) appropriately represents a sentence only under the assumption that the grammar proves something well-formed via (only) a set of context free phrase structure rules, which can well be listed in highly schematic form in the 'mental grammar'. Indeed this is the view initially taken in almost all books (though almost always under other terminology), and it is a perfectly reasonable way to begin. But the consequence of adopting additional operations (e.g., movement, 'replacement by a proform', deletion - or, in some theories, infixation rules) ultimately undermines the idea that an unambiguous sentence has a (single) tree representation, rather than a sequence of trees.

A context free phrase structure rule is one that specifies that an expression of some category A is well formed in terms of a sequence of other expressions (possibly including single words) that are adjacent and occur in a particular order. It is conventional to write such rules as, for example: A --> B C. This is a binary rule - with only two daughters - but a context free phrase structure rule in general can have any number of symbols (including just one) on the right side of the arrow, and it can also have single words on the right side of the arrow. The rule written as A --> B C can equivalently be described in other ways, such as "A well-formed expression of category A can (NB: not 'must') consist of an expression of category B followed by one of category C". Or: "An A may (NB: not 'must') consist of a B followed by a C". Or: "If there is a string (expression) of category B followed by a string (expression) of category C then the whole thing is a well-formed expression of category A". Finally, one can also recast this information as a local tree consisting of mother node A and two daughter nodes B and C (where B precedes C). These are all equivalent; none is more 'formal' than any of the others - they are merely different notations. But only if the grammar contains nothing but statements of this kind (albeit possibly in quite generalized form) is a single tree an appropriate representation of a sentence. (See fn. 3 and the discussion in Sec. 5 for the point that a given sentence - even an unambiguous one - can have more than one possible tree; by saying 'single tree' here I again mean to oppose this to a sequence of trees.)
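Since none of these notations is more 'formal' than the others, the same information can be put into yet one more equivalent form: an executable one. What follows is a minimal sketch (mine, in Python, not anything from the books under discussion) of a grammar as a list of context free phrase structure rules plus a recognizer that checks whether the grammar proves a string to be a well-formed expression of a given category; the particular categories and the toy lexicon are illustrative assumptions only.

    # Phrase structure rules, writing A --> B C as  "A": [("B", "C"), ...]
    RULES = {
        "S":  [("NP", "VP")],
        "NP": [("Det", "N")],
        "VP": [("V", "NP")],
    }
    # Rules with a single word on the right side of the arrow
    LEXICON = {"the": ["Det"], "flea": ["N"], "dog": ["N"], "resembles": ["V"]}

    def provable(words, cat):
        """True if the grammar proves `words` a well-formed expression of `cat`."""
        if len(words) == 1 and cat in LEXICON.get(words[0], []):
            return True
        for (b, c) in RULES.get(cat, []):
            # try every way of splitting the string into a B followed by a C
            for i in range(1, len(words)):
                if provable(words[:i], b) and provable(words[i:], c):
                    return True
        return False

    print(provable("the flea resembles the dog".split(), "S"))   # True

A tree for "the flea resembles the dog" is then nothing more than a record of the successful calls to provable - which is just the point made above about trees as representations of proofs.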

As suggested above, this is not to say that the hypothesis that a sentence can be represented by a tree commits one to a theory of grammar with simply a list of phrase structure rules. As far as I know, no current theory maintains that the grammar is simply a list of these rules - all theories collapse the rules into generalized schemas, possibly with additional principles restricting what actually instantiates these schemas. The most general such schema (and hence the one in need of many additional principles to restrict its actual instantiations) is - in current parlance - (External) "Merge". Leaving aside definitions involving set formation or other more complex operations (such definitions are never used in any introductory book that I have seen) and leaving aside "Internal Merge" for the moment, there are various definitions; I take as point of departure the one in Adger 2003. This defines Merge as a "constituent building operation" and is illustrated by a local tree with Z (a variable over node labels) as mother and X and Y as daughters (again, these are meant to be any node labels). Adger goes on to note that in theory Merge could join three or more objects but begins with the hypothesis that it joins only two; hence this immediately limits the possible phrase structure rules (equivalently, local trees) that instantiate "Merge" to only binary ones. Note then that the Merge rule schema could equally well be written as Z --> Y X, where these are any node labels. Incidentally, Adger also intends for this to be a schema without order, just a schema saying the two combining expressions are adjacent. Since X, Y, and Z here are all variables over node labels, the above is of course equivalent to Z --> X Y, but the (presumed) intent of his discussion is that even when more information is put in there about the node labels as in, say, an instance of merging NPs and VPs to give S, the general schema would allow for either order of the daughters. The actual order would be fixed by other principles. But that still leaves us in the land of schemas over phrase structure rules: merging NP and VP to give S just abbreviates the two cases of S --> NP VP and S --> VP NP. Additional principles specify that only the former exists in English. This idea was made explicit in Generalized Phrase Structure Grammar (Gazdar et al. 1985), which made a distinction between Immediate Dominance rules and Linear Precedence rules. For example, there would be a rule schema VP --> V, NP (the comma is intended to indicate no order) and a separate head-first principle which is a principle on how to instantiate this schema.
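The ID/LP split lends itself to the same kind of sketch (again my own illustrative encoding, not Gazdar et al.'s actual formalism): an Immediate Dominance rule names the daughters without ordering them, and the Linear Precedence statements then filter the possible orders.

    from itertools import permutations

    ID_RULES = {"VP": ("V", "NP"), "S": ("NP", "VP")}   # unordered daughters
    LP = {("V", "NP")}    # a toy 'head first' statement: V precedes NP

    def instantiations(mother):
        """Expand an ID rule into the ordered rules the LP statements permit."""
        for order in permutations(ID_RULES[mother]):
            # reject any order where some b follows an a despite an LP
            # statement requiring b to precede a
            if all((b, a) not in LP for i, a in enumerate(order) for b in order[i+1:]):
                yield (mother,) + order

    print(list(instantiations("VP")))   # [('VP', 'V', 'NP')] only
    print(list(instantiations("S")))    # both orders survive: more principles needed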

Merge is by no means the only phrase structure rule schema that is commonly used - it is merely the most general (and hence the most in need of additional principles to predict what can actually instantiate it in any given language). A theory that begins with X-bar theory (or uses that to limit the possible instantiations of Merge) is also using (generalized) phrase structure rules, for the X-bar rules are also phrase structure rule schemas. (It is odd that X-bar theory is often presented as an alternative to 'phrase structure grammar'. For sure it is an alternative to a by now Straw Man theory in which each individual phrase structure rule is listed, but it nonetheless embodies the claim that there are phrase structure rules: X-bar theory is simply a theory of the actual ones used in 'constructing' - i.e., proving well formed - expressions in a language.) Note that X-bar theory is also not incompatible with Merge; it is just a way of further refining the set of possible rules instantiating generalized binary "Merge". The X-bar schemas themselves are also still too general. For example, what category or categories can instantiate 'Spec' position depends on what instantiates the sister of Spec (the same holds for the other two rules in the classic X-bar schemas), and so additional principles are still needed. Other theories have different generalized schemas for phrase structure rules; Generalized Phrase Structure Grammar (Gazdar et al. 1985) was a highly worked out theory along these lines. And many versions of Categorial Grammar, for instance, have categories of the form A/RB and A/LB (the notation differs among different authors), where an expression of category A/RB is something that combines to its right with an expression of category B to give a larger expression of category A. This is simply equivalent to having a generalized rule: A --> A/RB B. Similarly for the 'left slash'. These two rules, incidentally, are not meant as schemas that need to be restricted further (unlike a general schema A --> B C); anything which instantiates these categories can combine in the appropriate way. Interestingly, as pointed out above, some versions of CG also have infixation rules, which are therefore not compatible with tree representations; we return to that point below. (See Bach 1979 and many works since; for further discussion see Jacobson 2014.)
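To make the slash categories concrete, here is a minimal sketch (my own encoding; the tuple notation is an assumption of the sketch, not standard CG notation) in which the only combinatory statements are the two generalized rules just given:

    # A/RB is encoded ("A", "R", "B"); A/LB is encoded ("A", "L", "B").
    def combine(lcat, rcat):
        """The two generalized rules: A --> A/RB B  and  A --> B A/LB."""
        if isinstance(lcat, tuple) and lcat[1] == "R" and lcat[2] == rcat:
            return lcat[0]          # slash category finds its B on the right
        if isinstance(rcat, tuple) and rcat[1] == "L" and rcat[2] == lcat:
            return rcat[0]          # slash category finds its B on the left
        return None

    VP = ("S", "L", "NP")     # S/LNP: combines with an NP on its left to give S
    TV = (VP, "R", "NP")      # (S/LNP)/RNP: a transitive verb

    print(combine(TV, "NP"))  # ('S', 'L', 'NP'), i.e. a VP
    print(combine("NP", VP))  # 'S'

Note that nothing here needs to be restricted further: the categories themselves carry the distributional information, and the two application rules do all the combinatory work.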

In any case, the discussion here is intended to make the point that if one begins with the assumption that the only combinatory principle is binary (or n-ary) external "Merge" (i.e., a generalized rule of the form Z --> X Y), X-bar theory, or some other phrase structure rule schema(s), then indeed a tree is an appropriate representation of the combinatorics proving a particular sentence well-formed. But an overemphasis on the idea that the fundamental way to represent sentences is by a tree (rather than a sequence of trees and/or some other object) obscures the fact that at the end of the day a theory with only phrase structure rule schemas is not the theory assumed in most of these texts. In fact, the minute one introduces 'movement', 'deletion' or 'replacement by a pronoun' as a test for constituency one is already committed to tree altering operations, and one is implicitly endorsing the view that using a single tree to represent many sentences is incorrect. But this is not pointed out. Might not a perceptive student be confused? If movement is recast as Internal Merge the same basic point about a tree holds. After all, combinatory rules that manipulate the internal structure of one of the input items (by possibly deleting or silencing it) also don't have good (honest) single tree representations. This is because there is a difference between the internal structure at the input and the output of the constituent containing the internally merged item (even if nothing more than the addition of a feature suppressing the phonology of that constituent). Sure, it can be drawn in various ways in a tree-like fashion - as can multiple levels, with the common use of arrows showing a movement path - but it should be clear that with these devices it makes little sense to talk about the constituent structure of a sentence that has undergone movement.

A separate point is that the insistence on trees as the primary object of study (rather than the rule system, which may or may not yield trees as the best representation) makes a theory with - for example - infixation rules as part of the combinatorics (see Bach 1979 and much Categorial Grammar work since) completely unfathomable to students, since it doesn't lend itself to tree representations. (Not that the books discussed here endorse infixation, but there's no reason the idea should provoke such incredulity other than the fact that an infixation operation can't be represented in a tree.) Is it completely out of bounds to ask whether "look up" is a constituent in "Lee looked the information up"? The very inability to have 'discontinuous constituents' in tree representations puts an infixation combinatory process 'off the table' and hence leads to multiple levels as the only solution for this type of case. The inability to even contemplate anything but movement here is simply an artifact of overinterpreting the primacy of trees and not seeing them just as representations of proofs according to one type of rule system.
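For what it is worth, such an operation is trivial to state; here is a one-line sketch in the spirit of Bach 1979's 'wrap' operations (my own simplification, not his formalization) for the particle verb case:

    def rightwrap(head, arg):
        """Combine by infixing the argument after the first word of the head."""
        return head[:1] + arg + head[1:]

    print(rightwrap(["looked", "up"], ["the", "information"]))
    # ['looked', 'the', 'information', 'up'] - no single tree depicts this honestly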

4.  Back to the constituency tests - what are they useful for, and what are useful tests?

To say some string is a 'constituent' means that there is some category A such that the grammar proves that the string in question is a well-formed member of category A (where that category figures in the statement of other rules). Leaving aside semantic intuitions, the heart of determining the existence of such categories centers on distributional facts (some of the standard 'tests' are simply special instances of this more general point). The clearest arguments for such categories are not simply about distributional facts, but are distributional facts which can only be described by a recursive rule system. As a demonstration of these points for a perfectly simple case, take one way we can determine that there is a single category (call it NP) for the expression "the large flea", and that this is the same category that we find in "the craziest idea I have ever heard", "the potato head statue in front of the supermarket", "Grumpy the Dwarf", "wine in a plastic bag", "the idiotic belief that Napoleon is president", etc. It is fairly easy to show this at the outset by beginning with a description of what sorts of things can go before and after a verb like "resemble". Whatever the ultimate details turn out to be of the rule system that permits a certain set of strings to be before "resemble" and a set to be after "resemble", it becomes quickly clear that the 'before' set and the 'after' set are the same, although some combinations require a bit of pragmatic imagination. If we tried having a set of rules to predict what is in the 'before' set and a separate set for the 'after' set, an obvious generalization is missed - and the grammar (no matter how the rest of the details go) becomes more complex for having to define that set twice. Note that tests such as 'movement' can be thought of in exactly the same way. Rather than relying on an axiom 'only constituents can move', one can easily teach this in such a way as to make it clear why that is likely to be true and hence what that test is telling us. Assuming here - for the sake of discussion - that the account of 'dislocated' constituents involves movement, one can merely posit a rule moving things (it doesn't even need to be written out), and note what it would take to describe the set of things that can move. If we were to list a large group of strings the rule would be hugely complex (and impossible, in fact, given the point about recursion below), but if there were a small set of categories that all the moveable strings belong to, only that set needs to be named in the description of the movement process. This also makes clear that movement is not an 'if and only if' test. Most of the other tests are similarly ultimately just special cases of arguments from simplicity.

The final nail in the coffin for not defining a single category for, e.g., all the strings that can appear before and after "resemble" is the existence of recursion. One can easily show this in the NP case in virtue of sentences like "The potato head statue in front of the supermarket resembles the wine that resembles the craziest idea I have ever heard". (Enough repetition of "resemble" can seem weird, but the same point can be made substituting in "is similar to" for "resemble".) Or: "The potato head statue that resembles Mr. Peepers resembles the wine that resembles Grumpy the Dwarf". The existence of recursion seals the deal: there is no way to describe (whatever the full form of the grammar is) the set of things that can go before and after "resemble" without invoking a single category and having the grammar define a well-formed set of strings of that category. Then one can just use that category in the statement of other rules.
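The point can be made concrete by extending the toy recognizer sketched in Sec. 3 (the categories and lexicon remain my own illustrative assumptions): a single recursive NP category feeds both sides of "resembles", so the 'before' set and the 'after' set are defined exactly once.

    RULES = {
        "S":  [("NP", "VP")],
        "VP": [("V", "NP")],
        "NP": [("Det", "N"), ("NP", "RC")],   # NP --> NP RC is the recursive step
        "RC": [("C", "VP")],                  # e.g. "that resembles the dog"
    }
    LEXICON = {
        "the": ["Det"], "wine": ["N"], "statue": ["N"], "dog": ["N"],
        "resembles": ["V"], "that": ["C"],
    }

    # The same NP category recurs in both subject and object position:
    print(provable("the wine that resembles the dog resembles the statue".split(), "S"))
    # True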

The most general case of recursion comes from coordination - since almost all categories can coordinate. Thus there is a generalization that for (almost) every category X, strings of the form X and X are well-formed, and distribute just like other strings of category X. This could be the result of a phrase structure rule schema (possibly itself derived from something more general) of the form X --> X and X (for X a variable over categories). Similarly for "or". There are other options: if only binary rules are allowed this obviously cannot be right, but alternatively this could be broken down into two binary schemas. See Munn 1993 and Jacobson 2014 for two different (but somewhat converging) ways to do this, stated within two different theories. [5] Which of these is correct is not relevant to the immediate points here (and in either case the exact schemas in question might themselves just be instantiations of much more general schemas). Assuming that the binary solution is correct - giving structures of the form [X [and X]] and [X [or X]] - we can note that there still are a few categories which do not coordinate, as we don't find things like *"Lee [[or Sandy] and [or Jo]]" and *"Lee [[and Sandy] or [and Jo]]". This means that a failure to pass the coordination test is - unfortunately - still not definitive proof that something is not a constituent, but the vast majority of categories do pass, including the three 'holdouts' mentioned above (relative clauses, tensed VPs, and Ss).
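Continuing the same sketch, the binary decomposition [X [and X]] can be added schematically for every category of the toy grammar (ignoring agreement, and ignoring the few non-coordinating categories just noted; the primed intermediate category label is my own):

    LEXICON["and"] = ["Conj"]
    for cat in list(RULES):
        RULES.setdefault(cat + "'", []).append(("Conj", cat))  # X' --> and X
        RULES[cat].append((cat, cat + "'"))                    # X  --> X X'

    print(provable("the wine and the statue resembles the dog".split(), "S"))
    # True (the toy grammar has no agreement, so 'resembles' is not flagged)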

Many textbooks are reluctant to place too much emphasis on the coordination test - not because many categories fail the test (as noted above, almost every category does pass the test) - but rather the opposite. Under "standard" assumptions, lots of things that are taken not to be constituents do allow coordination. In other words, there are many instances of what is often taken to be nonconstituent coordination (or structures which are the result of Across-the-Board movements). Examples include those in (3), and it is easy to construct many others.

(3) a.  Mary loves and John hates studying compositional semantics.

b.  Cap'n Jack restaurant served clams on Monday and lobster on Tuesday.

c.  Lee bought and Sandy cooked clams on Monday and lobster on Tuesday.  

d.  Lee believes and suspects that everyone else also believes that the earth is flat.

We return to these in Sec. 5.

But coming back to the usefulness (or not) of the standard 'tests', it could be illuminating when these are presented to have some discussion as to why some clear constituents fail them - what are they actually testing for? I am not sure that the answer to that is clear in all cases, but at least partial answers are available for some. Take, for example, the case of fragment answers to questions. Leaving aside some complications due to NP and PP mismatches (see Jacobson 2016 for discussion), it is roughly the case that the category of the fragment answer must match the category of the question word (or, in some cases, of the larger Pied Piped expression containing the question word). This matching requirement follows in various different ways. Under the ellipsis view of fragment answers in Merchant 2004 it follows because the fragment itself is moved and must match the wh-phrase remnant of the question. In the view sometimes called 'direct matching' of Groenendijk and Stokhof 1984, Ginzburg and Sag 2000, and Jacobson 2016, there is a question/answer discourse unit, and something counts as an 'answer' to the question only if the category of the question word or phrase matches that of the fragment answer. (In that case, the semantics puts the two together in a particular way so as to get the relevant inference.) But either way, the possible categories for fragment answers will be limited by the categories for question words.

It is thus not surprising that, for example, a relative clause cannot stand as a fragment answer to a question, because there is no way to question a relative clause. We would expect "who handed in all of the homeworks" to be a good answer to a question only if there were some question word - call it "whiprop" - whose syntactic category was the same as that of a relative clause and where (4) was a good question:

(4) *Whiprop did you pass every student?

(meaning: what is the property such that you passed every student that had that property)

If we had such a question, then we would expect "who handed in all the homeworks" to be a good answer. So the space of fragment answers is bounded by what question words/phrases we have. Of course this raises another interesting question: why do we have the question words (or phrases, in Pied Piping cases) that we do? That is an interesting question but only indirectly bears on the issue of constituency. One can and should ask similar questions for each of the tests.

5.  Why is this blog so long?  The agenda of a Categorial Grammarian

My interest in the role of constituent structure and constituent structure tests is driven in part by a specific agenda. This is to lend plausibility to, and ward off a common objection to, some of the interesting results and techniques in Categorial Grammar - a framework rarely taught and arguably underappreciated. This is not the place to give a tutorial on CG nor on many of its facets, but just to highlight the one regarding constituency. (I do wish to note that I think the main advantage of CG is that it provides a beautiful and smooth fit between the syntax and the compositional semantics, but since this is not a piece about semantics I leave it to the interested reader to explore that further; see Jacobson 2014 for a relevant textbook.) The one bit of background needed here is the idea that syntactic categories are encodings of distributional facts. There is a small set of primitive categories, and a recursive definition of other categories which basically encode what argument an item takes (in general - what will be its sister) and what will be the result (the mother category). As noted in Sec. 3, this then allows extremely general phrase structure rule schemas. Thus an expression of category A/RB is something that wants a B on its right to give an A, and A/LB wants a B to its left to give an A. (There might also be categories which say that the expression is a circumfix - i.e., the material will take something as an infix - and categories which would say that something is an infix.) The directional features on the slashes are not listed item by item, but given by general rules (see Jacobson 2014, Chapter 6, for discussion).

There are many versions of Categorial Grammar, but following a tradition advanced in the 1980s by, e.g., Steedman 1987, Dowty 1988 and others, the syntactic (and semantic) combinatorics are such that many unambiguous sentences have many ways to be put together (giving a single meaning). See also related work in the Type Logical Grammar tradition. Thus a simple sentence like "Lee loves bananas" can be put together in the usual way: "loves" and "bananas" combine to form a constituent (VP - or, in CG terms, S/LNP) which then combines with the subject "Lee". But there are other operations according to which "Lee" and "loves" can first combine (three possibilities for this are outlined in Jacobson 2014) such that "Lee loves" is a constituent which is expecting to find an NP on its right to give an S. The way the semantics and syntax work together means there is no problem getting the right argument structure: "Lee loves" denotes the set of things that Lee loves (while "loves bananas" in the other analysis denotes the set of lovers of bananas). This automatically allows for right node raising cases like that in (3a) above under the assumption that two expressions of the same category can conjoin to give a third of that category. Dowty 1988 showed that the same types of techniques allow "clams on Monday" to combine to become something wanting a transitive verb to its left to give a VP - i.e., (S/LNP)/L((S/LNP)/RNP).
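Here is a minimal sketch of why no argument structure is lost (my own toy encoding; type raising the subject and composing it with the verb is one of the routes outlined in Jacobson 2014, though the encoding here is not hers):

    loves = lambda obj: lambda subj: (subj, "loves", obj)   # toy verb meaning
    lift = lambda x: lambda f: f(x)                          # type raising
    compose = lambda g, f: lambda x: g(f(x))                 # function composition

    lee_loves = compose(lift("Lee"), loves)   # "Lee loves": true of what Lee loves
    loves_bananas = loves("bananas")          # the usual VP analysis

    print(lee_loves("bananas"))               # ('Lee', 'loves', 'bananas')
    print(loves_bananas("Lee"))               # ('Lee', 'loves', 'bananas') - same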

A common reaction to the idea that "Lee loves" can be a constituent is complete disbelief: does the person advocating this have the slightest idea of what we should have learned in Introductory Linguistics/Syntax? Don't we have lots of evidence that in "Lee loves bananas", "loves bananas" is a constituent - i.e., that there is some category (VP) and the grammar proves "loves bananas" to be a well-formed member of that category? Yes of course we do - such evidence comes from distributional facts including recursion and is often presented early on in a syntax text. (See, e.g., Larson 2010, who nicely lays out this evidence.) But a careful consideration of just what the evidence shows is that it only shows that the sentence has one such analysis - not that that is the only analysis! That this is all it shows is almost never pointed out. Continuing with this point, take the claim that "do so" is a VP proform (I have questioned that in fn. 2, but that is irrelevant here; for the sake of discussion let us assume that the textbook wisdom on that is correct). All that shows is that there is a proform of some category and that "loves bananas" can be analyzed as a well-formed expression of that category. None of this shows that there is not also another (semantically equivalent) analysis by which "Lee loves" is an expression of some other category - albeit one with limited distribution.

Only if we make the assumption that each unambiguous sentence has just one analysis does it follow that "Lee loves" cannot also be a constituent. And I think that this is another assumption that is at least implicitly made in many books, and which becomes sufficiently ingrained that students (and many others) rarely even notice that showing that "loves bananas" can be a constituent hardly means that that is the only analysis. This assumption creeps in when, for example, a particular tree is shown in which some sequence of words is not a constituent, from which it is concluded that that sequence is not a constituent in the sentence in question. All that follows is that the sequence is not a constituent under the analysis shown by that particular tree, but it doesn't mean there aren't other ways that the grammar proves the sentence well-formed. In fact, we return now to the point made in fn. 3 - it seems to me that any theory really does have to be committed to the view that some sentences have more than one analysis even under a single meaning. Given "Roses are red and violets are blue and irises are white", it would take a lot of extra work and machinery - under almost any conceivable view of coordination - to block two analyses of this (which are semantically equivalent). The two analyses are: (i) [[roses are red and violets are blue] and [irises are white]] and (ii) [[roses are red] and [violets are blue and irises are white]]. (Whether the rules for coordination are binary or not makes no difference here.) If we substitute in "or" for one of the occurrences of "and" we find that the fact that there are two analyses is a happy result, since "Roses are red or violets are blue and irises are white" is ambiguous in just the way the two possible bracketings predict. It just happens that the semantics of "and" is associative, and so no ambiguity happens to arise in the case where there are two occurrences of "and". So as far as I can see, all or at least most theories really do allow for the possibility of an unambiguous sentence having two analyses (as I say - it would take a lot of extra machinery to preclude this), and the notion that "Lee loves bananas" has two analyses should not be such a shock. But it often is a shock, due to the common talk by which an unambiguous sentence has a (single) tree. I personally think that if trees were more clearly presented as the end result of a rule system with respect to a given sentence, it would be clearer at the outset that the rule system could work in two different ways to prove a sentence well-formed even under a single meaning.
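The associativity point can even be checked mechanically (a pure Boolean sketch, abstracting away from everything else that is interesting about natural language conjunction): the two bracketings of "A and B and C" never come apart, while the two bracketings of "A or B and C" do.

    from itertools import product

    for a, b, c in product([True, False], repeat=3):
        assert ((a and b) and c) == (a and (b and c))    # 'and' is associative

    splits = [(a, b, c) for a, b, c in product([True, False], repeat=3)
              if ((a or b) and c) != (a or (b and c))]
    print(splits)   # non-empty: e.g. (True, False, False) - the bracketings differ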

Returning to the predictions of Categorial Grammar (and related theories): depending on the exact version that one adopts, one can come up with a grammar in which any substring of a given sentence is a 'constituent' and should therefore pass the coordination test. We noted above that the categories that can coordinate will have to have some limits; assuming a binary view of coordination we saw above that there are constituents that still cannot conjoin. But the flexibility of constituency combined with the assumption that only 'constituents' can conjoin leads to the possibly surprising prediction that almost any substring can pass the coordination test. And while it is easy to construct cases that appear to threaten this generalization, they often have other reasons for being bad. For example, Adger 2003 (p. 125) gives some examples of bad cases, but these can all be tweaked into similar sentences which are good. A bad example is *"Owly hated [the evil] and [the wise] eagle" - but with a bit of prosody, the right focus, and other tweaks we can construct the parallel good sentence "Owly hated (both) THE EVIL and A WISE eagle" (caps here used to indicate contrastive stress). Granted, this is better with the addition of "both", but that should not affect the main point. Or his example (79) *"Owly hated the and loved the bat" is easily fixed to remove the strange repetition of "the" - giving "Owly hated two and loved three bats". If we take seriously that conjunction is a test for constituency and have various ways to put these together, these results then become unsurprising.

But wait, says the skeptic again: what about all those standard constituent tests that we learned about in Intro (Linguistics or Syntax)? "Lee loves" passes almost none of them except coordination. (Ditto for all the 'nonstandard' constituents discussed above.) Of course, it is this very reaction (which I literally have heard) that prompts this blog in the first place. The reaction stems simply from bad teaching and the fallacy discussed at length in Sec. 2. As stressed at several points in this blog, plenty of 'standard' constituents also fail the tests. Once again, we would ultimately want an explanation for this (for both the case of "Lee loves" and for the case of the other constituents), but again we can discover this only by looking case by case to see what these tests are testing. Jacobson 2014 discusses this in greater detail: there it is shown that there is good reason why we would not expect to find an anaphor of category S/RNP (the category of "Lee loves"). The remarks there extend to predict that we would not find an anaphor of the category of something like "clams on Monday". And the limited distribution of these 'funny' constituents also follows from the way that the word order generalizations work. As to why "Lee loves" cannot be the answer to a question - this is the same as the earlier discussion about relative clauses: there is no question that this can answer.

There is another piece of evidence often used to determine constituent structure: c-command, as it plays a role in the statement of, e.g., Weak Crossover, the 'binding' of reflexives, the distribution of Negative Polarity Items, etc. The claim that "Lee loves bananas" can be put together by first combining "Lee" with "loves" wreaks havoc with standard notions of c-command. As such, this is another reason why the flexible constituent structure possibilities discussed above draw gasps: how can one possibly propose that there is an analysis by which "bananas" c-commands "Lee"? Won't this completely ruin all generalizations based on c-command?

The answer is no. This is another great example of losing sight of the forest through the trees - and of taking the trees as having some primacy over a deeper understanding of the phenomena in question. All of the phenomena above should have something to do with the way the semantics is put together, and one might hope that any c-command generalizations are the result of something else. Put differently, it seems highly unlikely that the grammar contains, for example, a statement to the effect that a Negative Polarity Item must be c-commanded by a downward entailing operator. If one takes the general line begun in Kadmon and Landman 1993, or many later variants of this, downward entailing environments are ones where semantic strength is reversed, and that remains the case regardless of the order in which things are put together. The hope is that the distribution of Negative Polarity Items follows from their meaning combined with the strength reversing property (and perhaps additional principles). Weak Crossover effects can also be the result of the way 'binding' works, where c-command need not be explicitly built into the grammar in any way. For one way to accomplish this, see the variable-free account of binding in Jacobson 1999 and subsequent work: the fact that there is no 'binding' relationship possible in *"His_i mother called every fourth grade boy_i" follows from the basic argument structure of "call" and the way that interacts with the operation that effects 'binding'. The order in which the combinatorics is put together makes no difference.

6.  A note about semantics 

It is also typical for introductory books to point to ambiguous sentences (or phrases) like "The spy saw the man with the binoculars" or "old dogs and cats" as a help in determining constituent structure, since in each case these should have two different trees. Such discussions generally appeal to common sense semantic intuitions as a way of constructing these trees - and thereby (the unspoken part) as a way of determining the possible rules that instantiate the generalized schema. This is fine, but I always find it striking that this general strategy assumes something about the syntax/semantics interface but never makes that assumption explicit - seeming to take it as common sense. But actually this view is inconsistent with the model of the syntax/semantics interface assumed by the end of some of these books - at least those that mention rules producing a Logical Form which is the input to the interpretive procedure. (Those books include Adger 2003, although only in passing, Carnie 2021, and Koeneman and Zeijlstra 2017.) Notice that the apparent common sense observations about ambiguity only make sense under the assumption that the trees sanctioned by the initial p.s. rule schema (X-bar theory, Merge, or whatever) are what is interpreted! In fact, the simplest way to conceive of this would be not to have the syntax 'produce' trees which are then interpreted, but rather to adopt Direct Compositionality (yes, another agenda-driven plug) whereby each phrase structure rule (schema) is coupled with a semantic rule giving the meaning of the mother in terms of the meaning of the daughters. (Under this view, there is no level which is interpreted: interpretation proceeds in tandem with the syntactic building operations.) This is exactly the view taken in Montague 1970 and other works by Montague, and the one taken in Categorial Grammar, among other theories (Gazdar et al. 1985 also took this view, to give just one example). I personally think it is right, but generally no comment is made about how the syntax and semantics explicitly work together, and what 'trees' have to do with meaning.
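A minimal sketch of the Direct Compositional architecture may make the contrast vivid (my own toy encoding, not Montague's actual fragment): each syntactic rule is paired with a semantic rule, meanings are computed in tandem with the proof of well-formedness, and no finished tree is ever handed to an interpretive component.

    # Each rule: (mother, left daughter, right daughter, semantic combination)
    SYNSEM = [
        ("S",  "NP", "VP", lambda np, vp: vp(np)),
        ("VP", "V",  "NP", lambda v, np: v(np)),
    ]
    LEX = {
        "Lee":     ("NP", "Lee"),
        "bananas": ("NP", "bananas"),
        "loves":   ("V", lambda obj: lambda subj: f"loves({subj},{obj})"),
    }

    def merge(d1, d2):
        """One step of syntax and semantics together: nothing is 'sent off'
        for interpretation afterwards."""
        for mother, c1, c2, sem in SYNSEM:
            if d1[0] == c1 and d2[0] == c2:
                return (mother, sem(d1[1], d2[1]))
        return None

    vp = merge(LEX["loves"], LEX["bananas"])
    print(merge(LEX["Lee"], vp))    # ('S', 'loves(Lee,bananas)')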

In sum, then, if one is going to introduce ambiguity and more generally semantic intuitions as a way to justify a particular set of rules (i.e., a particular constituent structure), it would be helpful to make explicit exactly the assumptions about how the semantics works. I don't know that this is ever done, and in some cases these assumptions are discarded by the end of the books, but no comment is made on how that may or may not affect the initial 'evidence' given for constituent structure.

7. Conclusion

My main goal has been to argue that much of the discussion about trees in so many introductory books (including the semantic discussions, constituent structure, the role of c-command, etc.) potentially obscures the details of the model of grammar being assumed, the project of modeling that, and the search for deeper explanations as to why things work the way they do. This has led to certain ideas about syntax becoming 'non-negotiable' truths when indeed they should certainly be up for discussion. Not having embarked on the difficult project of writing a syntax textbook myself, I should - and to some extent do - feel loath to criticize others for what is hugely difficult territory to navigate, since obviously oversimplifications are needed at the outset. Nonetheless, the result of ignoring these issues is that the problems I am addressing here go beyond just the teaching of syntax itself, for some of the mistaken or questionable conclusions that result have become very deeply ingrained in the field as a whole.

Footnotes 

[1] Incidentally, my impression is that trotting out the battery (or some subset) of constituent tests was not always de rigueur in introductory books but has become so only in the last few decades. To check this intuition (albeit not very systematically) I looked at four earlier syntax texts: Baker 1978, Soames and Perlmutter 1979, van Riemsdijk and Williams 1986, and Napoli 1993. Sure enough, none of these books does this, insofar as I could find. Baker does discuss the 'proform' replacement test towards the end of his book, but only as a way of discussing that phenomenon itself, not as a means of testing for constituents. He assumes a certain constituent structure, and uses that to argue for a specific conclusion about the conditions for 'one'-replacement.

[2] I think the 'replacement by a proform' test is problematic, especially in the case where "do so" is treated as a single 'proform' that 'replaces' certain VPs. Does it make sense to call "do" here part of a 'proform'? Is it not the same main verb "do" that takes NP but not VP complements: "do nothing", "do something", "do several things", "What he had done was he took out the garbage" (specificational sentence with free relative in precopular position), "What he had done was stupid" (predicational sentence with free relative in subject position)? In all such cases the complement or the 'missing' material (trace, gap, whatever) is arguably an NP. If this is the same verb "do", then "so" is the proform (not "do so"). It does remain unclear what the category of "so" is, since it does not behave like an NP (note: *"Lee changed the lightbulb, and Sandy either did so or nothing" vs. "... and Sandy either did so or did nothing"). Nonetheless, given that there is a main verb "do" with the same semantic restrictions as the "do" that occurs with "so", it is reasonable to think they are the same verb. Thus note *"Lee knew the answer and Sandy did so too", alongside *"Lee did several things, including know the answer". If this is main verb "do" then "so" is the anaphor; it is also a CP anaphor, but with rather limited distribution ("think so", "hope so", but not *"regret so", etc.). Thus the "do so" construction is complex to analyze, but in any case calling "do so" a 'proform' that 'replaces' a VP is questionable.

[3] By saying that books convey the idea that an unambiguous sentence is represented by a (single) tree I mean this simply in opposition to the representation of a sentence as a sequence of trees, or some other object. I know of no theory that actually requires an unambiguous sentence to necessarily have only one possible analysis. Whatever one adopts as the fine grained analysis of conjunction, a case like "Roses are red and violets are blue and lilies are white" will automatically have two possible bracketings which - under most assumptions of how the semantics works - will have the same truth conditions. This will be discussed more in Sec. 5. The key point here is that what is generally conveyed is that a sentence has as its analysis a single tree as opposed to a sequence of trees or some other object.

[4] Thus Collins 2015 proposed that relative clauses can delete.

[5] The rules could be such that the 'structure' assigned to a string of the form X and X is [[X and] X], or the rules could end up analyzing this as [X [and X]]. Both Munn 1993 and Jacobson 2014 opt for rules of the latter type, as these would follow from other general facts about English word order.

References

Adger, D. 2003. Core Syntax: A Minimalist Approach. Oxford: Oxford University Press. 

Bach, E. 1979.  "Control in Montague Grammar".  Linguistic Inquiry 10,  515-31.

Baker, C.L. 1978.  Introduction to generative transformational syntax. Englewood Cliffs, NJ: Prentice-Hall.  

Carnie, A. 2021.  Syntax: A Generative Introduction,  4th edition. Malden MA: Wiley Blackwell. 

Collins, C. 2015. Relative Clause Deletion. In Ángel J. Gallego and Dennis Ott (eds.), 50 Years Later: Reflections on Chomsky's Aspects. Vol. 77 of MIT Working Papers in Linguistics, Cambridge, MA, 57-69.

Dawson, H. and M. Phelan (eds.) and Dept. of Linguistics, Ohio State University, 2016. Language Files: Materials for an Introduction to Language and Linguistics 12th edition.  Columbus: Ohio State University Press. 

Dowty, D. 1988.  "Type Raising, Functional Composition, and Non-Constituent Conjunction".  In R. Oehrle, E. Bach, and D. Wheeler (eds.),  Categorial Grammars and Natural Language Structures. Dordrecht: Reidel, 153-197.

Fromkin, V. et al., 2000.  Linguistics: An Introduction to Linguistic Theory.  Malden MA: Blackwell Publishing.

Gazdar, G., E. Klein, G.K. Pullum and I. Sag 1985.  Generalized Phrase Structure Grammar.  Oxford: Basil Blackwell.

Ginzburg, J. and I. Sag 2000.  Interrogative Investigations: The form, meaning, and use of English interrogatives.  Stanford CA: CSLI Publications. 

Groenendijk, J. and M. Stokhof 1984.  Studies in the semantics of questions and the pragmatics of answers.  Amsterdam: University of Amsterdam Dissertation.  

Jacobson, P. 1999.  "Toward a Variable Free Semantics".  Linguistics and Philosophy 22,  117-185.

Jacobson, P. 2014.  Compositional Semantics: An Introduction to the Syntax/Semantics Interface.  Oxford: Oxford University Press.

Jacobson, P. 2016. "The Short Answer: Implications for Direct Compositionality and Vice-Versa".  Language 92,  331-375.   

Kadmon, N. and F. Landman 1993.  "Any".  Linguistics and Philosophy 16, 353-422.

Kim, J-B and P. Sells 2008.  English Syntax: An Introduction.  Stanford: CSLI Publications.  

Koeneman, O. and H. Zeijlstra 2017.  Introducing Syntax. Cambridge (UK): Cambridge University Press.  

Larson, R. 2010.  Grammar as Science.  Cambridge MA: MIT Press.

Merchant, J. 2004.  "Fragments and ellipsis".  Linguistics and Philosophy 27, 661-738. 

Montague, R. 1970.  "English as a formal language".  In B. Visentini (ed.), Linguaggi nella Società e nella Tecnica. Milan: Edizioni di Comunità, 189-224.

Munn, A.  1993.  Topics in the syntax and semantics of coordinate structures. Doctoral dissertation, University of Maryland, College Park.

Napoli, D.J. 1993.  Syntax: Theory and Problems.  Oxford: Oxford University Press.

O'Grady, W. et al. 2017.  Contemporary Linguistics: An Introduction, 7th edition. Boston/New York: Bedford/St. Martin's Press.

Pollard, C. 1984.  Generalized Phrase Structure Grammars, Head Grammars, and Natural Language.  Ph.D. Dissertation, Stanford University.

van Riemsdijk, H. and E. Williams,  1986.  Introduction to the Theory of Grammar. Cambridge MA: MIT Press. 

Soames, S. and D. Perlmutter 1979.  Syntactic Argumentation and the Structure of English. Berkeley CA: University of California Press.

Steedman, M. 1987. "Combinatory grammars and parasitic gaps".  Natural Language and Linguistic Theory 5, 403-439.







Comments:

  Paul Hirschbühler, June 15, 2023 at 5:25 AM

    Hello Polly,

    Regarding do so, and its French equivalent le faire (en faire autant, faire pareil, etc.), the best reference is Philip H. Miller:

    Miller, Philip 1992. Clitics and Constituents in Phrase Structure Grammar. Thesis. Chap. 3, The Proform Criterion.

    Miller, Philip 1990 Pseudo-gapping and do so substitution. CLS 26.1, 293-305
    (basically his 1992, but developed).

    The CLS volume is available at the address below:
    https://drive.google.com/.../1o09xbZ2fsvXmxTDe0P84hPlcoop...

    I am sending you copies of chapters of a never completed textbook of Intro to French Syntax (L'universel et le particulier dans la langue) by Marie Labelle and me. I started it in the mid-80s when I was teaching the French Syntax course, and Marie joined me when she also taught that course at UQAM (U. du Québec à Montréal) in the 90s. After we retired, I tried to update the text but got discouraged. I am sending you chapters 3, 4 and 6, where we repeat over and over again that the fact that a string does not show constituent behaviour according to the tests does not prove that it is not a constituent, because other factors may prevent it from demonstrating constituent behaviour, and sometimes we indicate (at least descriptively) why we think that it behaves so (ahah, 'so' again). I have highlighted the relevant pages.

    Paul Hirschbühler

