Saturday, September 20, 2014

Subjects as specifiers

While reading Syntactic Theory by Sag, Wasow, & Bender, I once again ran across the idea (mentioned previously here) that the subject of a clause (where Clause = Subj + VP) is analogous to the Specifier of an NP (where NP = Specifier + NOM). Here, however, the example given in (34) on p. 64 unambiguously compares a clause with an NP:
  1. We created a monster
  2. our creation of a monster
They use this idea in their constraint-based grammar to reduce the number of rules needed. Thus, instead of defining NP one way and Clause another (they use S, but I'll stick with Clause), they can use one feature specification to capture both, as in figure 1:

Figure 1. A feature structure representing either Clause or NP.


(At this point, things get very technical.)

This formalization defines a phrase which can be a clause or an NP (or potentially other things). The 1 (or indeed any number) is a tag that can be shared across feature specifications. (When these tags are used, the value they represent is shared across all feature specifications using the same tag. The authors make a point of saying that these values are not "inherited", nor do they "percolate" up; there is no directionality to them.) The value of the HEAD feature may be any lexical category (e.g., noun, verb, adjective, etc.). The VAL feature is the valence, which, at this stage in the grammar, has two features from the valence categories (val-cat): COMP(lement)S and SP(ecifie)R. The possible values of COMPS are: intransitive (itr), strict transitive (str), and ditransitive (dtr). SPR may be set to + or -, where - means "categories need a specifier on their left" (but do they always "need" one?) and + means they "do not, either because they label structures that already contain a specifier or that just don't need one" (p. 64).
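The way these tags work can be pictured as shared references rather than copied values. Here's a rough analogy in Python (my own illustration, not the book's formalism; the dictionary keys are just labels I made up):

```python
# Structure sharing via a tag: the tag is a single shared object,
# not a value that is copied or "percolated" between structures.
shared_head = {"pos": None}  # the tagged value, not yet resolved

# Mother and daughter both point at the *same* HEAD object.
clause = {"HEAD": shared_head, "COMPS": "itr", "SPR": "+"}
vp = {"HEAD": shared_head, "COMPS": "itr", "SPR": "-"}

# Resolving the shared value anywhere resolves it everywhere;
# there is no direction of inheritance, only identity.
shared_head["pos"] = "verb"

print(clause["HEAD"]["pos"], vp["HEAD"]["pos"])  # verb verb
```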

Let's look first at the case where the feature structure represents a Clause. Clause is defined in this grammar by the feature structure shown above in the case where 1 = verb, making the HEAD feature = verb. Clauses do not take complements of any kind, so COMPS = itr (intransitive). The clause will have a subject, which is where the similarity with NPs comes in. The subject is considered to be a Specifier within the clause. Thus, SPR = +.
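Spelled out in the book's attribute-value-matrix style, the Clause instantiation would look something like this (my own LaTeX rendering, not copied from the text):

```latex
\begin{bmatrix}
\textit{phrase} \\
\mathrm{HEAD} & \mathrm{verb} \\
\mathrm{VAL} & \begin{bmatrix}
\textit{val-cat} \\
\mathrm{COMPS} & \mathrm{itr} \\
\mathrm{SPR} & +
\end{bmatrix}
\end{bmatrix}
```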

In this case, the daughter node would be a VP defined as the following feature structure where 1 = verb:

Figure 2. A feature structure representing a VP.


The only difference between the feature structures for Clause and VP is that the SP(ecifie)R value has changed from + to -. That is to say, in a Clause the specifier is present, while in a VP it is absent; there is no subject. VPs, like Clauses, don't take complements (though they contain them, as V + COMP), so again COMPS = itr.

Now let's consider the case of the feature structure in figure 1 where the HEAD feature = noun. Here, again, the value of HEAD is shared by the daughter node, which will be either NOM or N. Like Clauses, NPs do not take complements (for now, we'll assume they don't take any kind of complement), so COMPS = itr. An NP either has the Specifier it needs or it doesn't need one, so SPR = +.

Here the daughter node would be either an N or a NOM. Let's consider the second case, for which the feature structure is the same as that in figure 2. The only difference from a VP is that 1 = noun, not verb. Like VPs, NOMs don't take any kind of complement. And also like VPs, NOMs need a Specifier, so SPR = -. (Again, "need" seems to overstate things, but this may be dealt with later.)
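So the NOM instantiation (again, my own rendering rather than the book's) differs from the Clause one above only in the values under HEAD and SPR:

```latex
\begin{bmatrix}
\textit{phrase} \\
\mathrm{HEAD} & \mathrm{noun} \\
\mathrm{VAL} & \begin{bmatrix}
\textit{val-cat} \\
\mathrm{COMPS} & \mathrm{itr} \\
\mathrm{SPR} & -
\end{bmatrix}
\end{bmatrix}
```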

So, I think they make a very good case for this analysis at this stage in the development of their grammar. I'm interested to see how this pans out when we consider more complex types of complementation, or verbless clauses.

PS, in the rest of the book, type names such as "val-cat" are often omitted (e.g., p. 65 (37)) but sometimes included (e.g., p. 68 (45)). I wondered whether this had any meaning or whether it was simply to save space and reduce clutter, so I emailed Tom Wasow, one of the surviving authors, and asked. He got back to me in under an hour saying that indeed it was simply a typesetting decision.


References
Sag, I. A., Wasow, T., & Bender, E. M. (2003). Syntactic theory: A formal introduction (2nd ed.). Stanford, CA: Center for the Study of Language and Information.

The book uses specifier for the function, and I'm inclined to do so as well in this blog, instead of determiner, which is used by CGEL.
This is my first ever attempt to use TeX to do layout, so it took me about three hours of experimenting to get the diagram to this point, but there are still a few problems. If you know how to fix these, please leave me a comment:
  • VAL and SPR should be left aligned
  • The 1 should be small and enclosed in a square
  • The minus sign in val-cat should be a dash without spaces around it.  
Here's my code:

\begin{bmatrix}
phrase \\
\mathrm{HEAD} & 1\\
\mathrm{VAL} & \begin{bmatrix}
\mathit{val-cat} \\
\mathrm{COMPS} & \mathrm{itr} \\
\mathrm{SPR} & -
\end{bmatrix}
\end{bmatrix}
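One way the three issues might be fixed (a sketch I haven't tested in this blog's renderer; it assumes support for array, \fbox, and \textit, as in standard LaTeX with amsmath): a left-aligned array instead of bmatrix handles the alignment, \fbox with \scriptsize gives a small boxed tag, and \textit sets the hyphen in val-cat as a text hyphen rather than a math minus:

```latex
\left[\begin{array}{ll}
\textit{phrase} \\
\mathrm{HEAD} & \fbox{\scriptsize 1} \\
\mathrm{VAL} & \left[\begin{array}{ll}
\textit{val-cat} \\
\mathrm{COMPS} & \mathrm{itr} \\
\mathrm{SPR} & -
\end{array}\right]
\end{array}\right]
```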
