Saturday, March 25, 2017


A huge issue in linguistic research is what I will call canalization. The basic problem is that, given whatever set of data and generalizations has been uncovered so far, it is very difficult to get off that track.

And if you do not get off that track, at most you discover details of and corrections to what has already been discovered. Of course, in a way this is the root of all linguistic progress. In reading a recent overview article on deletion, I was pleasantly surprised at the progress that has been made in that domain, essentially by making small incremental improvements to the sum total of what is already known. A similar process led to advances in our knowledge of locality, binding, ACD, etc.

However, this process makes it easy to overlook huge areas. This is one of the reasons why it took until 2012 for somebody to write a monograph on imposters, which is a huge topic with many empirical lines to investigate.

A related issue is dismissing some area as natural, or as following from X (X = semantics, pragmatics, processing, etc.), without first trying to work things out in detail and see what the constraints are. We got this a lot in response to the imposters book. E.g., "the reason that the pronoun is 1SG is that it has semantic agreement" — as if it were obvious, clear, and natural what semantic agreement is. This is also why non-generative approaches to analysis are so dangerous: they are quite willing to chalk things up to some functional principle without working through the system first.
