Monday, May 13, 2013

Pullum’s NLP Lament: More Sleight of Hand Than Fact

My first reading of both of Pullum’s recent NLP posts (one and two) interpreted them to be hostile, an attack on a whole field (see my first response here). Upon closer reading, I see Pullum chooses his words carefully, and it is less an attack than a lament. He laments that the high-minded goals of early NLP (to create machines that process language the way humans do) have not been reached, and, more to the point, that commercial pressures have distracted the field from pursuing those original goals, which are now neglected. And he’s right about this to some extent.

But he’s also taking the commonly used term "natural language processing" and insisting that it NOT refer to what 99% of the people who use the term mean by it, but only to a very narrow interpretation, something like "computer systems that mimic human language processing." This is fundamentally unfair.
In the 1980s I was convinced that computers would soon be able to simulate the basics of what (I hope) you are doing right now: processing sentences and determining their meanings.
I feel Pullum is moving the goal posts on us when he says “there is, to my knowledge, no available system for unaided machine answering of free-form questions via general syntactic and semantic analysis” [my emphasis]. Pullum’s agenda appears to be to create a straw-man NLP world where NLP techniques are only admirable if they mimic human processing. And this is unfair for two reasons.

One: Getting a machine to process language like humans is an interesting goal, but it is not necessarily a useful goal. Getting a machine to provide human-like output (regardless of how it gets there) is a more valuable enterprise.

Two: A general syntactic and semantic analysis of human language DOES. NOT. EXIST. To draw back the curtain hiding Pullum’s unfair illusion, I ask Pullum to explain exactly how HUMANS process his first example sentence:
Which UK papers are not part of the Murdoch empire?
Perhaps the most frustrating part of Pullum’s analysis so far is that he fails to point the blame where it more deservedly belongs: at linguists themselves. How dare Pullum complain that engineers at Google don’t create algorithms that follow "general syntactic and semantic analysis" when linguists themselves have failed to provide the world with a unified general syntactic and semantic analysis in the first place!

Ask Noam Chomsky, Ivan Sag, Robert van Valin, and Adele Goldberg to provide a general syntactic and semantic analysis of Pullum’s sentence and you will get four vastly different responses. Don’t blame Google for THAT! While commercial vendors may be overly focused on practical solutions, it is at least as true that academic linguists are overly focused on theory. Academic linguists rarely produce the sort of syntactic and semantic analyses that are useful (or even comprehensible … let alone UNIFIED!) to anyone outside a small group of devotees of their pet theory. Pullum is well known to be a fierce critic of such linguistic theory solipsism, but that view is wholly unrepresented in this series of posts.

In his more recent post, Pullum again insists that commercial NLP is tied to keyword searching, but this claim remains naïve. Pullum does his readers a disservice by glossing over the almost 70 years of research on information theory that underpins much of contemporary NLP.

Also, Pullum unfairly puts Google search at the center of the NLP world, as if it alone represented the wide array of tools and techniques that exist right now. This is more propaganda than fact. He does a disservice by not reviewing the immense value of n-gram techniques, dependency parsers, WordNet, topic models, etc.
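To give a sense of how humble even the simplest of these techniques is, here is a toy sketch (plain Python, no libraries, and not representative of any production system) of the most basic n-gram idea: counting adjacent word pairs in a corpus. Everything here, including the sample sentence, is illustrative only.

```python
from collections import Counter

# A tiny toy "corpus"; real n-gram models are trained on billions of words.
text = "the dog chased the cat and the dog barked"
tokens = text.split()

# Bigrams are just adjacent token pairs; Counter tallies their frequencies.
bigrams = Counter(zip(tokens, tokens[1:]))

print(bigrams.most_common(1))  # [(('the', 'dog'), 2)]
```

Frequencies like these, scaled up to web-sized corpora, are what power everything from autocomplete to language identification, without any "general syntactic and semantic analysis" at all.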

When he laments that Google search doesn’t "rely on artificial intelligence, it relies on your intelligence", Pullum also fails to relate the lessons of Cycorp and the Semantic Web community, which have spent hundreds of millions of dollars and decades trying to develop smart artificial intelligence approaches with comparatively little success (compared to the epic-scale success of Google et al.). In this, Pullum stacks the deck. He laments the failure of NLP to include AI without reviewing the failure of AI to enhance NLP.

I actually agree that business goals (like those of Google) have steered NLP in certain directions away from the goal of mimicking human language, but to dismiss this enterprise as a failure is unfair. It may be that NLP does not mimic humans, but until [we] linguists provide engineers with a unified account of human language, we can hardly complain that they go looking elsewhere for inspiration.

And for the record, there does exist exactly the kind of NLP work that attempts to incorporate more human-style understanding (for example, this). But boy, it ain’t easy, so don’t hold your breath Geoff.

If Geoff has some free time in June, I recommend he attend The 1st Workshop on Metaphor in NLP 2013.


Anonymous said...

"One: Getting a machine to process language like humans... "

We may not need a machine to mimic human language behavior. The telescope is a good example for me; whatever the original goal was, the telescope delivered a tool that extended the functionality of the eye. That seems pretty consistent with the history of technology... tools/machines don't so much mimic as extend human ability. In this regard, NLP has done a lot (and Pullum does not give credit where credit is due). Never before has it been so easy to parse millions of words from hundreds of texts in a few minutes; and in the hands of an astute analyst, arguments/claims can easily be (dis)confirmed -- I'm thinking especially of some of the blog posts by Mark Liberman about Presidential speeches.

I sympathize with Pullum about many things. But I think it's also about perspective. The logic-oriented rule-based approach had a crack at things, and now the statistical approaches are having their moment. It's a nice dance and things will come back 'round. Many people, including Wavii -- just bought by Google -- are realizing that ontologies and the contextual classification of event/situation semantics are really important... ontologies are generally rule-based.

I agree too, linguists are to blame for inconsistent theories. I think it's worse than not being able to provide a consistent view... at times it feels like nobody wants a consistent theory.

In my first year of grad school, after reading "The Vastness of Natural Languages" (1984), I got advice from Paul Postal not to push too hard, in my largely Minimalist theoretical courses, on the statistical approaches his book had inspired me to pursue. He was right, too, and it is sad that a theory and its practitioners were so far gone that it was a bad career move to push for alternative approaches that aligned too closely with the "enemy". It's a big reason why I left academia.

Anonymous said...

How come you do not mention IBM Watson as a success?

That group changed their whole thinking to make it happen. Before Watson, academics were mainly performing a fancy editorial task on retrieval results; that was the main loop in QA.

CMU, IBM, and Cycorp all contributed to making it happen.

You should definitely not go anywhere without mentioning this as a success in NLP.

Yes, it's a pragmatic one.
But still, it's a success.

Chris said...

Anonymous: I agree that IBM's Watson is a good example. Several commenters on Pullum's original post mentioned it, so I guess I didn't feel the need.

Chris said...

Joshua, I think we're going to see a resurgence of rule-based approaches as we begin to reach the limit of what corpora can tell us. I'm not convinced corpora are going to give us pragmatic inferences or metaphor, but I'm open-minded. I'm also very curious to see if Liberman responds to Pullum. They've known each other a long time.

Anonymous said...

A dialogue between Pullum and Liberman would be good for everyone. I also think Peter Norvig has a lot to offer in his response.

I don't see either approach ("statistical", "logical") winning out over the long haul. A hybrid theory will hopefully emerge that the majority can rally around. And some people are thinking in these terms... for example, Noah Goodman and the Church programming language.

Chris said...

Joshua, yeah that Norvig piece is legend, for sure. But I remain baffled by what Pullum's general point or goal is. Hopefully his next post on Monday will clarify.
