Friday, May 31, 2013

Blame the linguists!

Pullum has let me down. His latest NLP lament isn’t nearly as enraging or baffling as his previous ones.

I basically agree with his points about statistical machine translation. I even agree with his overall point that contemporary NLP is mostly focused on building commercial tools, not on mimicking human language processes.

But Pullum offers no way forward. Even if you agree 100% with everything he says, re-read all four of his NLP laments (one, two, three, four) and ask yourself: What’s his solution? His plan? His proposal? His suggestion? His hint? He offers none.

I suspect one reason he offers no way forward is that he misdiagnoses the cause: he blames commercial products for distracting researchers from doing *real* NLP.

His basic complaint is that engineers haven’t built real NLP tools yet because they haven’t used real linguistics. This is like complaining that drug companies haven’t cured Alzheimer’s yet because they haven’t used real neuroscience. Uh, nope. That’s not what’s holding them back. What’s holding them back is a deep lack of understanding of how the brain works, a hill that’s yet to be climbed. Doctors are trying to understand it, but they’re just not there yet.

He never addresses the fact that linguists have failed to provide engineers with a viable blueprint for *real* machine translation, *real* speech recognition, or *real* Q&A. Sorry, Geoff. The main thing discouraging the development of *real* NLP is the failure of linguists, not engineers. Linguists are trying to understand language, but they’re just not there yet.

Pullum and Huddleston compiled a comprehensive grammar of the English language. Does Pullum believe that work is sufficient to construct a computational grammar of English? One that would allow for question answering of the sort he yearns for? The results would surely be peppered with at least as many howlers as Google Translate. If his own comprehensive grammar of English is insufficient for NLP, then what does he expect engineers to use to build *real* NLP?

It’s not that I don’t like the idea of rule-based NLP. I bloody love it. But Pullum acts like it doesn’t exist, when in fact, it does. The LinGO grammar project is a good example. But even that project is not commercially viable.

One annoying side point worth repeating: Pullum repeatedly leads his readers towards a false conclusion: that Google is representative of NLP. Yes, Google is heavily invested in statistical machine translation, but there exist syntax-based translation tools that use tree structures, dependencies, known constructions, and yes, even semantics. Pullum fails to tell his readers about this. In fact, most contemporary MT systems tend to be hybrids, combining rule-based approaches with statistical approaches.

In Pullum's defense (sort of), I like big re-thinks (MIT tried a big AI re-think, though it's not clear what has come of it). But Pullum hasn't engaged in any big re-thinking. He makes zero proposals. Zero.

One bit of fisking I will add:
"Machine translation is the unclimbed Everest of computational linguistics. It calls for syntactic and semantic analysis of the source language, mapping source-language meanings to target-language meanings, and generating acceptable output from the latter. If computational linguists could do all those things, they could hang up the 'mission accomplished' banner."
How does translation work in the brain, Geoff? It is not at all clear how bilinguals perform syntactic and semantic analysis of the source language, map source-language meanings to target-language meanings, and generate acceptable output. Contemporary psycholinguistics cannot state with a high degree of certainty whether bilinguals store the words of their two languages together or separately, let alone explicate the path Geoff sketches out. Even if it is true that bilinguals translate the way Pullum suggests, it is also true that linguists cannot currently provide a viable blueprint of this process such that engineers could use it to build a *real* NLP machine translation system.
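To see just how far a "blueprint" would have to go, here is a deliberately toy sketch of the analyze-transfer-generate pipeline Pullum describes. The mini-lexicon and the single reordering rule are invented for illustration; a *real* system would need full parsing, word-sense disambiguation, and actual semantics at every step, which is exactly what linguists have not yet handed engineers.

```python
# Toy transfer-based MT pipeline: analyze -> transfer -> generate.
# The English->Spanish lexicon and the one reordering rule below are
# invented for demonstration only.

LEXICON = {"the": "el", "black": "negro", "cat": "gato", "sleeps": "duerme"}

def analyze(sentence):
    """Crude stand-in for syntactic analysis: lowercase and tokenize."""
    return sentence.lower().split()

def transfer(tokens):
    """Map source words to target words, then apply one hard-coded
    reordering rule (English ADJ NOUN -> Spanish NOUN ADJ)."""
    words = [LEXICON.get(t, t) for t in tokens]
    for i in range(len(words) - 1):
        if words[i] == "negro" and words[i + 1] == "gato":
            words[i], words[i + 1] = words[i + 1], words[i]
    return words

def generate(words):
    """Crude stand-in for generation: join, capitalize, punctuate."""
    return " ".join(words).capitalize() + "."

print(generate(transfer(analyze("The black cat sleeps"))))
# -> El gato negro duerme.
```

The toy works only because every rule is hand-written for one sentence; scaling those rules to all of English is the unclimbed Everest.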

And that's what I have to say about that.

1 comment:

Anonymous said...

I think the reason why Pullum was so angry is this comment below the article that was linked:

It has long been noted by speech recognition researchers that increases in performance over the years have mostly been due to increased computing power, rather than better understanding of linguistics. I believe it was Fred Jelinek, the head of speech recognition research at IBM, who noted that every time a linguistic-based module was replaced by a statistical module, performance increased. He was quoted as saying, "Every time I fire a linguist, the performance of the recognizer goes up."

Like you mentioned, that HPSG stuff doesn't seem to be as well-funded as anything the big-tech people are doing.