Friday, July 8, 2011

"the definition of “metaphoricity” is problematic in itself"

One of the metaphor recognition papers I read this week had an interesting finding wrt inter-annotator agreement and metaphor: The Automatic Identification of Conceptual Metaphors in Hungarian Texts: A Corpus-Based Analysis (Babarczy et al., LREC 2010 Workshop).

The purpose of the paper was to run a sort-of bake-off between three methods of compiling source/target word lists (to be used by a selectional preference metaphor recognition system): a) a word association experiment, b) a dictionary of synonyms, and c) a reference corpus.
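To make the third option concrete, here is a minimal sketch (mine, not the authors' actual procedure) of what a corpus-based word-list compilation might look like: collect words that co-occur with a few seed terms for a domain in a reference corpus. The seed words, the window (a whole sentence), and the frequency threshold are all invented for illustration.

```python
from collections import Counter

# Toy sketch of a corpus-based approach to compiling a domain word list:
# count words that co-occur (in the same sentence) with a few seed terms.
# Seed words, corpus, and threshold are invented; this is not the authors'
# actual procedure. A real system would also filter stop words.
def harvest_domain_words(sentences, seed_words, min_count=2):
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.lower().split()
        if any(seed in tokens for seed in seed_words):
            counts.update(t for t in tokens if t not in seed_words)
    return sorted(word for word, n in counts.items() if n >= min_count)

corpus = [
    "the fire spread and the flames consumed the dry forest",
    "firefighters fought the blaze as flames lit the night",
    "a small fire smoldered in the grate",
]
print(harvest_domain_words(corpus, {"fire", "blaze"}))
# ['flames', 'the'] -- 'the' shows why stop-word filtering matters
```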

Ultimately they found that their corpus-based method was most successful as measured by recall/precision, but there was a more striking result rather buried in the paper that I feel deserves more analysis. They created a gold standard by hand-tagging a 30,000 word "baseline" corpus. Here's what they found:

At the first attempt, inter-annotator agreement was only 17%. After refining the annotation instructions, we made a second attempt, which resulted in an agreement level of 48%, which is still a strikingly low value. These results indicate that the definition of “metaphoricity” is problematic in itself [emphasis added].

They reported three general sources of inter-annotator DISagreement:
  • Direct vs. Indirect Reference: For example, in the case of the conceptual metaphors ANGER IS HEAT or CONFLICT IS FIRE, the source domain should be an expression referring to a sort of “heated thing”. However, in some cases, one or the other annotator included words indirectly suggesting the presence of heat, such as kiolt ('extinguish'), kihűl ('get cold'), etc.
  • Lexical Ambiguity: For example, the expression eljutottam a mai napig ('I've gotten to this day') may or may not represent a CHANGE IS MOTION metaphor depending on whether the Hungarian verb jut (literally: get somewhere, reach a place by moving the entire body) is taken only to denote physical movement or to be ambiguous.
  • Discrepancies in Classification: ...it is difficult to make an informed decision on whether the following example contains a CHANGE IS MOTION or a PROGRESS IS MOTION FORWARD metaphor, neither of which appear to be an intuitively correct choice: a járvány végigsöpört szülővárosukon ('the epidemic swept through their hometown').
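As an aside, raw percentage agreement like the 17% and 48% figures above is also hard to interpret without a chance-corrected measure. Here is a minimal sketch, with invented tag sequences (not a reconstruction of the paper's data), of raw agreement versus Cohen's kappa for two annotators tagging tokens as metaphorical or literal.

```python
# Toy sketch: raw agreement vs. chance-corrected agreement (Cohen's kappa)
# for two annotators tagging tokens as metaphorical (1) or literal (0).
# The tag sequences are invented; the paper reports only percentage
# agreement, so this is not a reconstruction of their numbers.
def raw_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    observed = raw_agreement(a, b)
    labels = set(a) | set(b)
    expected = sum((a.count(l) / len(a)) * (b.count(l) / len(b)) for l in labels)
    return (observed - expected) / (1 - expected)

annotator_1 = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]
annotator_2 = [0, 1, 1, 0, 0, 0, 1, 0, 1, 0]
print(raw_agreement(annotator_1, annotator_2))  # 0.7
print(cohens_kappa(annotator_1, annotator_2))   # ~0.35
```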

Of the four or five articles I've reviewed on automatic metaphor identification, this is the only one which reported on the results of human-tagging a corpus for metaphor. This strikes me as the sort of thing that should be a first step for anyone seriously interested in this problem (certainly anyone interested in the IARPA Metaphor Program). I don't doubt that others have done this, but it seems to be under-reported, suggesting it is not being treated as a core part of the problem.

I've complained in my previous posts that there is an overly restricted definition of metaphor underlying contemporary approaches to automatic identification, but even within highly restricted definitions like those used by Babarczy et al. and others, there appear to be problems at the heart of identification for humans. So what exactly is being identified?

Anna Babarczy, Ildikó Bencze M., István Fekete, & Eszter Simon (2010). The Automatic Identification of Conceptual Metaphors in Hungarian Texts: A Corpus-Based Analysis. LREC 2010 Workshop Proceedings.

Thursday, July 7, 2011

more on auto metaphor recognition methods

A quick follow-up to my previous post on automatic metaphor recognition wrt the IARPA Metaphor Program. The paper Automatic Metaphor Recognition Based on Semantic Relation Patterns by Tang et al. challenges the dominant selectional preferences method by substituting their own Semantic Relation Patterns. They point out the problems with Selectional Preferences (unfortunately, I don't think they solved the problems with their own method; more on that in a bit).

Again I'll give the Ling 101, computational linguistics for dummies version (as I understand it ...): Selectional Preferences assumes that words frequently co-occur with other words that are literally associated with the same semantic domain. For example,
  1. That ship has sailed the mighty ocean.
  2. That boat has sailed across Lake Erie.
  3. That captain has sailed many seas.
In these three sentences, the verb sailed occurs with three different subjects (ship, boat, captain) and three different objects (ocean, lake, seas), but all of them evoke the SAILING domain. So a computer could use this info to create a model of the verb sail that would match up the semantics of its expected subjects and objects, then compare them to a new sentence. If the computer encountered the new sentence

    4. That student sailed through final exams.

it could automatically use the model created from sentences 1-3 above to recognize that the verb sailed occurs with a subject and object not from the SAILING domain, but rather from the STUDENT domain. Then it could use a metaphor mapping component to recognize that HUMANS as MACHINES is an acceptable mapping and thus recognize that #4 might be coherent under a metaphorical interpretation.
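Here is a minimal sketch of that idea in code. To be clear, this is my toy illustration, not the implementation from any of the papers: the domain word lists are invented, and the verb's preferences are hand-coded rather than learned from sentences 1-3.

```python
# Toy sketch of the selectional-preference idea described above; the domain
# word lists are invented and the preferences for "sail" are hand-coded
# rather than induced from a corpus.
DOMAINS = {
    "SAILING": {"ship", "boat", "captain", "ocean", "lake", "sea", "seas"},
    "STUDENT": {"student", "exam", "exams", "teacher", "class"},
}

# What the verb "sail" expects of its subject and object (from sentences 1-3).
SAIL_PREFERENCES = {"subject": "SAILING", "object": "SAILING"}

def domain_of(word):
    for domain, words in DOMAINS.items():
        if word.lower() in words:
            return domain
    return None

def flag_possible_metaphor(subject, verb, obj):
    """Flag a sentence whose arguments violate the verb's preferred domains."""
    if verb != "sail":
        return False
    return (domain_of(subject) != SAIL_PREFERENCES["subject"]
            or domain_of(obj) != SAIL_PREFERENCES["object"])

print(flag_possible_metaphor("ship", "sail", "ocean"))     # False: literal
print(flag_possible_metaphor("student", "sail", "exams"))  # True: metaphor candidate
```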

Tang et al. rightly point out that matching frequency-based selectional preferences is not the same thing as literal meaning. First, they note that sometimes a metaphorical pairing is actually MORE FREQUENT than a literal pairing. They use some Chinese examples, but I think the English translation makes the point. Take the following two uses of close:
  • The plane is close to the tower.
  • Opinions are close.
In their corpus, Chinese uses like 'opinions are close' were more frequent, even though this is a non-literal use of close. Frequency would lead the Selectional Preference method to believe that the opinions-type use is literal simply because it is more frequent. This outcome is predicted by Lakoff & Johnson, btw, because one of the core tenets of their seminal work on metaphor was that metaphors are NOT special uses of language, but rather quite common and normal.
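A toy illustration of the point (with made-up counts, not Tang et al.'s data): if preferences are induced purely from co-occurrence frequency, the metaphorical pairing wins and ends up looking like the "literal" preference.

```python
from collections import Counter

# Made-up co-occurrence counts for the adjective "close"; the metaphorical
# pairing happens to be the most frequent, so a purely frequency-based
# preference learner would treat it as the preferred (i.e., literal) use.
pairs = (
    [("opinions", "close")] * 40   # metaphorical, but frequent
    + [("plane", "close")] * 12    # literal, but rarer
    + [("car", "close")] * 8
)
counts = Counter(pairs)
most_frequent_pair, freq = counts.most_common(1)[0]
print(most_frequent_pair, freq)  # ('opinions', 'close') 40 -- wins on frequency alone
```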

Tang et al.'s solution is a new method they call Semantic Relation Patterns. Their explanation is brief and highly technical, making it a slog to get through, but it hinges on incorporating an existing semantic relations knowledge base, HowNet, and adding a probabilistic model. Note, I had trouble getting the HowNet website to load, but here is a PDF explanation.

HowNet is an on-line common-sense knowledge base unveiling inter-conceptual relations and inter-attribute relations of concepts as connoting in Chinese and English bilingual lexicons.

In my quick read, the two methods differed only minimally in the crucial ways (namely, they are both lexicalist and local). Semantic Relation Patterns are still based on lexical semantics and still derived entirely locally. I don't see how SRP would handle this metaphor from my earlier post any better than SP:

Imagine a situation in a biology class where two students, Alger and Miriam, were originally going to be partners for a lab assignment. Then they got into an argument. A third student, Annette, asks Miriam:
  • Annette: Are you still going to be lab partners with Alger?
  • Miriam: No. That ship has sailed.
In this scenario, the sentence "That ship has sailed" is entirely coherent and literal from a selectional preferences perspective (i.e., ships really do sail). Yet it is clearly being used metaphorically (there is literally no ship). Here, the metaphor is only detectable if we link two sentences together via co-reference. The phrase "the ship" does not co-refer to a real ship in the discourse. Rather, it refers to the possible event of be-lab-partners-with-Alger. Unless we can link phrases between sentences and between types (i.e., allowing an NP to co-refer to an event), then we are not going to get a computer to recognize these types of metaphors (which I suspect are quite common).

I appreciate Tang et al.'s critique of the SP method and their attempt to get beyond it, but I think their methodology fails to make the critical improvements to automatic metaphor recognition that will be crucial to creating a full scale tool that handles real world metaphor.


Xuri Tang, Weiguang Qu, Xiaohe Chen, & Shiwen Yu (2010). Automatic Metaphor Recognition Based on Semantic Relation Patterns. International Conference on Asian Language Processing.

Tuesday, July 5, 2011

the big picture: automatic metaphor identification

The recently popularized IARPA Metaphor Program piqued my curiosity, so I've been reviewing a variety of articles on contemporary approaches to automatic metaphor identification. I've read three articles so far and one thing is somewhat disappointing: they all severely restrict the notion of metaphor to mean local metaphors within single sentences.

They all pay considerable lip service to Lakoff & Johnson's seminal 1980 work Metaphors We Live By, taking as gospel the notion that metaphor is defined as a mapping from one conceptual domain to another. But their examples are all of a limited type. Here are three representative examples from the papers I've been reading:
  • Achilles was a lion. (Babarczy et al.)
  • The sky is sad. (Tang et al.)
  • I attacked his arguments. (Baumer)
What struck me is that the methods used to identify metaphor are remarkably lexicalist. The dominant strategy is Selectional Preferences, whereby a list of source and target conceptual domains is created. Then, for each domain, a list of words typically associated with it is culled from corpora, intuition, or dictionaries. Then, each word is given a set of selectional preferences which constrain what kinds of subjects or predicates it typically occurs with.

Here is my Ling 101 version of this methodology: If I understand correctly (and I may not), for Tang et al.'s example "The sky is sad", we would have a concept like THE ENVIRONMENT IS HUMAN. We would have a list of words typically associated with the environment (e.g., "sky") and a list of words typically associated with being human (for example "sad"). A computer could then recognize the following:
  1. The subject (the sky) is associated with the environment.
  2. The predicate (sad) is associated with humans.
  3. This subject (the sky) is not typical for this predicate (sad).
  4. This sentence is incoherent on first analysis.
  5. The concept THE ENVIRONMENT IS HUMAN links these non-typical phrases coherently.
  6. This sentence is only coherent using conceptual mapping, therefore it is probably metaphorical.
This is a gross oversimplification, but I think it gets the big picture about right.
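For the sake of concreteness, here is a toy sketch of steps 1-6 in code. The word lists and the single hard-coded conceptual mapping are mine, invented for illustration; real systems would induce these from corpora, dictionaries, or elicitation.

```python
# Toy sketch of steps 1-6 above, with invented word lists and one hard-coded
# conceptual mapping; not a reconstruction of any paper's actual system.
DOMAIN_WORDS = {
    "ENVIRONMENT": {"sky", "sea", "wind", "forest"},
    "HUMAN": {"sad", "happy", "angry", "lonely"},
}
CONCEPTUAL_MAPPINGS = {("ENVIRONMENT", "HUMAN"): "THE ENVIRONMENT IS HUMAN"}

def domain_of(word):
    return next((d for d, words in DOMAIN_WORDS.items() if word in words), None)

def analyze(subject, predicate):
    subj_domain = domain_of(subject)    # step 1: domain of the subject
    pred_domain = domain_of(predicate)  # step 2: domain of the predicate
    if subj_domain == pred_domain:
        return "literal (no domain mismatch)"
    # steps 3-4: the subject is not typical for the predicate, so the
    # sentence is incoherent on a first, literal analysis
    mapping = CONCEPTUAL_MAPPINGS.get((subj_domain, pred_domain))
    if mapping:
        # steps 5-6: a known conceptual mapping links the phrases coherently,
        # so the sentence is probably metaphorical
        return "probable metaphor via " + mapping
    return "incoherent (no licensing mapping found)"

print(analyze("sky", "sad"))   # probable metaphor via THE ENVIRONMENT IS HUMAN
print(analyze("sky", "grey"))  # "grey" is in no word list, so no mapping is found
```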

At first blush, I'm impressed with the simplicity and elegance of this solution. However, it seems to me that much metaphorical language is not local like this (local here = within a single sentence). For example, imagine a situation in a biology class where two students, Alger and Miriam, were originally going to be partners for a lab assignment. Then they got into an argument. A third student, Annette, asks Miriam:
  • Annette: Are you still going to be lab partners with Alger?
  • Miriam: No. That ship has sailed.
In this scenario, the sentence "That ship has sailed" is entirely coherent from a selectional preferences perspective (i.e., ships really do sail). Yet it is clearly being used metaphorically (there is literally no ship). Here, the metaphor is only detectable if we link two sentences together via co-reference. The phrase "the ship" does not co-refer to a real ship in the discourse. Rather, it refers to the possible event of be-lab-partners-with-Alger. Unless we can link phrases between sentences and between types (i.e., allowing an NP to co-refer to an event), then we are not going to get a computer to recognize these types of metaphors (which I suspect are quite common).
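To make the contrast concrete, here is a toy sketch (entirely mine, with an invented discourse representation) of the extra machinery this would require: before trusting a local selectional-preference check, resolve the NP against the discourse and ask whether it actually picks out a concrete entity or an event.

```python
from dataclasses import dataclass

# Toy sketch of a discourse-level check; the tiny discourse model and the
# resolver are invented for illustration, not any paper's method.
@dataclass
class Referent:
    description: str
    kind: str  # "entity" or "event"

# Annette's question introduces an event referent, not a ship entity, and
# "that ship" in Miriam's reply resolves to it.
discourse = {
    "that ship": Referent("be-lab-partners-with-Alger", kind="event"),
}

def locally_literal(subject_head, verb):
    # The purely local check: ships really do sail, so this passes.
    return subject_head == "ship" and verb == "sailed"

def metaphor_suspected(np, subject_head, verb):
    antecedent = discourse.get(np)
    if antecedent is None or antecedent.kind != "entity":
        # The NP has no concrete entity antecedent: the "ship" is not a ship.
        return True
    return not locally_literal(subject_head, verb)

print(locally_literal("ship", "sailed"))                  # True: the local check misses the metaphor
print(metaphor_suspected("that ship", "ship", "sailed"))  # True: the discourse check catches it
```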


Xuri Tang, Weiguang Qu, Xiaohe Chen, & Shiwen Yu (2010). Automatic Metaphor Recognition Based on Semantic Relation Patterns. International Conference on Asian Language Processing.


Other citations:
Anna Babarczy, Ildikó Bencze M., István Fekete, & Eszter Simon (2010). The Automatic Identification of Conceptual Metaphors in Hungarian Texts: A Corpus-Based Analysis. LREC 2010 Workshop Proceedings.

Eric Baumer (2009). Computational Metaphor Identification to Foster Critical Thinking and Creativity. Dissertation.

Friday, July 1, 2011

the largest whorfian study EVER! (and why it matters)

Let me take the ball Mark Liberman threw on Monday and run with it a bit. Liberman posted a thorough discussion of Fausey and Boroditsky's neo-Whorfian finding that English and Spanish speakers remember causal agents differently. Specifically, he invited readers to carefully examine the methodology of the experiments themselves, and not just focus on the conclusions. It turns out that a few years ago another set of neo-Whorfians, Jürgen Bohnemeyer and company, published a paper that addressed similar methodological concerns:

Ways to go: Methodological considerations in Whorfian studies on motion events. (With S. Eisenbeiss and B. Narasimhan) Colchester: University of Essex, Department of Language and Linguistics (Essex Research Reports in Linguistics 50: 1-19). 2006.

This paper addressed experiments involving motion events like rolling and falling, whereas Fausey and Boroditsky's work addressed agentivity like breaking and popping, but there's enough overlap to warrant some comparison, particularly since the Bohnemeyer et al. paper specifically addresses methodology wrt Whorfian experiments.

But before I get into the details, let me state clearly why I think this is important. In other posts, I have dismissed popular lingo-topics like language evolution as outside the mainstream of linguistics because they don't bear directly on what I consider to be the center of the linguistics universe: how the brain does language. But linguistic relativity (aka the Whorfian hypothesis) is one of the great questions of linguistics and cognitive science precisely because it bears directly on the question of how the brain does language. And we're only just now developing the proper tools and methodologies to study the question with scientific rigor. It may turn out that language does not affect other cognitive processes, or that the effect is minor. I don't care. I just want to know one way or the other. And it's work like Bohnemeyer's and Boroditsky's that will lead us to knowing, eventually.

Now the fun stuff.
