Tuesday, January 25, 2011

Obama's State Of The Union and word frequency

In anticipation of President Obama's 2011 State Of The Union speech tonight, and the inevitable bullshit word frequency analysis to follow, I am re-posting my post from last year's SOTU reaction, in the hope that maybe, just maybe, some political pundits might be slightly less stupid than they were last year ... sigh ... here's to hope ...

(cropped image from Huffington Post)

It has long been a grand temptation to use simple word frequency* counts to judge a person's mental state. Like Freudian slips, there is an assumption that this will give us a glimpse into what a person "really" believes and feels, deep inside. This trend came and went within linguistics when digital corpora were first being compiled and analyzed several decades ago. Linguists quickly realized that this was, in fact, a bogus methodology when they discovered that many (most) claims or hypotheses based solely on a person's simple word frequency data were easily refuted upon deeper inspection. Nonetheless, the message of the weakness of this technique never quite reached the outside world, and word counts continue to be cited, even by reputable people, as a window into the mind of an individual. Geoff Nunberg recently railed against the practice here: The I's Don't Have It.

The latest victim of this scam is one of the blogging world's most respected statisticians, Nate Silver, who performed a word frequency experiment on a variety of U.S. presidential State Of The Union speeches going back to 1962 HERE. I have a lot of respect for Silver, but I believe he's off the mark on this one. Silver leads into his analysis by talking about his own pleasant surprise at the fact that the speech demonstrated "an awareness of the difficult situation in which the President now finds himself." Then, he justifies his linguistic analysis by stating that "subjective evaluations of Presidential speeches are notoriously useless. So let's instead attempt something a bit more rigorous, which is a word frequency analysis..." He explains his methodology this way:

To investigate, we'll compare the President's speech to the State of the Union addresses delivered by each president since John F. Kennedy in 1962 in advance of their respective midterm elections. We'll also look at the address that Obama delivered -- not technically a State of the Union -- to the Congress in February, 2009. I've highlighted a total of about 70 buzzwords from these speeches, which are broken down into six categories. The numbers you see below reflect the number of times that each President used the term in his State of the Union address.
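To make the method concrete: the tally Silver describes amounts to little more than the following sketch (the buzzwords and categories here are invented placeholders, not his actual lists).

```python
from collections import Counter
import re

# Hypothetical buzzword categories -- invented for illustration;
# these are NOT Silver's actual 70 terms or his six categories.
CATEGORIES = {
    "economy": {"jobs", "deficit", "taxes"},
    "security": {"war", "terrorism", "troops"},
}

def buzzword_counts(speech_text):
    """Tally raw occurrences of each buzzword in one speech."""
    tokens = re.findall(r"[a-z']+", speech_text.lower())
    counts = Counter(tokens)
    return {category: {word: counts[word] for word in words}
            for category, words in CATEGORIES.items()}

print(buzzword_counts("We will create jobs, cut the deficit, and support our troops."))
```

Note that nothing in such a tally controls for speech length, the news of the year, or who actually drafted the text, which matters for everything that follows.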

The comparisons and analysis he reports are bogus and at least as "subjective" as his original intuition. Here's why:
  1. We don't know what causes word frequencies.
  2. We don't know what the effects of word frequencies are.
  3. His sample is skewed.
  4. Silver invented categories that have no cognitive reality.
  5. There are good alternatives.

We don't know what causes word frequencies.
Why does a person use one word more than another? WE. DON'T. KNOW. I understand the simple intuition that this should mean something, but no one actually knows what it means. We simply don't understand the workings of the brain and the speech production system well enough to answer this question (despite these guys' suspect claims). So we are left with pure intuition (which is generally bad in the cognitive sciences because we don't think the way we think we do). So, again, this methodology is not "objective" as Silver claims (not the simplistic way he implemented it, anyway).

We don't know what the effects of word frequencies are.
The corollary to #1: When a person hears another person use one word more than another, what effect does it have? WE. DON'T. KNOW. Same reasons as above. This remains the realm of intuition and guesswork.

His sample is skewed.
While I understand that to the lay person, the set of SOTU speeches seems like a coherent category to analyze, it is in fact a linguistically incoherent grouping because these sorts of speeches are constructed slowly, painfully, over time, by teams of individuals, NOT spoken extemporaneously by a single individual. Silver could spin this as a positive in the sense that the speeches represent presidential administrations as a whole, but this makes the "evidence" (i.e., word frequency) extremely messy. What factor is driving the frequency of a particular word in a speech? No clue. The variables are numerous and unknown (two bad things for "rigorous" analysis). Having such a messy data set makes interpretation nearly impossible even if we DID know the answers to #1 and #2 (which we don't).

Silver invented categories that have no cognitive reality.
Silver's 70 buzzwords are shoved into six arbitrary categories. Linguists have been keen on word categories for ... well ... let's say at least 2500 years. This we care about. Deeply. William Labov famously wrote, "If linguistics can be said to be any one thing it is the study of categories" (full text here). More recently, in the last few decades, linguists have expanded their repertoire of tools for analyzing lexical categories using psycholinguistic, cognitive linguistic, and computational linguistic tools and methods. None of these were employed by Silver in determining whether or not his six categories have any coherence or cognitive reality. He just made them up. How is this MORE objective than intuition?

There are good alternatives.
Let me be clear. I am a fan of corpus linguistics. Counting words is good (as Nunberg says, and as many linguists say; we like this). But this is just the beginning of a long road of analysis. It must be done in a systematic and sophisticated way to be of any use. There are numerous software tools and methodologies that Silver could have made use of that would have given him a more nuanced analysis. There are whole books that teach people how to do this, such as Corpora in Cognitive Linguistics (just one of many).
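As one concrete example of what a more nuanced analysis could look like: corpus linguists routinely use a keyness statistic such as Dunning's log-likelihood, which asks whether a word's rate in one text differs significantly from its rate in a reference corpus, rather than reporting raw counts. Here is a minimal sketch, with made-up numbers:

```python
import math

def log_likelihood(count_a, size_a, count_b, size_b):
    """Dunning's log-likelihood (G2) keyness statistic: is a word's
    rate in corpus A significantly different from its rate in
    reference corpus B? (The standard corpus-linguistics formulation.)"""
    total = count_a + count_b
    expected_a = size_a * total / (size_a + size_b)
    expected_b = size_b * total / (size_a + size_b)
    g2 = 0.0
    if count_a:
        g2 += count_a * math.log(count_a / expected_a)
    if count_b:
        g2 += count_b * math.log(count_b / expected_b)
    return 2 * g2

# Made-up numbers: "jobs" appears 23 times in a 7,000-word speech vs.
# 180 times in a 200,000-word reference corpus of earlier SOTU addresses.
print(log_likelihood(23, 7000, 180, 200000))  # ~24.7; G2 > 3.84 ~ p < .05
```

Even this is only a first step; it controls for text length and baseline usage, but it still says nothing about why a word was used.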

Again, I have a lot of respect for Silver and his advanced skill set in stats. I would love to see Silver bring the full weight of his skills to bear on linguistic analysis (as I've said, every linguist should study math and stats), but this experiment falls far short of the mark and he should know better.

To a certain extent, this critique is unfair to Silver because he seemed to be implicitly acknowledging many of these deficits. All he wanted to do was get a more objective picture of what the SOTU speech meant and how it fits into a bigger picture. On the other hand, it's a fair critique because he put in a lot of effort and posted the results to his popular and influential blog (yes, I note my blog is neither); one ought not to waste such effort. There is the glaringly negative possibility that his popularity and influence as a statistician will actually serve to further strengthen the popular but wrong notion that simple word counts are somehow meaningful. This would be bad.


*By "simple word frequency counts" I mean counting the words a person uses (say, in a speech) without counting anything else or adding any other data to give the frequency counts meaning and context.

1 comment:

Jorge Enrique Ruiz López said...

Nice analysis. Unfortunately we read or meet people with this simplistic view of language, not to mention the simplistic use of statistical tools.
Can you give me any advice on a good methodology for analyzing text that also uses a sound statistical approach?
Thanks in advance.
Greetings.
