Wednesday, October 31, 2007

The Semantics of Sex

What is the meaning of the construction ‘to have sex with X’? I ask because Scott Adams of Dilbert fame linked to this story on his friggin hilarious Dilbert blog: “Man who had sex with bike in court”. The article explains the following:

"A man has been placed on the sex offenders’ register after being caught trying to have sex with a bicycle…The accused was holding the bike and moving his hips back and forth as if to simulate sex." (emphasis mine)

There are so many delicious linguistic oddities here that it’s hard to know where to start. First, he was caught “trying” to have sex with the bike. Apparently, sex.with.a.bike’(x) is an accomplishment predicate. What are the criteria for the successful completion of the task of having sex with a bike? Whatever they are, the accused failed to complete the task, at least according to his accusers (the bike has yet to issue a formal statement).

Second, it seems to me that ‘to have sex with X’ is ambiguous between

a) ‘to have sex together with X’
b) ‘to achieve sexual gratification from X’

These are two different events, but the ‘have sex with’ construction gets used to mean both. Animacy and reciprocation are the obvious criteria for distinguishing the two senses. I’m willing to stipulate that one can ‘have sex with’ a sex toy or sex doll (and so are the producers of Lars and the Real Girl). But the narrowly construed semantics of (a) require animacy and reciprocation of both participants.

Finally, ‘as if’ serves a curious discourse function in the above quote that I can’t quite pin down yet, because the second sentence is nearly synonymous without it. It seems to express a certain hesitation to commit to the proposition. The proposition ‘X was simulating sex with a bike’ is so preposterous that one does not really want to vouch for its veracity. I think ‘as if’ is acting like an evidential of some sort. It’s kinda like ‘I’m just saying…’:

For example (imaginary quote): “Look dude, I’m not saying I know for sure what this guy was thinking. I’m just saying, when I walked in, the guy’s pants were off, his hips were gyrating, and the bike wasn’t complaining, as far as I could tell… I mean, I’m just saying…”

Tuesday, October 30, 2007

Language Philosophy & Legal Interpretation

Randy Barnett over at The Volokh Conspiracy references Lawrence Solum, a law professor at the University of Illinois College of Law, who wrote a lengthy post on constitutional interpretation called "Semantic and Normative Originalism: Comments on Brian Leiter’s ‘Justifying Originalism’."

I first became interested in linguistics via language philosophy and speech act theory, so I always have a soft spot for debates that involve theories of meaning, as this legal one does (Solum actually references Grice, hehe). It’s a long and complicated post involving legal issues I have no special knowledge of, but I’m interested in teasing apart the Grice reference to see if it has legs, or if it’s yet another example of naïve linguistics gone wrong.

Monday, October 29, 2007

Computational Linguistics vs. NLP

What is the difference between Computational Linguistics and Natural Language Processing? (Hint: There is no official answer to this question).
I had my 476th version of this conversation just now (because we’re in the hiring process for a new “CL lead” and having challenges defining the job), and I made the off-the-cuff claim that it’s the same as the difference between science and engineering. An engineer tries to build things, while a scientist is in essence a reverse-engineer, dedicated to trying to figure out how the world works. Human language is a system that already exists, and it works in some way that no one really understands. Linguists and cognitive scientists have been studying it for decades (well, you could make the claim for millennia). They are now joined by a group of specialists whose skill set involves computer programming and statistics.
Computational linguistics, then, involves trying to figure out how human language works using computational tools (e.g., automated methods of corpus analysis like Tgrep2 [UPDATE 12/02/2010: dead link, for Tgrep2 tutorials, see HERE] and Perl scripting, learning models, etc.), while NLP involves building tools that involve language input or output, like voice user interfaces, machine translators, entity recognizers, etc. Of course, a single person can be both a computational linguist and an NLP developer.
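To make the “computational tools” half concrete, here’s the sort of ten-line corpus hack I have in mind: a keyword-in-context (KWIC) concordance, the bread and butter of corpus analysis. This is just a sketch; corpus.txt and the keyword are placeholders for whatever corpus and word you care about.

```python
# A toy keyword-in-context (KWIC) concordance: print every occurrence
# of a keyword with a window of surrounding text, one hit per line.
import re

def kwic(text, keyword, width=30):
    text = re.sub(r"\s+", " ", text)  # flatten whitespace for display
    for m in re.finditer(rf"\b{re.escape(keyword)}\b", text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        print(f"{left:>{width}} [{m.group()}] {right}")

kwic(open("corpus.txt", encoding="utf-8").read(), "language")
```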
That’s my answer, for now… (my previous thoughts are here).

Saturday, October 27, 2007

What I Love About Buffalo

Given my sad post below on Buffalo’s failed renewal, I thought it only fair to make it clear that there are some things about Buffalo that I love, quite dearly.

ART
Albright-Knox Art Gallery
Artvoice

Babik
Buffalo Film Seminars

The Buffalo Film Seminars take place Tuesday nights at 7 p.m. promptly at the Market Arcade Film and Arts Center in downtown Buffalo, the only eight-screen publicly-owned film theater in the United States.

Hallwalls Contemporary Arts Center
Irish Classical Theatre

Located in the heart of Buffalo's thriving Theatre District, Irish Classical Theatre Company (ICTC) is Western New York's premier stage for the greatest works of dramatic literature.

FOOD
Allen Street Hardware Cafe
Betty’s
Bistro Europa

Bill Rapaport's Buffalo Restaurant Guide
La Tee Da
Mother’s

Neighborhoods
Allentown

"In the end it is easier to experience Allentown than to describe it"

Elmwood Neighborhood

Elmwood Village Named One of 10 Great Neighborhoods in America

Hertel Avenue

ACADEMICS
SUNY Buffalo Linguistics Department

FUBARed in Buffalo?

Ouch! Big-time Hahvahd economist Edward L. Glaeser claims the city of Buffalo is screwed with a capital SCREWED! (Hat tip to Mankiw for the link).

Glaeser's argument seems to be that tax dollars are better spent helping the people of Buffalo, not the place. The article walks through, in painful detail, the history of Buffalo's decline as well as the history of Buffalo's many failed renewal projects (the latest being the insane Bass Pro debacle and the Seneca Casino, neither of which is likely to do for the city what renewal advocates want: bring prosperity to the average citizen).

I'm posting this non-linguistics-related comment because I came to Buffalo almost 10 years ago to study linguistics (I'll finish that diss someday, hehe) and I had the exact same thought that everyone else who comes to Buffalo has: this city has "a lot of potential". Well, it's a friggin decade later and everyone here is STILL saying that. It's depressing. At some point, potential must be reached, or else it's just sad. And that's where Buffalo is right now: a city that has failed to reach its potential for decades, and Glaeser has a good read on why.

Now, if only Glaeser could figure out why this linguist with a lot of potential is still "working on" that diss. Ich bin Buffalo!

Wednesday, October 24, 2007

Witty Linguistic Chickens

I just ran across this cute article (pdf) by Bonatti et al., which unapologetically takes a stand in the great rules vs. statistics debate currently raging within linguistics. It’s a useful follow-up to my previous posts regarding frequency and language. I like the article because it engages in the kind of point-by-point debate that is common in lab meetings (and often missing in published material); but I also love the wit and sense of humor the authors have. The article starts with a jab at Italian drivers and ends with a metaphorical playfulness rarely seen (outside of Jackendoff’s work, of course). Here are the first and final paragraphs (but the two-page article is well worth the read):

With the possible exception of Italian traffic regulations, any rule will generate a statistically detectable advantage for items instantiating the rule. Thus, although attempts to reduce structural phenomena … to statistical computations … have been unsuccessful so far …, it would be no surprise if one or another statistical measure would correlate with the structural phenomena under investigation. But would this mean the statistics caused the apparently rule-abiding behaviors, or are the statistics epiphenomena of underlying structures? Questions about chickens and eggs are always difficult to settle…Thus, although we admire demonstrations of powerful statistical abilities in humans, we remain convinced that it is the linguistic chicken that lays statistical eggs, and not the statistical eggs that hatch into linguistic chickens.

Tuesday, October 23, 2007

An analysis of 'exempt'

I've just started a new blog for a dissertation support group at SUNY Buffalo. This is a copy of a post I put over there. I'm analyzing constructions involving a class of verbs Len Talmy named 'barrier verbs' like ban, prevent, and protect. Here’s one interesting tidbit about a word that is somewhat barrier-like: by a large margin, the word exempt most often occurs as a predicate adjective in copula constructions (hence it is POS-tagged JJ), as in the BNC example below.

“In certain circumstances, the vehicle will also be exempt from Vehicle Excise Duty Road Tax.”

Code: A0J
Genre: W_misc
Subject: W_nat_science
Medium: m_pub

(TOP (S (PP (IN In) (NP (JJ certain) (NNS circumstances) (, ,))) (NP (DT the) (NN vehicle)) (VP (MD will) (ADVP (RB also)) (VP (VB be) (ADJP (JJ exempt) (PP (IN from) (NP (NN Vehicle) (NN Excise) (NN Duty) (NN Road) (NN Tax) (. .))))))))
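For the record, here’s how one might poke at that parse programmatically. This is just a sketch using NLTK’s Tree class (one reader among many; any Penn Treebank-style tool would do), pasting in the bracketed parse above verbatim.

```python
# Read the bracketed BNC parse and confirm how 'exempt' is tagged.
from nltk import Tree

parse = Tree.fromstring(
    "(TOP (S (PP (IN In) (NP (JJ certain) (NNS circumstances) (, ,))) "
    "(NP (DT the) (NN vehicle)) (VP (MD will) (ADVP (RB also)) "
    "(VP (VB be) (ADJP (JJ exempt) (PP (IN from) (NP (NN Vehicle) "
    "(NN Excise) (NN Duty) (NN Road) (NN Tax) (. .))))))))"
)

# The tag on 'exempt' in particular:
print([tag for word, tag in parse.pos() if word == "exempt"])  # ['JJ']

# 'exempt' heads an ADJP inside the copula VP, i.e., a predicate adjective:
print([" ".join(st.leaves()) for st in parse.subtrees()
       if st.label() == "ADJP"])
```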

First Pass Analysis: the word exempt is like open: it can be either a state or an accomplishment, but it is most highly salient as a state.

  1. the door was open
  2. the door was opened (by X)
  3. the organization was exempt
  4. the organization was exempted (by X)
The passive sentences (2) and (4) have an accomplishment reading. But these are rare. I think the reason the overwhelming majority of occurrences of the word exempt in the BNC are predicate adjectives is that the outcome state is the salient aspect of the event of exempting. The actor of the exempting event is almost irrelevant (it's typically a law: not animate, not volitional, an indirect causer).

Contrast this with a speech act barrier verb like to bar:
  5. A judge barred Britney Spears from seeing her children.
In (5), the actor of the barring event is an animate, volitional, direct causer.

Friday, October 19, 2007

Onna Kotoba -- "Women's Japanese"

Hal Daume over at his natural language processing blog articulates a lament well known to linguists:

At the end of my four years, I was speaking to a friend (who was neither a conversation partner nor a prof) in Japanese and after about three turns of conversation, he says to me (roughly): "you talk like a girl."

As I posted in his comments section, this is a familiar situation. English-speaking men (and probably others) often learn a form of Japanese that could be referred to as “women’s Japanese”. The most probable reason for this is the large percentage of female teachers of Japanese. I have no clue what the actual percentage is. If anyone knows, please post a comment.

I've never studied Japanese, so I don't know the facts, but Wikipedia has a page called Gender differences in spoken Japanese which makes the following claim: “Feminine speech includes the use of specific personal pronouns... omission of the copula da, use of feminine sentence finals such as wa, and the more frequent use of the honorific prefixes o and go.”

I did a quick bit of Googling and offer this as a brief bibliography of articles and research on the subject (with the caveat that I haven’t reviewed any of this and make no claims regarding the veracity of these works):

I sound like what in Japanese? by Matthew Rusling

Manifestations of Gender Distinction in the Japanese Language. by Alexander Schonfeld

Stanford Japanese page. Unknown author.

Gender performance and intonation in a Japanese sentence-final particle yo.ne [PPT]
Yumiko Enyo, University of Hawaii, Manoa

Takarazuka: Sexual Politics and Popular Culture in Modern Japan. by Jennifer Robertson

Ore wa ore dakara ['Because I'm me']: A study of gender and language in the documentary Shinjuku Boys. by Claire Maree

Here are a couple of papers from the 9th International Pragmatics Conference (July 10-15, 2005; Riva del Garda, Italy):

The construction of Standard Japanese women's language from 1920's to 1945. by Rumi Washi, Nagoya Gakuin University

Constructing Linguistic Femininity in Contemporary Japan: Scholarly and Popular Representations. by Janet S. Shibamoto Smith and Shigeko Okamoto.

"X Experiments"

In an earlier post, I used the term “kitchen experiment” to refer to a brief, rather unscientific attempt at empirical data gathering, the sort of thing one might do in the morning, in the kitchen, while drinking a cup of coffee. At the time, I thought I had picked up the term from the Language Log folks, but I was unable to find the term using their search engine.

Aha! The mystery was solved this morning when I discovered I had misremembered the term. Mark Liberman uses the term Breakfast Experiment™ in his latest post. It's not clear to me if he has seriously trademarked the term or not, but to be safe, I'll keep using my "kitchen experiment" variant.

Tuesday, October 16, 2007

Data and Models

Mankiw on Greenspan and macro-economics:

Better monetary policy, he suggests, is more likely to follow from better data than from better models. Relatively little modern macro has been directed at improving data sources. Perhaps that is a mistake.

Methinks the same could be said of linguistics. However, I am ambivalent. On the one hand, I am trained in a department long dedicated to descriptive linguistics, so I’m frightened by the lack of good description for most of the world’s languages. I believe in supporting field linguists and old-fashioned grammar-writing tasks. But I’m equally frightened by the lack of good models of language, particularly of language change and evolution. I’m sympathetic to the recent flood of computationally minded engineers into the field of linguistics who have brought fresh approaches (e.g., statistical). Here’s a representative sample of very smart people bringing mathematical/computational modeling into linguistics:

Sandiway Fong -- U. Arizona
Partha Niyogi -- U. Chicago
Josh Tenenbaum -- MIT
Charles Yang -- U. Penn

Huh?

The good folks over at Cognitive Daily are usually pretty sharp about the research they review. But I'm afraid they've managed to make me chortle with a little bit of condescension with today's post "The economic value of gossip." There may or may not be economic value to gossip (the article they reference appears to be a variation on a common game-theoretic experiment economists call the ultimatum game; hat tip to Greg Mankiw, who posted on a related topic a couple days ago), but they print the following quote from New York Times journalist John Tierney without the slightest hint of jest:

Language, according to the anthropologist Robin Dunbar, evolved because gossip is a more efficient version of the “social grooming” essential for animals to live in groups.

Folks, I freely admit that theories about what caused language to develop in humans are rarely if ever based on more than thoughtful speculation. This is the case simply because there is precious little hard evidence regarding the origins of language. Fine. My acknowledgment of that is now on record. That said, this claim that language evolved BECAUSE OF its gossip function strikes me as a clear case of bullshit. But hey, I could be wrong.

I would point the curious reader to Jackendoff's Foundations of Language as a fair primer on these issues.

Monday, October 15, 2007

Don't Forget Recency Effects Too...

As usual, Mark Liberman, of Language Log fame, has some instructive comments about linguistics, frequency effects, recency effects, and the state of the art in psycholinguistics:

psychological research tells us that there is also a strong recency effect: in all sorts of tasks, words that we've heard or seen recently are processed more quickly. Again, we don't know how the recency effect arises in the brain, nor do we know whether the brain mechanisms underlying the frequency and recency effects are partly or entirely the same. There is no lack of speculation on these questions, but we honestly just don't know at this point.

Frequency effects in linguistics

For the record, there are known to be a variety of “frequency effects” in language. A brief survey:

Zipf's law: roughly speaking, a word's frequency is inversely proportional to its frequency rank, so the most frequent word in a corpus will be about twice as frequent as the second most frequent (i.e., have twice as many tokens), three times as frequent as the third, and so on. (See the sketch after this list.)

Word recognition: Dahan et al. (pdf): “frequency affects the earliest moments of lexical access”

Sentence processing: Lau et al.: Frequency effects “give rise to reaction time differences in sentence processing tasks"
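Zipf's law is the easiest of these to check for yourself. Here's a minimal sketch of a kitchen experiment: count the words in any plain-text file and see whether rank times frequency stays roughly constant down the list (corpus.txt is a placeholder for whatever text you have lying around).

```python
# A minimal Zipf's-law check: if the law holds, rank * frequency
# should be roughly constant across the top of the frequency list.
import re
from collections import Counter

text = open("corpus.txt", encoding="utf-8").read().lower()
counts = Counter(re.findall(r"[a-z']+", text))

for rank, (word, freq) in enumerate(counts.most_common(10), start=1):
    print(f"{rank:2d}. {word:<12} {freq:>8}  rank*freq = {rank * freq}")
```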

More on Frequency

Yesterday, Sally Thomason at Language Log posted a critique of recently published research regarding frequency and language change (I’ve noted one perhaps trivial relationship between frequency and linguistic structure here). In challenging the claim that ‘frequently used words are resistant to change’, she points out that frequency is NOT an all-powerful mechanism. Crucially, she notes the following:

regular sound change is indeed blind to frequency and all other nonphonetic contextual factors. So it is nonsense to say that frequent words resist change unless one qualifies the statement to exclude regular sound change.

The role of frequency in various linguistic processes has become a hot topic in linguistics. As usual, the jury is still out. A good primer is the collection in Bybee and Hopper’s Frequency and the Emergence of Linguistic Structure.

Finally, Thomason ends her post with a fair point that is best kept in mind when non-linguists try to “fix” the problems we silly linguists failed to solve:

Failing to learn something about a field one wishes to contribute to is all too likely to lead to reinvention of the wheel at best, and to a garbage in/garbage out problem at worst.

Still a Story ...

Mark Liberman at Language Log picks up the Myanmar vs. Burma debate, and notes "Myanma is the literary pronunciation of the more colloquial Bama." My thoughts are here.

Sunday, October 7, 2007

Linguistics Wins Something!

Or not. The Ig Nobel prizes were handed out October 4 and the internets is abuzz. The prize winners are being blogged about fast and furiously. In particular, both Andrew Sullivan and Language Log have highlighted the Linguistics winner, a group who proved, and I quote:

rats sometimes cannot tell the difference between a person speaking Japanese backwards and a person speaking Dutch backwards

So the linguistics winner of the Ig Nobel prize gets mention on major blogs. I would be slightly happier if it weren’t for the fact that there is no real Nobel Prize for linguistics. In fact, as far as I know, there isn’t a single major prize for linguistics at all.

Mathematics has the Fields Medal, Economics has a whole slew of prizes. But Wikipedia’s page List of prizes, medals, and awards does not even have a category for linguistics.

Joseph Stiglitz, a (real) Nobel Prize-winning economist, has made a convincing argument here that prizes are good for stimulating academic research. His point is that prizes are better than patents. I got the link from Greg Mankiw’s blog, which presents some counterarguments. However, linguistics traditionally has neither prizes nor patent opportunities. Is it any wonder my field has spent 40 years mired in failed theories and vague assumptions?

Computational linguistics, whatever that is, has begun to bring some financial opportunities to linguistics, but that has only been in the last 10 years and those opportunities are pretty much restricted to engineers, not linguists.

What is the most effective way to finance and incentivize linguistics research?

Saturday, October 6, 2007

Sunk Skunk Stunk

My my, there are sooooooooo many things wrong with this headline:

Officer uses BB gun to save skunk stuck in jar

Psssst, ignore the facts of the story. As your linguist, I advise you to ignore facts whenever they’re inconvenient.

Friday, October 5, 2007

Do You Think Computationally?

This morning I received an email regarding the National Science Foundation’s new program called "Cyber-Enabled Discovery and Innovation" (CDI). I skimmed the email with little interest until I read this:

CDI aims to create revolutionary science and engineering research outcomes made possible by innovations and advances in "computational thinking", defined as computational concepts, methods, models, algorithms, and tools.

This phrase, I’m guessing, is meant to refer to thinking about computational methods. A colleague of mine has ranted several times about the misuse of the term “computational” and its morphological variants, and it’s because of phrases like this that he rants. Even if we ignore the juicy ambiguity of the phrase above and take it as it’s intended, what exactly does “computational” mean?

Hal Daume wrote this:

The crux of the argument is that if something is not a task that anyone performs naturally, then it's not a task worth computationalizing.

I think he simply means “make a computer do it automatically” or something like that. And I take that to be the most sensible use of the word. But the word seems to get used to mean something else in a lot of cases. To make something computational is often like making something new & improved or extreme. It seems to be a marketing tool. People use it to make their work sound cutting edge and advanced. In other cases, it means using a computer to do what people used to do by hand.

I Googled the word “computational” and these were all on the first page of hits (CL was number 1, hehe):

Computational linguistics
Computational biology
Computational economics
Computational chemistry
Computational geometry

I don’t know if these disciplines have the same relationship to computational that linguistics does, but I can say this: I believe there is really no such thing as computational linguistics. As I have said in the Q & A section of my Companies That Hire Computational Linguists page:

my use of the term “computational linguistics” is a cover term for a loosely related set of skills including but not limited to NLP, NLU, MT, AI, info extraction, speech processing, (takes a breath…) VUI, text mining, document understanding, machine learning, ad nauseam…

Thursday, October 4, 2007

Ripe Tomatoes

Well, my doppelgänger Eugene Volokh over at The Volokh Conspiracy has finally gotten 'round to mentioning the whole Burma/Myanmar controversy. Yet again! We are of like mind. Oooooooh, scary.

Psssst ... I would have posted this comment on Volokh's blog, but I couldn't remember my login name, and they have a scary message over there for people who try to muck with the login process. So be it.

Allies vs. Enemies

More on frequency and meaning. Here are the results of a “kitchen experiment” meant to test whether the relationship type “ally” could be inferred reliably from mere co-occurrences and conjunction words.

Assumption: If two names are conjoined by “and”, they are probably allies, not enemies.

Method: I took four names that have clear ally/enemy relationships and Googled each individually; then I Googled each combination in quotes (switching the order of the names as well). The actual search queries were of the form "WINSTON CHURCHILL and FRANKLIN ROOSEVELT", but I’ve tidied up the names in the tables below for readability.

Names Alone (Google hits)
Adolf Hitler: 2,460,000
Benito Mussolini: 1,440,000
Franklin Roosevelt: 1,840,000
Winston Churchill: 2,330,000

Enemies (Google hits for "X and Y")
Adolf Hitler and Winston Churchill: 2,600
Franklin Roosevelt and Adolf Hitler: 596
Winston Churchill and Adolf Hitler: 1,680
Winston Churchill and Benito Mussolini: 504
Benito Mussolini and Winston Churchill: 7
Benito Mussolini and Franklin Roosevelt: 4
Franklin Roosevelt and Benito Mussolini: 1
Adolf Hitler and Franklin Roosevelt: 752

Allies (Google hits for "X and Y")
Franklin Roosevelt and Winston Churchill: 10,500
Winston Churchill and Franklin Roosevelt: 817
Adolf Hitler and Benito Mussolini: 14,700
Benito Mussolini and Adolf Hitler: 643

Results:

Allies
15,343 (14,700 + 643): Adolf Hitler and Benito Mussolini
11,317 (10,500 + 817): Franklin Roosevelt and Winston Churchill

Enemies
4,280 (2,600 + 1,680): Winston Churchill and Adolf Hitler
1,348 (596 + 752): Franklin Roosevelt and Adolf Hitler
511 (504 + 7): Winston Churchill and Benito Mussolini
5 (4 + 1): Franklin Roosevelt and Benito Mussolini
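For the curious, here is the tallying step as a few lines of Python, using the hit counts reported in the tables above (the queries themselves were run by hand in Google; nothing here fetches anything).

```python
# Combine the raw "X and Y" hit counts in both orders, as in the
# Results section above.
HITS = {
    ("Franklin Roosevelt", "Winston Churchill"): 10500,
    ("Winston Churchill", "Franklin Roosevelt"): 817,
    ("Adolf Hitler", "Benito Mussolini"): 14700,
    ("Benito Mussolini", "Adolf Hitler"): 643,
    ("Adolf Hitler", "Winston Churchill"): 2600,
    ("Winston Churchill", "Adolf Hitler"): 1680,
    ("Franklin Roosevelt", "Adolf Hitler"): 596,
    ("Adolf Hitler", "Franklin Roosevelt"): 752,
    ("Winston Churchill", "Benito Mussolini"): 504,
    ("Benito Mussolini", "Winston Churchill"): 7,
    ("Benito Mussolini", "Franklin Roosevelt"): 4,
    ("Franklin Roosevelt", "Benito Mussolini"): 1,
}

def both_orders(x, y):
    """Combined hits for the queries "x and y" and "y and x"."""
    return HITS[(x, y)] + HITS[(y, x)]

for label, x, y in [
    ("allies ", "Adolf Hitler", "Benito Mussolini"),
    ("allies ", "Franklin Roosevelt", "Winston Churchill"),
    ("enemies", "Winston Churchill", "Adolf Hitler"),
    ("enemies", "Franklin Roosevelt", "Adolf Hitler"),
    ("enemies", "Winston Churchill", "Benito Mussolini"),
    ("enemies", "Franklin Roosevelt", "Benito Mussolini"),
]:
    print(f"{label} {x} / {y}: {both_orders(x, y):,}")
```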

Discussion: The assumption is weakly supported. Roosevelt is conjoined with his ally Churchill more than 4 times as often as his enemy Hitler and more than 2000 times as often as Mussolini. Churchill is conjoined with his ally Roosevelt more than twice as often as he is conjoined with his enemy Hitler and more than 10 times as often as Mussolini.

The Flip-Flop Effect: The most linguistically interesting result is the more-than-tenfold increase in hits that the “Franklin Roosevelt and Winston Churchill” query got over its “Winston Churchill and Franklin Roosevelt” brethren. An even greater effect is seen with the Hitler/Mussolini flip-flop. Why is the Roosevelt-first collocation so much more frequent? My hunch is that there is some salience issue at work. The more salient member of the collocation will tend to be listed first.

Flaws: Surely there are more flaws to this kitchen experiment than can be enumerated easily. But the one obvious flaw that deserves mention is the normalization problem. Deciding which form of each name to use as a search term was not trivial. Roosevelt is often referred to by his initials “FDR”, and both Hitler and Mussolini are commonly referred to by last name only. So this was an experiment in term collocation frequency at best, not person reference.

Note: I'm certain that either Mark Liberman or Arnold Zwicky over at Language Log has used the term “kitchen experiment” in their posts before, but a search of that site produced nothing. Hmmm, am I just imagining this term has been used before?

Wednesday, October 3, 2007

Buffalo Learning

So, regarding the post below, I wonder if there are any hypotheses within the language learning/machine learning communities regarding the maximum amount of polysemy a learning algorithm can handle and still succeed.

Buffalo Buffalo Bayes

The (somewhat) famous Buffalo sentence below seems to say something about frequency and meaning, I’m just not sure what:

Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo

The conditional probability of “buffalo” in the context of “buffalo” is exactly 1 (hey, I ain’t no math genius and I didn’t actually walk through Bayes’ theorem for this, so whaddoo I know; I’m just sayin’, it seems pretty obvious, even to The Lousy Linguist).

Also, there is no conditional probability of any item in the sentence that is not 1, so from where does structure emerge? Perhaps the (obvious) point is that a sentence like this could not be used to learn language. One needs to know the structures first in order to interpret. Regardless of your pet theory of learning, this sentence will crash your learner.
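If you don’t believe me, here’s a five-line sanity check, lowercasing the sentence to throw away the orthographic cue (more on that below):

```python
# Bigram conditional probabilities within the Buffalo sentence itself.
from collections import Counter

words = ("Buffalo buffalo Buffalo buffalo buffalo "
         "buffalo Buffalo buffalo").lower().split()

bigrams = Counter(zip(words, words[1:]))
contexts = Counter(words[:-1])  # the final token never serves as context

for (w1, w2), n in bigrams.items():
    print(f"P({w2} | {w1}) = {n / contexts[w1]:.2f}")
# Prints exactly one line: P(buffalo | buffalo) = 1.00
```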

There are only two sets of cues that could help: orthographic and prosodic. There are three capitalized words, so that indicates some differentiation, but not enough by itself. A learner would have to have some suprasegmental prosodic information to help identify constituents. But how much would be enough?

Imagine a corpus of English sentences along these polysemic lines (with prosodic phrases annotated). Would prosodic phrase boundaries be enough for a learner to make some fair predictions about syntactic structure?

UPDATE (Nov 16, 2009): It only now occurs to me, years later, that the very first Buffalo has no preceding context "buffalo". Better late than never??

Tuesday, October 2, 2007

Data, Datum, Dati, Datillium, Datsun

The folks over at Cognitive Daily have blogged about the number properties of the word "data", or rather, they have blogged about the nitpicky prescriptivist grammar complaints that inevitably attend comments on academic paper submissions.

Predictably, the comments section is filled with people ignoring the main point, and instead making the same prescriptivist claims about the alleged plurality of "data". My 2 cents (in their comments) was simply that the word "data" has evolved into a word like "deer" or "moose" which can be either singular or plural.

Monday, October 1, 2007

Blomis #4 -- Innateness Again

I posted a challenge recently to The Innateness Hypothesis (aka Universal Grammar) as discussed in Juan Uriagereka's article on language evolution. Mark Liberman over at Language Log makes a similar challenge with far greater detail and authority than I could.

casting a wide net

For the first time, I used Sitemeter to view the site hits for this blog. I only set that up a week ago, so the hits are recent only, but the range of locales is surprising. I'm bigger in India than I ever would have imagined. I can guess by some of the locations which of my friends are the likely culprits (Eric, you are spending wayyyyyyy too much time reading this blog). But some of these just have no explanation, other than Blogger's "Next Blog" button.

Here's a list of hit locations (counting only hits that lasted longer than 0.00 seconds; there were many that didn't, unfortunately).

Bombay, India
Brooklyn, NY (USA)
Cambridge, UK
Haifa, Israel
Honolulu, Hawaii (USA)
Hyderabad, India
Kinards, SC (USA)
Kraków, Poland
Leuven, Belgium
Mamers, NC (USA)
Melbourne, Australia
New York, NY (USA)
Pittsburgh, PA (USA)
Saint Paul, MN (USA)
San Diego, California (USA)
Seattle, Washington (USA)
Sunnyvale, CA (USA)
Tokyo, Japan
Tulsa, OK (USA)
Woking, UK
