Sunday, September 25, 2016

the value of play for preschool children

A nice article came up in The Atlantic about the comeback of recess and play in elementary schools. My sister posted some thoughts about how she encourages play when she teaches her preschool kids: For our little Chico preschool, play and education are two sides of the same coin.

Friday, September 23, 2016

from preschool to scholar athlete

One of the most endearing things about being a teacher is seeing former students go on to achieve great things. My sister's preschool student Jack Emanuel has been named a Subway Scholar athlete. Very cool. Congratulations Jack.

Monday, September 5, 2016

some thoughts on data analytics for a micro business

My first thought as a new business owner trying to utilize data analytics is this: how empty this all seems. I don't need to understand my market right now, I need to break into it. How can I use data analytics to open doors? I don't have time for nuance right now, I need sign ups. I'm faced with one of the most basic problems in business: getting noticed. And I wanna do it on the cheap.

Let me stipulate that this post relates only to a small slice of DA proper: low-cost advertising DA available from Google & Facebook.

My sister just relocated her preschool, Kids First, to a new town, Chico, CA, and I'm helping her buy a domain, set up a website, and do some promotion using Google AdWords Express, Facebook ads, and some local print advertising.

This is the first time I've been on this side of the data analytics equation: as a consumer of the analysis, not a producer. And my experience is telling.

Business model: We're not a small business. We're a micro business. Employees = 1 (Miss Lori). We don't need to sell a million widgets or gain a million likes. The business model of our preschool remains very old fashioned: get ten kids signed up, average that head count annually, and we're successful.

The market: Chico is a college town with 85,000 people in the city, about 200,000 in the larger area. There is a state university, a junior college, two hospitals, two high schools, two middle schools, and several elementary schools, as well as a robust farming economy (all employing professionals with kids, right?).

Advertising: Bought a small print ad in the local weekly for four weeks ($200/wk). One-time ad in the university student paper for the first week of the semester (hoping to grab the attention of new faculty who might finger through it out of curiosity). Small Facebook ad ($50), small Google AdWords campaign ($120/month). My sister "boosted" the preschool's Facebook page for a few days for $20.

Data Analytics: FB ad = zero calls. FB said 6,322 people saw her boosted virtual tour video and 411 people clicked on it in just a few days (I'm suspicious of these numbers because Lori said she limited the ad to Chico. I would be shocked if that many Chicoans clicked her video in a couple days). Zero calls. I started the AdWords campaign on September 3. I limited the area to the Chico region (those 200k people) because I don't want to pay for clicks from people who are looking for preschools in San Francisco. We've gotten 4 clicks in about 48 hours. Zero calls.



This is early, of course, but still, this all seems so empty because of the nature of our business. We don't need clicks or views or likes: we need parents to sign their kids up. There is a disconnect to me between the old fashioned, brick-and-mortar business reality of trying to profitably run a preschool and the virtual reality of these meager data analytics. The NLPer in me likes the fact that AdWords shows me which search words drive clicks, but the business owner in me says, where's the beef? I need a parent to pay me money. Don't care 'bout no clicks.
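
For what it's worth, the keyword-level numbers are at least easy to poke at once you export them. Below is a minimal sketch, assuming a hypothetical CSV export named keywords.csv with columns keyword, clicks, impressions, and cost (placeholder names of mine, not necessarily what AdWords or Facebook actually hands you):

    import pandas as pd

    # Hypothetical keyword-level ad export; file name and column names are placeholders.
    df = pd.read_csv("keywords.csv")  # columns: keyword, clicks, impressions, cost

    # Aggregate per search term, then compute click-through rate and cost per click.
    report = df.groupby("keyword", as_index=False)[["clicks", "impressions", "cost"]].sum()
    report["ctr"] = report["clicks"] / report["impressions"]
    report["cpc"] = report["cost"] / report["clicks"].where(report["clicks"] > 0)

    # Which search terms actually drive clicks, and at what price?
    print(report.sort_values("clicks", ascending=False).head(10))

Even then, of course, a table like that only tells you where the clicks come from, not whether anyone picks up the phone.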

I'm not in the business of advertising DA, but I have some appreciation for the tools and techniques. And even I'm frustrated trying to connect the dots. Or more to the point, I don't know how to use data analytics to go from clicks to phone calls (to actual sign ups).

I get that using DA may prove useful in the long run, but first we need old fashioned visibility. Without that, nuance is worthless. I can only imagine how frustrated and perhaps infuriated many small business owners are at this same disconnect between their business models and the services offered. This experience has already deepened my appreciation for the customer experience in the DA ecosystem.

But ultimately what I think is this: Data analytics, schmata analytics. I don't need a data scientist. I need Don Draper.

Monday, August 15, 2016

Yet Another Bad Review of Suicide Squad

There are lots of reviews of Suicide Squad detailing how bad it is. This is another one. This is less a movie than a series of loosely related scenes. It had roughly three sections:
A) The set-up: Character introductions. The movie begins with an amateurish method of introducing the characters that is literally one person listing their names and features followed by a flashback scene for each. Dumbest exposition structure ever.    
B) The Mission: Load everybody up in a helicopter, give them weapons, send them along their way. This plays out as an extension of A where each person gets a montage playing with their toys of choice.  
C) Switcheroo: Plot twist that changes the mission objective. Unfortunately, the movie fails to adequately set up this core plot point because the director spent so much time showing off weapons and Margot Robbie's arse that he forgot to have a character explain the mission in any memorable way. When the twist in the mission objective is "revealed" midway into the siege, it's more confusing than revelatory. At this point, no sane person is looking for coherence in the film anyway, so it hardly matters.
The directorial style can best be summed up as 'just keep everyone shooting guns, no one will notice the incoherence'.

Monday, July 11, 2016

Fun with Stanford's online demo

I've long been a fan of Stanford's online parser demo, but now they've outdone themselves with a demo page for their CoreNLP tools. Not only does it take your text and show the parse and entities, it also lets you develop regexes that match against your input text, including semantic regexes!

 This is just plain fun: http://corenlp.run/ 
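
If you want the same annotations programmatically rather than through the web page, the demo is backed by the CoreNLP server, which you can download and run yourself. Here's a minimal sketch, assuming you've started a local server on port 9000 (java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000); I'd point scripts at your own server rather than hammering corenlp.run:

    import json
    import requests

    # Assumes a local Stanford CoreNLP server is listening on port 9000.
    text = "Stanford University is located in California."
    props = {"annotators": "tokenize,ssplit,pos,ner,parse", "outputFormat": "json"}

    resp = requests.post("http://localhost:9000/",
                         params={"properties": json.dumps(props)},
                         data=text.encode("utf-8"))
    doc = resp.json()

    # Print the constituency parse plus token-level POS tags and entity labels.
    for sentence in doc["sentences"]:
        print(sentence["parse"])
        for token in sentence["tokens"]:
            print(token["word"], token["pos"], token["ner"])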



Sunday, June 19, 2016

IBM Watson at NAACL 2016

There were several Twitter NLP flare-ups recently triggered by the contrast between academic NLP and industry NLP. I'm not going to re-litigate those arguments, but I will note that one IBM Watson question answering team anticipated this very thing in their paper for the NAACL HLT 2016 Workshop on Human-Computer Question Answering.

The paper is titled Watson Discovery Advisor: Question-answering in an industrial setting.

The Abstract
This work discusses a mix of challenges arising from Watson Discovery Advisor (WDA), an industrial strength descendant of the Watson Jeopardy! Question Answering system currently used in production in industry settings. Typical challenges include generation of appropriate training questions, adaptation to new industry domains, and iterative improvement of the system through manual error analyses.
The paper's topic is not surprising given that four of the authors are PhDs (Charley, Graham, Allen, and Kristen). Hence, it was largely a group of fish out of water: they have an academic bent, but wrestle daily with the real-world challenges of paying customers and very messy data.

Here are five take-aways:

  1. Real-world questions and answers are far more ambiguous and domain-specific than academic training sets.
  2. Domain tuning involves far more than just retraining ML models.
  3. Useful error analysis requires deep dives into specific QA failures (as opposed to broad statistical generalizations).
  4. Defining what counts as an error is itself embedded in the context of the customer's needs and the domain data. What counts as an error to one customer may be acceptable to another.
  5. Quiz-Bowl evaluations are highly constrained, special cases of general QA, a point I made in 2014 here (pats self on back). Their lessons learned are of little value to the industry QA world (for now, at least).

I do hope you will read the brief paper in full (as well as the other excellent papers in the workshop).

Monday, January 25, 2016

Genetic Mutation, Thick Data, and Human Intuition

There are two stories trending heavily on my social network sites that are seemingly unrelated, yet they share one obvious conclusion: the value of human intuition in finding needles in big data haystacks. Reading them highlighted for me the special role humans still can play in the emerging 21st century world of big data.

In the first story, The Patient Who Diagnosed Her Own Genetic Mutation—and an Olympic Athlete's, a woman with muscular dystrophy sees a photo of an Olympic sprinter’s bulging muscles and thinks to herself, “she has the same condition I do.” What in the world would cause her to think that? There is no pattern in the data that would suggest this. The story is accompanied by a startling picture of two women who, at first glance, look nothing alike. But once guided by the needle in the haystack that this woman saw, a similarity is illuminated and eventually a connection is made between two medically disparate facts that, once combined, opened a new path of inquiry into muscle growth and dystrophy that is now a productive area of research. Mind you, no new chemical compound was discovered. No new technique or method that allowed scientists to see something that couldn’t be seen before was built. Nope. Nothing *new* came into being, but rather a connection was found between two things that all the world’s experts never saw before. One epiphany by a human being looking for a needle in a haystack. And she found it.

In the second story, Why Big Data Needs Thick Data, an anthropologist working closely to understand the user stories of just 100 Motorola cases discovers a pattern that Motorola’s own big data efforts missed. How? Because his case-study approach emphasized context. Money quote:
For Big Data to be analyzable, it must use normalizing, standardizing, defining, clustering, all processes that strip the data set of context, meaning, and stories. Thick Data can rescue Big Data from the context-loss that comes with the processes of making it usable.
Traditional machine learning techniques are designed to find large patterns in big data, but those same techniques fail to address the needle-in-the-haystack problem. This is where human intuition truly stands apart. Both of these articles are well worth reading in the context of discovering the gaps in current data analysis techniques that humans must fill.

UPDATE: Here's a third story making a similar point: a human being using an automatically culled dictionary noticed a misogynist tendency in the examples it provided ("a rabid feminist writes...").

And here's a fourth: Algorithms Need Managers, Too. Money quote: "Google’s hard goal of maximizing clicks on ads had led to a situation in which its algorithms, refined through feedback over time, were in effect defaming people with certain kinds of names."

Sunday, January 10, 2016

Advice for linguistics grad students entering industry

At the LSA mixer yesterday I had the chance to chat with a dozen or so grad students in linguistics who were interested in non-academic jobs. Here I'll note some of the recurring themes and advice I gave.

The First Job
Advice: Be on the look-out and know what a good opportunity looks like.

Most students were very interested in the jump. How do you make that first transition from academics to industry? In general, you need to be in the market, actively looking, actively promoting yourself as a candidate. For me, it was a random posting on The Linguist List that caught my eye. In the summer of 2004 I was a bored ABD grad student. I knew I wasn't going to be competitive for academic jobs at that point, so I checked The Linguist List job board daily. One day I saw a posting from a small consulting company. They were looking for a linguist to help them create translation complexity metrics. They listed every sub-genre in linguistics as their requirements. This told me they really didn't know what they wanted. I saw that as an opportunity because I could sweep in and help them understand what they needed. I applied and after several phone calls I was asked to create a proposal for their customer. I had a conference call to discuss the proposal (I was in shorts and a t-shirt in an empty lab during the call, but they didn't know that). Long story short, I got the job*, moved to DC, and spent about two years working as a consultant on that and other government contracts. That first job was a big step in moving into industry. I had very impressive clients, a skill set that was rare in the market, and a well-defined deliverable that I could point to as a success.


Visibility
Advice: Make recruiters come to you. Maintain a robust LinkedIn profile and be active on the site on a weekly basis (so that recruiters will find you).

Several students wondered if LinkedIn was considered legitimate. I believe it's fair to say that within the tech and NLP world, LinkedIn is very much legit. My LinkedIn profile has been crucial to being recruited for multiple jobs, two of which I accepted. Recruiters' algorithms are constantly searching the site to fill all kinds of jobs. In fact, most of the really good jobs for linguists are not posted on job sites, but rather are filled only by recruiters. So you need strategies for waving your flag and getting them to come to you. In the DC area, there are excellent opportunities for linguists at DARPA, CASL, IARPA, NIST, MITRE, RAND, and many other agencies and FFRDCs (federally funded research and development centers), but they rarely post these to job boards. You need them to find you. A good LinkedIn page is a great way to increase your visibility.

Another way to increase your visibility is to go public with your projects. You can always blog descriptions and analysis. For computer science students, a GitHub account is virtually a requirement. I think linguists should follow their lead. You most likely write little scripts anyway. Maybe an R script to do some analysis, or a Python script to extract some data. Put those up on GitHub with a little README document. That's an easy place for tech companies to see your work. Also, if you have created data sets that you can freely distribute, put those up on GitHub too. I also recommend competing in a Kaggle competition. Kaggle sponsors many machine learning competitions. They provide data, set the requirements, and post results. It's a great way to both practice a little NLP and data science and increase your visibility (and put your Kaggle competitions on your resume!). Here are two linguistically intriguing Kaggle competitions ready for you right now: Hillary Clinton's Emails (think about the many things you could analyze in those!); NIPS 2015 Papers (how can a linguist characterize a bunch of machine learning papers?).

If you have managed to automate a process that you once did manually (through an R script, or maybe Excel formulas), write that up in a blog post. Automating manual processes is huge in industry. You know the messy nature of language data better than anyone else, so write some blog posts describing the kind of messiness you see and what you do about it. That's gold.
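
To make "little scripts" concrete, here's the sort of thing I have in mind: a throwaway Python sketch (the script name and folder path are made up) that counts crude token frequencies across a directory of plain-text files. Nothing fancy, but with a short README it's exactly the kind of thing a hiring manager can skim on GitHub:

    import collections
    import pathlib
    import re
    import sys

    # Usage: python count_tokens.py path/to/text/files
    folder = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    counts = collections.Counter()

    # Crude tokenization: lowercase, keep alphabetic strings and apostrophes.
    for path in folder.glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts.update(re.findall(r"[a-z']+", text))

    # Print the 25 most frequent tokens, tab-separated.
    for word, n in counts.most_common(25):
        print(f"{n}\t{word}")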


Resume
Advice: List tools and data sets. Do you use Praat? List that. Do you use the Buckeye Corpus? List that. Make it clear that you have experience with tools and data management. Those are two areas where tech companies always have work to do, so make it clear that you can do that work.



*FYI, here's what the deal was with that first consultant job: The FBI tests lots of people as potential translators. So, for example, they will give a native speaker of Vietnamese several passages of Vietnamese writing (one that is simple, one that is of medium complexity, and one that is complex); then the applicant is asked to translate the passages into English. The FBI grades each translation. The problem was that the FBI didn't have a standardized metric for what counted as a complex passage in Vietnamese (or the many, many other languages that they hire translators for). They relied on experienced translators to recommend passages from work they had done in the past. Turns out, that was a lousy way to find example passages. The actual complexity of passages was wildly uneven, and there was no consistency across languages.

Thursday, January 7, 2016

LSA 2016 Evening Recommendations

With the LSA's annual convention officially underway, I've thrown together a list of a few restaurants and bars within a short walking distance of the convention center that grad students and attendees might want to enjoy. My walking estimates assume you are standing in front of the convention center.

Busboys and Poets (4 blocks west at 5th & K) - A DC Institution. You will not be forgiven if you do not make at least one pilgrimage here.

Maddy’s Taproom (4 blocks east at 13th & L) - Good beer selection.

RFD Washington (4 blocks south at 7th & H) - Large bottled beer selection, good draft beer selection (food ain't that great).

Churchkey (6 blocks northeast at 14th & Rhode Island) - Officially, one of the best beer rooms in the US.

Stan's Restaurant (7 blocks east at L & Vermont) - Downstairs, casual. Very strong drinks. Supposedly good wings (I'm a vegetarian, so I hold no opinion).

Daikaya - Ramen - Izakaya (7 blocks Southwest at 6th & G) - Upstairs bar can be easier to get into sometimes. It's a popular place.

Teaism, Penn Quarter (8 blocks south at 8th & G) - Great snack place mid-way to the National Mall. Large downstairs dining area. Great place to have some tea, a snack, and catch up on conference planning.

There are, of course, lots of other places within a short walk. I recommend 14th street in general. 9th street has some good stuff, especially as you get closer to U, but it's a little sketchy of a walk.