Because we're not all interested in being R experts. By far, the single most frustrating part of my own graduate linguistics experience was that in order to study the kinds of linguistic phenomena I wanted to, I had to spend most of my time learning tools I didn't actually care about, like Tgrep2, Perl, Python*, R, etc. As a linguist, I don't really give a damn about any of those things. They were all obstacles in my way. The more time I spent learning tools, the less interested in linguistics I became. I respect the hell out of engineers who build great tools that are valuable to linguists, but if those tools are not user-friendly, I might as well scream into the darkness.
Which is why I am impressed with the Stanford Visualization Group's recent Visualization Tool for Cleaning Up Data:
Another thing I often hear is that a large fraction of the time spent by analysts -- some say the majority of time -- involves data preparation and cleaning: transforming formats, rearranging nesting structures, removing outliers, and so on. (If you think this is easy, you've never had a stack of ad hoc Excel spreadsheets to load into a stat package or database!).
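To give a flavor of the grunt work the quote is describing, here's a toy sketch in Python (entirely made-up data and column names, not from any real spreadsheet): normalizing inconsistent date formats and dropping an obvious data-entry outlier, the kind of cleanup one otherwise does by hand.

```python
import statistics

# Toy rows as they might arrive from an ad hoc spreadsheet export:
# two different date formats, stray whitespace, and one wild outlier.
raw = [
    ("2011-03-01", " 12.5"),
    ("03/02/2011", "13.1"),
    ("2011-03-03", "9999"),   # data-entry error
    ("03/04/2011", "11.8 "),
]

def normalize_date(s):
    """Coerce MM/DD/YYYY to ISO YYYY-MM-DD; pass ISO dates through."""
    if "/" in s:
        m, d, y = s.split("/")
        return f"{y}-{m}-{d}"
    return s

# Step 1: transform formats -- uniform dates, numeric values.
rows = [(normalize_date(d), float(v.strip())) for d, v in raw]

# Step 2: remove outliers using the median absolute deviation (MAD),
# which, unlike mean/stdev, isn't itself dragged around by the outlier.
vals = [v for _, v in rows]
med = statistics.median(vals)
mad = statistics.median(abs(v - med) for v in vals)
clean = [(d, v) for d, v in rows if abs(v - med) <= 5 * mad]

print(clean)
# -> [('2011-03-01', 12.5), ('2011-03-02', 13.1), ('2011-03-04', 11.8)]
```

Even this trivial case takes a dozen lines and a judgment call (how aggressive should the outlier threshold be?), which is exactly why tooling that handles it interactively is welcome.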
Yes, more help please.
*Mad props to the NLTK!