Comments on The Lousy Linguist: "Genetic Mutation, Thick Data, and Human Intuition"
Blog author: Chris (http://www.blogger.com/profile/09558846279006287148)

Dominik Lukeš (https://www.blogger.com/profile/03071876778771965740), 2016-02-05:

Thanks for the response Chris. I agree on the issue of 'deep learning'. But perhaps we should take cognitive acts such as image categorization to be the operational units of cognition. One of the things linguistics has never dealt with is the issue of word recognition and frame triggering. What sort of process is taking place when you see a word and recognize all the complex frames, schemas and scenarios that are associated with it? How do they interact when words are put together in a phrase or clause? How does that meaning come together? The reason, I think, is that we are treating all these acts as fundamental units, which is probably the right way to go about it. However, when it came to modelling these 'units' computationally, we just contented ourselves with simple string matching and database lookup. (And before computers, formal logical models did the same thing in different ways.)

But that is the wrong way to model this. Or rather, models based around this approach are not modelling language but a caricature of language. Human analysts of language have the advantage of already being in possession of this 'black box' faculty of instant recognition and categorization (albeit subject to errors and inconsistencies), but computers need a way to replicate it. So perhaps starting with things like deep-learning image categorization is the right place, because word categorization is more like image recognition than like the sort of serial processes AI has dealt with so far.
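The contrast drawn here — 'string matching and database lookup' versus recognition-as-categorization — can be sketched as a toy. Everything below (the lexicon, the word vectors, the frame names) is invented purely for illustration, not taken from any real system:

```python
# Toy contrast: exact lookup vs. similarity-based recognition.
# The lexicon, vectors, and frame labels are made up for this sketch.
import math

lexicon = {"bank": "FINANCE_FRAME", "river": "NATURE_FRAME"}

def lookup(word):
    # The 'string matching and database lookup' model:
    # a word either matches an entry exactly or triggers nothing.
    return lexicon.get(word)

# A recognition-style model instead compares a word to known
# examples by similarity, so a near-miss still triggers a frame.
vectors = {"bank": [0.9, 0.1], "banks": [0.88, 0.12], "river": [0.1, 0.9]}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def recognize(word):
    # Return the frame of the most similar known word.
    best = max(lexicon, key=lambda w: cosine(vectors[word], vectors[w]))
    return lexicon[best]
```

Here `lookup("banks")` triggers nothing at all, while `recognize("banks")` still lands on the finance frame — a crude stand-in for the instant, fuzzy categorization the comment describes.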
It is also possible that what has been done so far will not generalize or scale up. But without it, AI can't get anywhere near what has been claimed for it.

Chris (https://www.blogger.com/profile/09558846279006287148), 2016-02-04:

Dominik, sorry for the late reply, but I think you make very thoughtful comments and I wanted to wait until I had time to respond as thoughtfully. Unfortunately, time keeps slipping away. You make a very good point that both of these stories cherry-pick an anecdote where humans happened to do well and computers didn't, but surely there are many other examples to the contrary. Yes, I agree. I was taken by the allegory of the stories. And a big YES that the *story* of AI has been mis-told as a story of humans-as-computers. Intelligence is not strictly algorithmic, I suspect, but I cannot prove that. It is worth noting that the major success stories of contemporary AI involve so-called *deep learning*, which is simply a way of saying that the successful algorithms humans are using to solve problems (like image categorization) work in ways so opaque that even the smartest people using them can't explain them. The magic of hidden layers solves everything. Presto!

<i>Sorry, if this feels a bit ranty. I'm not sure I'm expressing myself properly here.</i>

Dominik Lukeš, 2016-01-26:

I'm not sure that these are the best examples to give. They take the 'human' ability and make it seem somehow computer-like, but at tasks computers can't perform or are bad at.
But the thing is, computers have two options for solving problems: algorithms or statistics. In both cases, choosing the type of solution is a human task. Monitoring the output is as well.

But humans are just as bad at all these tasks as they are good. There are no 'unit tests' for judgement - it is always contextual and determined by a lot of background knowledge not available to an algorithm. For every woman who saw a pattern and recognized something important in it, there are a million who saw a pattern that was made up out of randomness.

The problem is that we even have to have this conversation to start with: stuff like the feeling that some sort of human turf is being invaded when a guy loses at chess to a supercomputer, when the surprise should be that he does not lose to a calculator. I'm always reminded of the checklist manifesto in this context. It describes medical practitioners using checklists to follow procedures rather than relying on their judgement for routine behaviors, and using judgement for non-routine ones. Computers are just glorified checklists. What is human is making the judgement whether to use the checklists - and very often making the wrong call.

What worries me is that all these 'AI' and big data people are buying their own hype - but the source of it seems to be their belief that human cognition ('intelligence') is somehow algorithmic, only in a flawed way. But it should have been obvious how wrong this is with the burst of the first AI bubble.

Sorry, if this feels a bit ranty. I'm not sure I'm expressing myself properly here.
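The 'glorified checklist' point can be made concrete with a minimal sketch: the machine's part is a fixed, mechanically checkable procedure, while deciding whether the case is routine enough for the procedure to apply at all sits outside it. The checklist steps and function names below are invented for the example:

```python
# Toy sketch of the 'glorified checklist' view. The computer's part
# is mechanical verification; the human's part is the judgement call
# about whether the checklist applies. All names are invented.

routine_checklist = [
    "confirm patient identity",
    "verify allergies",
    "check dosage against chart",
]

def run_checklist(steps, answers):
    # The computer's job: confirm every step was completed.
    return all(answers.get(step, False) for step in steps)

def handle_case(is_routine, answers):
    # The human's job: decide whether the checklist applies at all.
    # As the comment notes, this is also where the wrong call gets made.
    if is_routine:
        return run_checklist(routine_checklist, answers)
    return "defer to expert judgement"
```

Nothing inside `run_checklist` requires anything resembling intelligence; everything contestable is packed into the `is_routine` flag that a human supplies.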