Statistics Vs. Artificial Intelligence

The meme which heads today’s post (see the discussion under this tweet; it is a modification of an original cartoon) expresses the true distinction between statistics, machine learning, and artificial intelligence.

Which is to say, there is none. Rather, there are plenty of differences in practice, but AI is just a model in the same way non-linear regression is a model. Only AI is far, far more attractive. AI is art, statistics is dull. AI is bleeding edge, statistics is old and crusty.

AI attracts money, statistics repels it.

I imagine some statisticians are still kicking their own keisters over not thinking of putting the AI frame around their models. Computer scientists beat them to it. And have been beating them to it for years. Computer scientists have a genius for creating marketable names for dull and uninspiring models. Of course, statisticians went down the blind Hypothesis Testing Alley (p-values and Bayes Factors) hoping it would lead to the Fountain of Truth. It didn’t, and now they can’t find their way back to Probability again.

As I wrote in the Machine Learning, Big Data, Deep Learning, Data Mining, Statistics, Decision & Risk Analysis, Probability, Fuzzy Logic FAQ (to which I now realize I should have added AI):

What’s the difference between machine learning, deep learning, big data, statistics, decision & risk analysis, probability, fuzzy logic, and all the rest?

  • None, except for terminology, specific goals, and culture. They are all branches of probability, which is to say the understanding and sometimes the quantification of uncertainty. Probability itself is an extension of logic.

Computer scientists have going for them something statisticians never will, though: the metaphor that computers are brains; or rather, that brains are computers. That’s not true, but it’s so seductive an idea that it cannot be abandoned without much psychic grief.

An abacus does not suddenly become intelligent merely because the number of beads and slides passes some threshold, or because they are operated at some superior speed. Neither can multiplying a coefficient by a measured value be called “thought.” There is no philosophical difference between a wooden abacus and a computerized calculation. But given that we are saturated in science fiction which shows AI (robots etc.) to be just as alive as we are, it’s hard to think our way past it.
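
To make the point concrete, here is a minimal sketch, in Python, of what a single neural-network “neuron” actually computes (the weights, inputs, and bias below are invented for illustration). It is coefficients multiplied by measured values, summed, and pushed through a smooth threshold: the abacus’s arithmetic, automated.

    import math

    def neuron(inputs, weights, bias):
        # Coefficient times measured value, summed: pure arithmetic.
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        # Logistic activation: a smooth threshold, still just arithmetic.
        return 1.0 / (1.0 + math.exp(-total))

    # Invented numbers; nothing in this calculation is "thought".
    print(neuron(inputs=[0.5, 1.2], weights=[0.8, -0.4], bias=0.1))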

Notice how AI is either a victim, our most glorious creation, or an evil overlord, our worst? As with all such stories, they say much more about ourselves than about our technology.

Anyway, read What Neural Nets Really Are: Or, Artificial Intelligence Pioneer Says Start Over, and especially Our Intellects Are Not Computers: The Abacus As Brain Part I and Machines Can’t Learn (Universals): The Abacus As Brain Part II.

Lastly, and most importantly, did you notice the crack? The Blonde Bombshell taught us Flaubert, which we modify slightly. “Models are a cracked kettle on which we beat out tunes for bears to dance to, while all the time we long to move the stars to pity.”

6 Comments

  1. Larry Geiger

    It’s automata all the way down…

  2. DAV

    Yes, the term “Artificial Intelligence” is inappropriate because what is meant is really machine learning. Originally, it meant computer implementations of proposed human intelligence mechanisms. No one seems to be doing that anymore.

    Of course, neural nets aren’t alive and therefore can’t be intelligent. Just like airplanes, which merely glide and don’t fly, because only living things can fly. Covering a board with feathers won’t make a living bird. Yet, to many people, what airplanes do is equivalent to what birds do. It depends on your definition of “flying”.

    Neural nets look like the way to go, but perhaps not as mere collections of weights. We don’t really understand how recognizers in the brain work, and we don’t really understand what thinking and intelligence are. Even if these were exact replicas of brain mechanisms, the scale is way too small compared to insect brains, let alone human ones.

  3. Ray

    Way back in the 1970s when I did lots of computer programming we looked into artificial intelligence. We joked that we saw plenty of genuine stupidity but not much artificial intelligence.

  4. Gary in Erko

    It would be far more interesting and enlightening to develop artificial ordinary human-ness – machine learning that can reproduce typical human errors.

  5. Brad Tittle

    I love the “whistling Dixie” face the “AI” guy has on. I can almost see his red cheeks…

    I once plotted the second derivative of newspaper sales for my employer (a newspaper in the middle of the country). My boss told me to plot the sales. I did. Sales were declining. “No,” he told me, “I know things are getting better. The numbers are getting better, not worse.”

    He grabbed our weekly report and pointed to the “week over week change” vs. the “week over week change 1 year ago”. Week over week was -50. Week over week 1 year ago was -200. Obviously we were doing better, right?

    So I plotted this number: -200, -100, -50… A line with a positive slope. I printed this chart out on overhead material and put it up on the screen for all to see. I was embarrassed. But the manager said the words. The Owner said the words. Everyone believed we were doing better.

    We were getting worse less quickly.

    Oddly, the chart I made looked a lot like that crack. It even had the frame. (A sketch of the arithmetic follows below.)
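
For readers who want the arithmetic in that last comment spelled out, here is a minimal sketch. The weekly sales levels are invented; only the week-over-week changes of -200, -100, -50 come from the story. The first difference stays negative (sales keep falling), while the second difference is positive (the falling slows down), which is all the “improving” chart ever showed.

    # Invented weekly sales levels, chosen to reproduce the quoted changes.
    sales = [1000, 800, 700, 650]

    # First difference: the week-over-week change. All negative: sales fall.
    change = [b - a for a, b in zip(sales, sales[1:])]          # [-200, -100, -50]

    # Second difference: the change in the change. All positive: a line
    # with a positive slope -- getting worse less quickly, not better.
    acceleration = [b - a for a, b in zip(change, change[1:])]  # [100, 50]

    print(change, acceleration)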
