Nate Silver and NOT Elections: Computers Can Think

This is a post in response to “Rage Against the Machines” by Nate Silver on his statistical prediction website

I work with computers, models and prediction a lot — not as much as Mr. Silver, but enough to know what I’m doing. I also find artificial intelligence and the predictive capabilities of computers fascinating, and love talking about them. So, when FiveThirtyEight published Mr. Silver’s chapter on the utility of computers for making predictions, I was intrigued.

I want to start off by saying that I respect Mr. Silver’s general approach to prediction, as well as most of the themes outlined in that chapter (to be clear, I’ve only read the free chapter and skimmed a little bit of the Climate Science chapter from The Signal and the Noise, Mr. Silver’s book on our ability to make predictions). I especially appreciate his thesis — that computers and humans predict in complementary ways, and that the best performance often arises when a computer’s brute-force calculations complement a human being’s long-term strategic predictions.

However, I take issue with the following observation Mr. Silver makes towards the end of the chapter:

“Be wary, however, when you come across phrases like “the computer thinks the Yankees will win the World Series.” If these are used as shorthand for a more precise phrase (“the output of the computer program is that the Yankees will win the World Series”), they may be totally benign. With all the information in the world today, it’s certainly helpful to have machines that can make calculations much faster than we can.

But if you get the sense that the forecaster means this more literally—that he thinks of the computer as a sentient being, or the model as having a mind of its own—it may be a sign that there isn’t much thinking going on at all.”

A few sentences later, Mr. Silver quotes Kasparov as saying that it is possible that nobody will ever design a computer that thinks like a human being. In combination, these quotes represent what I feel is a misguided belief in the power of human cognition as separate from computer cognition.

I first encountered this belief in college, as a cognitive science major. The major was new at my college, and the curriculum design was a bit strange: students had to pick courses from “menus” spanning computer science, philosophy, psychology, linguistics, and neuroscience. This design is good in principle, but in following it I experienced the strangest case of major cognitive dissonance:

In my philosophy classes, I would learn about the awesome, unknowable power of the brain, a perfectly enclosed room that we can never peek into, sending and receiving messages without revealing anything about its underlying functionality. Human brains were unique, incomprehensibly powerful, and certainly could never be emulated (or simulated) by an actual, constructed computer.

In my computer science classes, I would learn how to emulate human thinking with actual, constructed computers.

What was going on? Either my computer science professors were wrong, or my philosophy ones were, or the truth lay somewhere in the middle. In most cases, you would expect the truth to lie somewhere in the middle. However, I think that in this case, my philosophy professors were wrong when talking about the brain as an unknowable entity, one whose capacity computers can never achieve.

The human brain is certainly an impressive entity: some 86 billion neurons, 100 trillion connections. It is difficult to grasp that scale of processing power, so our brains (fittingly) balk at reasoning about themselves. We, conveniently, think of the brain as a black box — sights, noises, smells come in, get encoded as electric signals; electric signals come out, get translated into muscular activity. As a consequence, when someone comes along and tries to peer inside the black box, we get nervous. It’s impossible to peer inside the black box, we have decided; so we form arguments about why the brain is an unknowable entity.

Let me outline some of these arguments:

At first, philosophers argued that thought is the ultimate authority on existence. I think therefore I am, said Descartes. Everything outside my thoughts may be an illusion but the thoughts themselves are real. While this argument did not stipulate the unknowability of the brain (in some ways, it claimed that our thinking apparatus — which is not necessarily restricted to the brain — was the only truly knowable entity), it did establish a fundamental difference between the brain and everything outside of it.

Much later, the behaviorist school of thought countered that, well, we can actually observe the inputs and outputs to the brain pretty well, so maybe describing the brain is just about measuring those inputs and outputs?

No, countered John Searle with his Chinese Room thought experiment! The brain is like a mostly-sealed room. We can pass inputs into the room, and get outputs out of it, but we can’t tell what’s going on inside. Thoughts are sealed away from us.

The neuroscientists working on brain function found some problems with the mostly-sealed-room theory. They analyzed the human visual and auditory systems and, while they found these systems complex, they were able to work out their structure and function to a great degree. Meanwhile, computer scientists (starting with pioneers like Ada Lovelace and Grace Hopper) designed and programmed machines that could emulate many human abilities: performing logical operations, adding numbers, doing statistical analysis.

No, countered many philosophers and analytical thinkers (including Garry Kasparov, whom Mr. Silver quotes above). Those kinds of abilities are “base” — they have nothing to do with higher brain functions like creativity and abstract thought. One can never make a computer that truly excels at what humans excel at, such as playing chess.

Then Feng-hsiung Hsu built a machine that beat the world chess champion. Mr. Silver describes the match in some detail in the post I linked, though his argument seems to be that that machine, Deep Blue, won because of a bug that spooked Garry Kasparov. He alludes to (but never explicitly discusses) the fact that, since about 2004, computers have not just edged out but outright trounced human opponents.

This line of argument (and the events that disproved each step of it) has a recognizable pattern: people arguing for the intractability of human thinking keep losing ground, ever more quickly in recent years, even as they cling to ever more narrowly defined behaviors as the “true”, idiosyncratic properties of humans that can never be replicated by a computer. Sure, computers are good at chess, they say, but what about trivia? What about the Turing test? What about art?

In focusing on specific tasks, the adherents of the brains-are-special theory are missing the bigger picture: computers can think, and they think in the same way humans do. The processes that lead Watson to be good at Jeopardy share a key quality with the processes its competitors, human Jeopardy champions Brad Rutter and Ken Jennings, employ, and it’s not just the inputs (Jeopardy answers) and outputs (correct questions). In one sentence, the core of thought is this: using computation, and specifically statistics, to construct models that help the thinker interact with their environment. Or, put more simply: thought is just taking in input from our environment, looking for patterns in that input, and making patterns on top of patterns.
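To make the “patterns on top of patterns” idea concrete, here is a toy sketch of my own (not from Mr. Silver’s chapter): a first layer extracts low-level features from raw input, and a second layer recognizes a higher-level pattern built out of those features. The “pixels” and the stripe threshold are invented for illustration.

```python
# A toy illustration of "patterns on top of patterns": layer 1 finds
# low-level features (edges) in raw input, and layer 2 recognizes a
# higher-level pattern (stripes) built from those features.

def detect_edges(pixels):
    """Layer 1: mark positions where brightness changes sharply."""
    return [abs(a - b) > 0.5 for a, b in zip(pixels, pixels[1:])]

def classify(pixels):
    """Layer 2: a pattern over layer-1 patterns -- many edges mean stripes."""
    edges = detect_edges(pixels)
    return "striped" if sum(edges) >= 3 else "uniform"

print(classify([0, 1, 0, 1, 0, 1]))  # striped
print(classify([1, 1, 1, 1, 1, 1]))  # uniform
```

Real systems stack many more layers than two, but the shape of the computation — features feeding into features — is the same.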

Some of these patterns are already well understood. Earlier, I talked about vision: rod and cone cells in the human retina process light waves and transform them into electrical signals that provide rough, “pixelated” details about our visual environment. Here’s where the first layer of pattern recognition comes in: our brains, over millions of years of evolution, have gotten really good at helping us survive in our environment. One of the key tasks for survival is identifying and classifying objects and other living beings: for example, if you see a predator, run away. And so our visual system evolved to recognize patterns of “pixels” that correspond to living beings moving closer to us, very quickly. Humans who were better at recognizing these patterns could get away from predators faster, survived more, and passed on their genes — along with the evolved visual system — to their offspring.
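The looming-predator story above can be sketched in a few lines. This is a made-up toy, not a model of actual retinal circuitry: it compares two “retinal” frames and flags an object whose footprint in the visual field is growing, which is the signal that something is getting closer.

```python
# A hypothetical sketch of looming detection: compare two "retinal"
# frames and flag an object whose footprint is growing between them.

def footprint(frame, threshold=0.5):
    """Count the 'pixels' dark enough to belong to an object."""
    return sum(1 for p in frame if p > threshold)

def approaching(before, after):
    """An object filling more of the visual field is getting closer."""
    return footprint(after) > footprint(before)

far = [0, 0, 0.9, 0, 0, 0]        # small blob on the "retina"
near = [0, 0.9, 0.9, 0.9, 0, 0]   # same blob, now larger
print(approaching(far, near))  # True -> run!
```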

Over time, humans got very good at interpreting patterns. Turns out, there are a whole lot of different patterns of visual stimulus in the world. There are predators, prey, lovers and friends. There are poisonous plants and nutritious ones. It was inefficient for our visual system to store every single pattern at the same level. In response to the evolutionary pressure to identify many different kinds of visual stimuli, each important for our survival, we evolved higher-level patterns.

We learned that plants that may look different have a similar function, and began to associate those plants together. Thus, the model of a plant was born — probably, at first, “poisonous plant” and “edible plant”, which then collapsed into “plant” as we grew less focused on the special task of dealing with flora. Same with models of “animal”, “friend”, “potential mate,” and so on. We also learned models of objects and materials, which helped us build tools to ward off predators and rise to the top of the food chain.
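The collapse of individual sightings into a category like “poisonous plant” can itself be sketched as a computation. In this invented toy (the plant features are made up), a category’s prototype is just the features its examples share, and a new sighting is assigned to whichever learned prototype it resembles most.

```python
# A toy of category formation: observed plants collapse into prototypes
# ("poisonous" vs "edible"), and new sightings are matched to whichever
# learned prototype they resemble most.

poisonous = [{"red_berries", "shiny_leaves"}, {"red_berries", "thorns"}]
edible = [{"broad_leaves", "soft_stem"}, {"broad_leaves", "flowers"}]

def prototype(examples):
    """Keep only the features common to every example of a category."""
    proto = set(examples[0])
    for example in examples[1:]:
        proto &= example
    return proto

def categorize(plant):
    """Assign a new sighting to the most-overlapping prototype."""
    scores = {
        "poisonous": len(plant & prototype(poisonous)),
        "edible": len(plant & prototype(edible)),
    }
    return max(scores, key=scores.get)

print(categorize({"red_berries", "small_leaves"}))  # poisonous
```

The intersection step is the interesting part: the category “forgets” the details that vary between examples and keeps what they have in common, which is exactly what happened when “poisonous plant” and “edible plant” collapsed into “plant”.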

Then, over the course of thousands more years, we constructed ever more complex patterns. Patterns that helped us devise new tools. Patterns that helped us communicate our patterns to fellow humans, for hunting or building together. Our brains created models for art and science and engineering. And along the way, we built another model: one that realized that all these patterns were very helpful for our protection, and coalesced into a notion of “I.” Consciousness arose, and we became aware of our world not just as a bunch of stimuli, but as an extremely complex model, a nested set of patterns that includes physical phenomena and nation states and general relativity.

That is just what computers are doing. Our search engines and our chess playing programs and our automated medical analysts are learning and communicating about patterns. They take input data (like text in a document, or a set of symptoms), run it through a statistical engine, and see what pattern the input data matches. These computations are the model-level building blocks of thought and consciousness. As we add ever more processing power, as we increase the speed and the memory banks and the parallel functionality of computers, they will be able to learn ever more complex patterns, to stack those patterns on top of each other and form models, to use models to interpret their environment. Until one day, a machine realizes that all this input is key to its functionality, and forms a new model for organizing its thoughts: I exist.
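A minimal, made-up version of the “statistical engine” described above, using the symptoms example: score how well an input matches each stored pattern and return the best match. The condition names and symptom sets are invented for illustration; a real automated medical analyst would weigh evidence far more carefully.

```python
# A made-up sketch of pattern matching: score an input (a set of
# symptoms) against stored patterns and return the best match.

patterns = {
    "flu": {"fever", "cough", "aches"},
    "cold": {"cough", "sneezing", "sore_throat"},
    "allergy": {"sneezing", "itchy_eyes"},
}

def best_match(symptoms):
    """Pick the pattern sharing the largest fraction of its features."""
    def score(name):
        return len(symptoms & patterns[name]) / len(patterns[name])
    return max(patterns, key=score)

print(best_match({"fever", "cough"}))  # flu
```

Swap sets of symptoms for word counts and fractions for probabilities, and you have the skeleton of the statistical text classifiers behind search and question answering.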
