Re-thinking Singularity

October 31, 2009

Hey, another post! I seem to be writing about one per year. Oh well, this pace just about suits me.

This is mostly a reaction to the IEEE special report on the Singularity (http://spectrum.ieee.org/static/singularity) which came out summer of 2008 (I think?). I did not see it then – only stumbled upon it now, linked from, of all things, a political blog. I’m going to start with a brief summary of what the Singularity (a.k.a. the Rapture of the Geeks) is, then my one-paragraph impression of the report, and then the thoughts it provoked.

The Singularity has been described in many different ways, but primarily, it is understood to be the event (or series of events) that leads to the construction of machines that are as intelligent as humans. Since, by definition, these machines would be able to construct themselves (we, the humans, were able to construct them, after all), and to upgrade themselves (much as I can upgrade a computer by inserting a new video card – a gross oversimplification, but I will use this example for now), the human-smart machines will be able to create smarter-than-human machines. Current advances in engineering and computer science suggest that this upgrade process will happen very quickly, certainly on sub-human-lifetime scales. We will then experience an explosion of progress, etc., as super-smart, super-efficient machines work out cold fusion, grant everyone (nearly) unlimited free energy, vastly improve human longevity via pharma advances, and so on – assuming that said machines will actually want to do that. An oft-quoted aspect of the singularity argument is that, given the ability to create human-intelligent machines, we would also be able to store our own human brains within these machines and, as such, become immortal (provided nobody does sudo rm -rf / on our machine's hard drive, which, of course, will run Ubuntu 40, codename Zzzataxous Zzzorn).

The special report addresses the concept of the singularity and invites great thinkers, proponents and debunkers alike, to weigh in. There's a lot of criticism, ranging from the specific (engineering and physics do not, in fact, suggest that computers will self-upgrade at very quick rates) to the very general (we won't be able to construct human-intelligent machines because we lack Some Fundamental Understanding of how the human brain works). Overall, this page (http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity) suggests that most “Tech Luminaries” invited to give their opinions in the issue are very skeptical about the singularity. There are many other fine articles in the special report, including relevant discussions of human consciousness and whether it may be reproducible in a machine, and I encourage my audience to read them, but it is this general skepticism that I wish to address below.

I respect the opinions of the scholars, industry leaders, etc. invited to this round table, but I am somewhat surprised at their responses. For example, take Prof. Pinker's statement that “There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible.” (same link as above) While true in principle, this statement is an example of a common mis-framing of the singularity problem: the framing that treats the singularity as a fictional advancement in technology, one that may happen tomorrow, or never, and has little basis in reality. Pinker goes on to say that “Sheer processing power is not a pixie dust that magically solves all of your problems.”

Steven Pinker is not the only person who thinks like that. Garry Kasparov, the famous chess champion, once said something along the lines of “a machine can never play at Chess Master level.” (paraphrase) A few years later, Mr. Kasparov had to rephrase that statement to say “a machine can never play at Chess Grand Master level.” Then, “a machine can play at a Grand Master level, but not at the level of the top players in the world.” A few years after that, Mr. Kasparov, World Chess Champion, was defeated by a machine armed with nothing but “sheer processing power,” an artifact that couldn't think or write plays or pass the simplest Turing test. But boy, could it count.

This story is not meant to illustrate that Pinker and Kasparov are wrong, and that we merely need to get to exa- or zetta- or yotta-FLOP capability to reach human consciousness. There's another side to Kasparov's story, the side told by the computer scientists and engineers who worked on Deep Blue, the computer that beat the chess champion. This side is usually pretty boring to tell – it's full of exciting technical details like “and then we realized we could rearchitect the machine to do chess board computation more efficiently!” – but it's the side that won, by Not Making Assumptions. That is the problem with the singularity-is-fiction-and-pixie-dust-processing-power-won't-change-that argument. It assumes that those who work on the singularity are working on fiction. They are not. They are working on very real artifacts, such as Deep Blue, and solving very real problems.

The field of machine learning attests to the quiet work of thousands of these real-problem solvers. They have taken what I think is the essential part of cognition – logical conclusions not as the end states of a rule chain, but as the result of statistical analysis of noisy data – and started applying it to problems. Before long, machines started doing simple things, like comparing documents to each other (a critical component of search and plagiarism detection; see the sketch below). Image recognition. Text translation. Genetic sequence alignment. These days, machine learning powers the world's greatest search engines and text analysis packages. Is it perfect? No! But neither are we humans (something that many critics of the search for singularity tend to forget). What's important, to use a business term, is that machine learning is actionable. Just as the computers of old helped governments calculate missile trajectories and crack codes, the computers of today help corporations make a buck. These advances represent a significant step towards replicating human intelligence, and they are not fiction. They are part of life, right now.
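
To make the document-comparison example concrete, here is a minimal sketch (my own toy illustration in Python, not anybody's production system) of the bag-of-words cosine similarity that underlies a lot of search and plagiarism-detection work: represent each document as word counts, then measure the angle between the two count vectors.

    import math
    from collections import Counter

    def cosine_similarity(doc_a, doc_b):
        """1.0 = identical word distributions, 0.0 = no words in common."""
        a = Counter(doc_a.lower().split())
        b = Counter(doc_b.lower().split())
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values()))
        norm *= math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    print(cosine_similarity("the cat sat on the mat",
                            "the cat sat on the hat"))          # high: 0.875
    print(cosine_similarity("the cat sat on the mat",
                            "quarterly profits rose sharply"))  # 0.0

Real systems weight the counts (TF-IDF and the like) and tokenize far more cleverly, but the principle is the same: statistics over noisy text, not hand-written rules.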

I foresee two criticisms of my bold hurrah for machine learners as forerunners of intelligent machines: one, that I am missing the forest for the trees, and two, that I have been talking about intelligence, not consciousness. I will address each separately.

The first criticism invokes the “blind men and the elephant” story that is often used to describe the current state of cognitive science research. A group of blind men set out to describe an elephant by touch. One feels the elephant's feet and declares the creature some heavy armored beast; the second feels its trunk and compares it to a giraffe; the third feels its tail and concludes that the elephant must be small and hairy. None is right, and the blind men are collectively no closer to understanding elephants than when they started. Similarly, the argument goes, all the scientists working on different parts of the brain, and studying them in different ways – from the neurophysiologist looking at axon structure to the philosopher considering the Chinese Room thought experiment – are blind, missing the larger picture for their focus on the details. This, however, is a gross misrepresentation of the scientific process. By studying crucial details, one comes to understand more about the whole. Physicists spent centuries studying everything from gravity to electromagnetism before unified theories began to emerge. True, we may still be far from any unified theory of the brain, but we are definitely NOT moving in the wrong direction.

I will be more ambitious and say that we are, in fact, moving in the right direction, and pretty quickly. We know a lot about neurons, the basic building blocks of the brain. We understand that neurons, and bunches of neurons, can do computation via a simple version of statistical analysis (of the electric signals coming in). We know that the brain is hierarchical in structure (in that neurons combine into assemblies) but not completely so (in that there are a number of independent, large-scale parts of the brain that answer to no single master, yet act in concert). These insights already tell us a lot about how to design intelligent systems. We may not even need a Grand Unified Theory of Human Cognition: it may boil down to a set of emergent properties of a lot of very large counting systems working together. We know how to build counting systems, and how to hook them together. We don't need to get them all right at once: we can proceed in incremental steps, starting with simple reflexes (already possible) and moving towards self-sufficiency, not complexity. Complex structures will arise, IF they are necessary, along the way.
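
Here is how simple those counting systems can be. The sketch below (a textbook toy, assuming nothing about real neurons beyond the weight-sum-threshold caricature) builds one artificial neuron, then hooks three of them together to compute XOR – something no single such unit can do on its own, which is emergence in miniature.

    def neuron(inputs, weights, threshold):
        """Fire (1) if the weighted sum of inputs crosses the threshold."""
        return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

    def xor(a, b):
        h_or  = neuron([a, b], [1.0, 1.0], threshold=0.5)   # fires on a OR b
        h_and = neuron([a, b], [1.0, 1.0], threshold=1.5)   # fires on a AND b
        return neuron([h_or, h_and], [1.0, -1.0], threshold=0.5)  # OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))   # prints 0, 1, 1, 0

Where do the weights come from? In real machine learning systems they are learned from data – statistical analysis of noisy signals, exactly the ingredient I called essential above.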

I have gotten distracted from my goal in ending this essay: to address the second criticism, namely, that I am talking about intelligence, not consciousness. I will end on another speculation, but one that, I will argue, has nothing to do with fiction. Consciousness is not ineffable, as some argue. It is not A Great Mystery of the brain. It is something of a story that we tell ourselves to make sense of our world – ourselves included.

Current research on the brain has shown the incredible importance of external stimuli to the brain's functionality. It's true, we can be conscious with very little external input. Imagine, however, never having received external input. Not knowing what trees, or the sky, or anything looked like, or sounded like, or felt like. Not just having a different set of stimuli – actually NOT having any stimuli. I would argue that in this state, you would not be conscious. Our brain's most complex parts serve the function of interpreting signals from the outside. The brain's original evolutionary function was not to do solipsistic thought exercises, but to get away from that sabretooth tiger and to kill that mammoth, to make us better predators. It is highly likely, then, that consciousness is linked to the interpretation of external stimuli.

We also know that a lot of what our brain does is statistical processing – finding patterns in noisy data. From very low-level signals (are we looking at a horizontal or a vertical line?) the brain constructs ever more complex patterns that resolve into familiar objects. But the processing does not end there – it wouldn't be enough for our brain to simply recognize a sabretooth tiger; it also needs to tell us which direction to run in, where to hide, and, eventually, how to construct the trap that will bring this predator down. At this point, we are still looking at patterns, but they are much more complex, and deserve to be called models. There is the predator-prey model, which says “run, because it's going to eat you” and governs all the complex work of running and not being eaten. But we, as humans, have evolved beyond that model. We have learned to construct models of far greater detail, models that posit the predator and the prey as limited beings, each with its own set of strengths and weaknesses. The advantage of these models is that we can develop far more complex reactions, independent of the evolutionary fight-or-flight response. We can set traps and build tools. We can distinguish between friend and foe (a relatively simple model) but also cooperate with friends to take down a large foe (a more complex, but not qualitatively different, model).

Of course, eventually the number of models gets pretty large, and we need some even larger meta-model to make sense of them all. That is where consciousness came in – not as an epiphany or an enlightenment, but as a slowly, painfully evolving sense of Story, of a great Play called Life in which everything is interrelated. We started out with grunts and squiggles on cave walls. We moved to line art and verbal myths, where the fierce predator and the hunter were personified, along with thunder and lightning. And so on, reaching for ever higher levels of complexity that changed and grew the stories and added concepts like love and faith and science.
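
That lowest rung of the ladder – telling a horizontal line from a vertical one – is simple enough to sketch directly. The toy below (my own cartoon in Python; real vision, biological or artificial, is vastly more involved) scores a small patch of pixels against two templates and reports which orientation “fires” harder:

    HORIZONTAL = [[0, 0, 0],
                  [1, 1, 1],
                  [0, 0, 0]]
    VERTICAL   = [[0, 1, 0],
                  [0, 1, 0],
                  [0, 1, 0]]

    def response(patch, template):
        """Crude 'firing rate': sum of elementwise pixel * template products."""
        return sum(p * t for prow, trow in zip(patch, template)
                         for p, t in zip(prow, trow))

    def orientation(patch):
        h = response(patch, HORIZONTAL)
        v = response(patch, VERTICAL)
        return "horizontal" if h > v else "vertical" if v > h else "ambiguous"

    print(orientation([[0, 0, 0],
                       [1, 1, 1],
                       [0, 0, 0]]))   # horizontal

Stack layers of such detectors, feed each layer's outputs into the next, and you get patterns of patterns – the same climb, in caricature, from lines to objects to models.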

I was not entirely honest when I started this essay. The singularity is not a fiction, but it is ABOUT fiction. Achieving the singularity is about teaching a computer to tell stories, much like we taught our children around the campfires of decamillennia ago. Consciousness is the never-ending thread of narrative we make up in order to not lose consciousness – in order for the whole smorgasbord of Things coming at us to not overwhelm us. And when another system, whether silicon-based or not, is able to tell such a story about itself and everything that it feels, whether via electric sockets or nerve impulses or something else, it becomes conscious. That is what we have to look forward to – when it comes, I don't know.