Let us start with the definitions of artificial and human intelligence.
Artificial intelligence [AI] is the intelligence of machines or software, and the branch of computer science that studies and develops intelligent machines and software. The field is also defined as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximise its chances of success.
Human intelligence is the property of mind that encompasses the capacities to reason, plan, solve problems, think, comprehend ideas, use language, communicate and learn.
These two definitions [Wikipedia] demonstrate just how far we are from true AI.
Our brain is a remarkable organ – a ‘computer’ with about 100 billion neurons [nerve cells] and an equal number of neuroglia, which serve to support the neurons. Each neuron may be connected to 10,000 other neurons, passing signals via some 1,000 trillion synaptic connections – roughly equivalent to a computer with a 1 trillion bit per second processor. Estimates of the human brain’s memory capacity range from 1 to 1,000 terabytes; for comparison, the 19 million volumes in the US Library of Congress represent about 10 terabytes.
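A quick back-of-envelope check shows where these figures come from (the inputs are the rough estimates quoted above, not measurements):

```python
# Rough arithmetic behind the brain figures quoted above (assumed estimates).
neurons = 100e9              # ~100 billion neurons
connections_per_neuron = 10_000

synapses = neurons * connections_per_neuron
print(f"Synaptic connections: {synapses:.0e}")   # 1e+15, i.e. 1,000 trillion

# Library of Congress comparison: ~19 million volumes in ~10 terabytes
bytes_per_volume = 10e12 / 19e6
print(f"Roughly {bytes_per_volume / 1e6:.1f} MB of text per volume")
```

Multiplying 100 billion neurons by 10,000 connections each does indeed give 10^15 synapses, and 10 terabytes spread over 19 million volumes works out to about half a megabyte per book – plausible for plain text.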
With this amazing computing power we should all be geniuses, but human brains are remarkably inefficient in key ways – our memories are lousy, our grasp of logic is shallow and our capacity for arithmetic is dismal. Yet in some ways we outstrip our silicon-based computers, which are so good at maths, logic and memory. Your average ten-year-old, for example, can learn to play any number of games well, if not quite at world-class level. How do human beings manage to be so flexible, and what would it take to make a machine equally supple in learning new things?
Today’s computers are in every aspect of our lives, and they are very complex and fast – multi-core processors operating at 6 Gb/sec, with 1–2 terabyte hard drives and 8 GB of RAM, are typical of a high-end consumer specification. But they are designed [programmed by humans] to carry out specific tasks – which they do exceptionally well – and you cannot ask a computer to do something for which it has not been programmed.
A good example is Deep Blue, the chess computer designed by IBM in the 1990s, which defeated the world champion Garry Kasparov on May 11, 1997 in a six-game match. But this brilliantly complex computer only plays chess; it cannot play any other game or learn by itself to carry out other tasks.
And this is the fundamental requirement of an AI computer – to learn, respond to changing circumstances and make the best decisions for success. There is a lot of research and development at present, but we have a long way to go to reach true AI – the level that could control a spaceship on a 20-year journey to the stars. We are probably at least 50 years away from meeting this criterion, and certainly 150 years away from using it on a star mission.