Review: The Age of Spiritual Machines

JonKatz posted more than 15 years ago | from the be-very-afraid dept.

Technology

Remember Hal in Stanley Kubrick's "2001"? He was a wuss compared to the deep-thinking digital machines Ray Kurzweil suggests are heading our way over the next century. Forget the debate over human versus artificial intelligence. "The Age of Spiritual Machines" suggests that we and our computers are about to become one, evolving together into a whole new bio-digital species. By 2020, computers will be as smart as we are. By 2099, there will no longer be any clear distinction between humans and computers. Is this techno-hype or prescient futurism?

In 1990, inventor Ray Kurzweil predicted in "The Age of Intelligent Machines" that the Internet would proliferate rapidly and that the defeat of a human chess champion by a computer was imminent.

He was right on both counts, so it's worth paying attention to his new book, "The Age of Spiritual Machines" (Viking, $25.95). This round, Kurzweil is making even more radical predictions - namely, that computing will develop so rapidly over the next century that technology and human beings will literally merge in social, educational, biological, even spiritual ways.

Kurzweil has ratcheted up the human-versus-artificial intelligence debate a few notches. There will, he makes clear, be no human intelligence versus artificial intelligence. We and our computers will become one.

This theory picks up where Moore's Law leaves off. Gordon Moore, co-founder and former chairman of Intel, announced in 1965 that the surface area of a transistor - as etched on an integrated circuit - was being reduced by approximately 50 per cent every twelve months. In 1975, he revised the rate to 24 months. Still, the result is that every two years, you can pack twice as many transistors on an integrated circuit, doubling the number of components on a chip as well as its speed.

Since the cost of an integrated circuit has stayed relatively constant, the implication is that every other year brings twice as much circuitry running at twice the speed for the same price. This observation, known as Moore's Law on Integrated Circuits, has been driving the acceleration of computing for decades.

The most advanced computers are still much simpler than the human brain, currently about a million times simpler. But computers are now doubling in speed every twelve months. This trend will continue, Kurzweil predicts, with computers achieving the memory capacity and computing speed of the human brain by approximately the year 2020.
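
The arithmetic behind that date is worth making explicit. A minimal sketch of the projection, in Python, using the review's own figures (a millionfold gap and twelve-month doublings are assumptions from the text, not measured constants):

    import math

    # The review's figures: the brain is ~1,000,000 times more complex
    # than 1999's best computers, and computer speed/capacity doubles
    # every twelve months.
    gap = 1_000_000
    doubling_period_years = 1.0

    doublings_needed = math.log2(gap)                 # ~19.9 doublings
    years_to_parity = doublings_needed * doubling_period_years

    print(f"parity in ~{years_to_parity:.0f} years")  # ~20 years: 1999 + 20 = 2019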

This is a stunning idea. Human evolution is seen by scientists as a billion-year drama that led to its greatest creation: human intelligence. Computers will get to the same point in less than a hundred years. It's time - past time, actually - to start asking where they will go from here.

Kurzweil doesn't argue that next year's computers will automatically match the flexibility and subtlety of human intelligence. What he predicts is the rapid rise of what he calls the software of intelligence. Scanning a human brain will be achievable early in the next century, and one future approach to computing will be to copy the brain's neural circuitry in a "neural" computer designed to simulate a massive number of human neurons.

"There is a plethora of credible scenarios for achieving human-level intelligence in a machine," writes Kurzweil. "We will be able to evolve and train a system combining massively parallel neural nets with other paradigms to understand language and model knowledge, including the ability to read and understand written documents."

Kurzweil's own law of accelerating returns centers on the idea that as this new bio-digital species becomes increasingly learned and sophisticated, life will become more orderly and efficient, while technological development continues to accelerate.

Kurzweil's premise - that computers will become as smart as we are and then merge their intelligence with ours - is not only challenging and provocative; it also makes sense. But he isn't as clear or coherent when it comes to divining just what kind of intelligence computers will have - how intuitive they can be, how individualistic or ethical.

By the second decade of the next century, there will be reports of computers passing the Turing intelligence test, says Kurzweil. The rights of machine intelligence will become a public policy issue. But machine intelligence will still largely be the product of collaborations between humans and machines, with computers still programmed to maintain a subservient relationship to the species that created them. But not for long.

Where his book and his vision stumble is in grasping what will happen to us when computers become smarter than we are, and then sensual, social or spiritual. Will we be better off? Will the computer be moral? Will it have a social or other consciousness? Do we wish to merge with computers into one species? Will we have any choice? We could be heading for a sci-fi nightmare or, alternatively, for another of those utopian visions that used to pepper Wired magazine before it became the property of Conde Nast.

While futurists can measure or plot the computational skills of tomorrow's computers, can anyone really know the precise nature of that intelligence, and whether or not it can replicate the functions of the human brain?

The idea of our being outsmarted, thus dominated and endangered by computers, has been portrayed as a nightmare in Stanley Kubrick's "2001" (Kubrick apparently greatly underestimated the virtual person Hal would become). It's also surfaced in various rosy intergalactic Disney-like visions in which machines perform labor, clean the air, heal humans and teach kids. Kurzweil doesn't say which notion, if either, sounds more plausible.

The latter half of the book is essentially a timeline: Kurzweil somberly walks us through the evolution of computing intelligence, and the eventual merging of digital technology and human beings into a new species.

By 2009, Kurzweil predicts, human musicians will routinely jam with cybernetic musicians. Bioengineered treatments will have greatly reduced the mortality from cancer and heart disease. But human opposition to advancing technology will also be growing - an expanding neo-Luddite movement.

By 2019, nonetheless, Kurzweil predicts that computers will be largely invisible, embedded in walls, furniture, clothing and bodies - sort of like the artwork in Bill Gates' massive new mansion. People will use three-dimensional displays built into their glasses - "direct eye" displays that project images directly onto the human retina, exceed the resolution of human vision, and create highly realistic virtual environments overlaying real ones; they will be widely used regardless of visual impairment. Paraplegics will routinely walk and climb stairs through a combination of computer-controlled nerve stimulation and exoskeletal robotic devices.

By 2019, there will be almost no human employment in production, agriculture, or transportation, yet basic life needs will be met for the vast majority of the human race. A $1,000 computing device will approximate the computational ability of the human brain that year, and a decade later, the same amount of money will buy the computing capacity of about 1,000 human brains.

By the year 2099, a strong trend toward the merger of human thinking with the world of machine intelligence that humans created will be underway. There will no longer be any clear distinction between humans and computers. Most conscious entities will not have a permanent physical presence. Life expectancy will no longer be a meaningful term in relation to intelligent beings.

Small wonder Kurzweil expects a growing discussion about the legal rights of computers and what constitutes being "human." Direct neural pathways will have been perfected for high-bandwidth connection to the human brain. A range of neural implants will be available to enhance visual and auditory perception, and machine-generated literature and multimedia material will be widespread.

"The Age of Spiritual machines" surpasses most futuristic predictions of sci-fi writers and technologists. Scientists and programmers may not be the best judge of the nature of artificial digital intelligence. Some input from biologists and neurologists might have been useful. Sometimes, Kurzweil's predictions read like the numbingly familiar, gee-whiz techno-hype that infect mass media discussions of the Internet.

Yet Kurzweil is someone to be taken seriously - no nutty academic or cyber-guru. MIT named him inventor of the year in 1988, and he received the Dickson Prize, Carnegie Mellon's top science award, in 1994.

Caution is still in order. Kurzweil's earlier predictions about the Net and chess were short-term, and thus much more cautious and feasible. Only the new bio-digital species will know if these visions turned out to be right.

Predictions about the future of technology have a checkered history, itself a cautionary tale for futurists. Walt Disney was convinced we'd be whizzing back and forth to Saturn on weekends by now. We were surely supposed to be controlling the earth's climate rather than worrying about holes in the ozone layer. And whatever happened to cancer cures and hover cars?

But it's hard to find any parallel with the history of computing. The growth of digital machines suggests that the future of computers should be taken more, not less, seriously. "The Age of Spiritual Machines" is a wake-up call. It reminds us that the relationship between human beings and the remarkable machines they've invented is almost sure to change radically. Perhaps it's time to start thinking seriously about how.

Buy this book here.

You can e-mail me at jonkatz@slashdot.org.

260 comments

Techno-hype. (0)

Anonymous Coward | more than 15 years ago | (#2032102)

Sounds like something straight out of William Gibson. He didn't know much about computers either.

People already hate this idea, and it isn't even possible yet. If computers became smarter than us, I'd consider it much more likely that the 80% computer illiterate would take Louisville Sluggers to the "brainy" computers.

What does it take to have an original thought? (0)

Anonymous Coward | more than 15 years ago | (#2032103)

Resistance is futile.

The line (0)

Anonymous Coward | more than 15 years ago | (#2032104)

Hmm.. looks pretty much like what I've been waiting for...
Let's see if I manage to get one of these into myself... the human lifespan is pretty short, y'know.
Anyhow, if I get the chance, I'll be first in line for one of these.

The Future (0)

Anonymous Coward | more than 15 years ago | (#2032105)

I don't want my brain integrating with Windoze 2100.

This all assumes we're still here after 2000 anyway... :)

Computer thought vs Human (0)

Anonymous Coward | more than 15 years ago | (#2032106)

The thought processes of computers have been developing differently than humans'. We don't really want to make a whole bunch of digitized humans; we already have humans. Maybe in the future a virtual human will be created, but there will only be a couple, for the novelty will wear off. Computers will be developed along different lines to complement humans, not copy them. As the article says, the two may merge, rather than one taking over the other.

That's my thought
Weasel23

Offtopic (0)

Anonymous Coward | more than 15 years ago | (#2032107)

You're offtopic.

All I want... (0)

Anonymous Coward | more than 15 years ago | (#2032108)


...is an RJ-45 jack in my head.

-Joe Merlino

Oh and... (0)

Anonymous Coward | more than 15 years ago | (#2032109)

Well, it's both of theirs, isn't it? Clarke wrote the book, Kubrick directed the film, and they collaborated closely.

Silly bacteria, when will you learn? (0)

Anonymous Coward | more than 15 years ago | (#2032110)

The notion of reproducing ourselves in silicon is, I think, a dream which will never be fulfilled. To understand how the human brain works (and thus replicate it artificially), we would have to be able to view ourselves from outside the context of our own consciousness.

Godel expressed it tersely by observing that mathematics cannot hope to explain the universe because the universe contains mathematics. Thus the set of all things outside the universe by definition does not contain mathematics and we cannot express them.

I think this notion also applies to our understanding of how our brain works. Call it "spirituality" if you want, but I believe that it is simply impossible for any human being (or even a collection of them) to understand how we think, since the act of thinking is contained by our brains.

I believe we can enhance our cognitive abilities using computer technology, but replacing it or merging with it is simply out of the question, since we cannot understand ourselves completely.

No Subject Given (0)

Anonymous Coward | more than 15 years ago | (#2032111)

"Professing themselves to be wise they became fools". If science teaches us anything of late it is that the more we know, the more we know we don't know. A faithful reproduction of the human mind above and beyond that of the Creator of the entire universe is a goal whose boundaries will prove to lie beyond infinity. For example, although we have mastered flight and can fly higher and faster than anything God created on this earth we still cannot faithfully reproduce the skill and the grace of a swallow or the nimbleness of a dragonfly and I'll wager we never will.

Underestimating evolution ... (0)

Anonymous Coward | more than 15 years ago | (#2032112)

Actually to an anthropologist, computers are part of our evolutionary ascent.

No Subject Given (0)

Anonymous Coward | more than 15 years ago | (#2032113)

I have no idea how technology will develop, but

"By 2019, there will be almost no human employment in production, agriculture, or transportation, yet basic life needs will be met for the vast majority of the human race"

I find hard to believe. Think about how our socio-economic system is set up. It's hard to see that happening in America and Europe, let alone globally.

AI won't advance that quickly (0)

Anonymous Coward | more than 15 years ago | (#2032114)

There are very hard AI problems that need to be solved (or finessed) before Kurzweil's predictions can occur. In the 1950s and 1960s, the basic problems of natural language understanding, machine learning, planning, perception (e.g., vision), and commonsense representation and reasoning were all established. Since then, progress has been made, but it is very slow. For example, much of my recent research has been the analysis of algorithms and representations that were developed in the 1950s and 1960s. In 40 years, we now have machines that play chess at a grandmaster level, we have passable speech recognition, and we can scan bar codes at the grocery store pretty well. That is progress, but not very big steps toward solving the hard problems.

Tom Bylander
bylander@cs.utsa.edu

Too lazy to create an account.

Might we see Iain M. Banks' Culture in its infancy (0)

Anonymous Coward | more than 15 years ago | (#2032115)

SF author Iain M. Banks created a universe where the civilization (the Culture) is run by the Minds, AIs with capacity far beyond human grasp. It is a positive universe, but I think that it will be a very hard and rough way to get there.

Another SF novel I read recently that does not paint a horror scenario (at least not from advanced AIs built by humans) is VAST by Linda Nagata. In this universe a human being is its mindstate. Humans can give up their bodies and live completely in a VR world.

What I want to say is that it is at least thinkable that AI systems and humans can live in symbiosis.

I recommend both authors. The books are good reading and give food for thought.

Turing Test (0)

Anonymous Coward | more than 15 years ago | (#2032116)

Humans don't have respect for natural life, let alone artificial life.

Uh, care for some Moore's Law, John Katz? (0)

Anonymous Coward | more than 15 years ago | (#2032117)

Processor speeds double once every 18 months. This trend will continue until the year 2005, when it becomes impossible to continue in silicon. The brain of the honeybee can execute fp instructions to the tune of 100,000,000,000 GFLOPS. Computers as smart as the human brain indeed; they should try simulating a bug before striving for such lofty goals.
To suggest we are "about to become one with our computers" demonstrates clearly what occurs when a science fiction writer fancies himself a scientist.

Godel's theorem is good, but (0)

Anonymous Coward | more than 15 years ago | (#2032118)

I would suggest "Godel, Escher, Bach" by Douglas Hofstadter.

'cuse any misspellings

AI wouldn't believe in evolution (0)

Anonymous Coward | more than 15 years ago | (#2032119)

If anyone knew a thing about the 2nd law of thermodynamics they wouldn't believe in evolution. Evolutionists have more blind faith to believe in evolution than any other religious belief requires.

Summed up, it is: everything degenerates; it does not spontaneously develop or advance. We see examples of this everywhere, e.g. paint decays to dust, people age and die.

Any careful study of evolution THEORY shows the inaccuracies, the falsifications, that are believed even after being disproved. It has been the single largest source of fraud in the scientific community.

Evolution vs. Cybernetics? (0)

Anonymous Coward | more than 15 years ago | (#2032120)

We've almost done away with natural selection; if we wish to progress, we must modify ourselves. As for merging removing 'free will', the 'free will' we have now is only an illusion, unless you are betting on a soul, which I assume most slashdotters aren't.

-AC

"20 years from now..." (0)

Anonymous Coward | more than 15 years ago | (#2032121)

Damnit, Katz, research first, THEN write.

"Experts" have been predicting human-scale machine intelligence "in 20 years" for the last 50 years.

You forgot something (0)

Anonymous Coward | more than 15 years ago | (#2032122)

Quantum computers in theory allow you to fit more bits per bit into a bit. Even if we get bits down to a single atom, we could keep Moore's law going for a good while yet.

Meanwhile computer programmers are getting better at having computers act like people to some extent. At what point is the simulation complex enough to qualify as a person? If you can't tell, does it matter if it's not self aware? Is there any way to tell if it is self aware? That stuff's still a good bit off but it's undoubtedly coming.

punctuation (0)

Anonymous Coward | more than 15 years ago | (#2032123)

Jon when are you going to stop using that damned Mac?

Your article is still stuffed with question marks where apostrophes are supposed to go.

Please go write for MacWorld. Or wouldn't they take you?

Underestimating evolution ... (0)

Anonymous Coward | more than 15 years ago | (#2032124)

Why can't an intelligence understand itself? All we need to do is continue to study the brain in ever-greater depth and detail, and increase the computer power available to simulate it. Assuming the universe isn't random (as I do), the solution is out there somewhere. Anthropic principle: If this problem can't be solved, we wouldn't be here.

Speed is not the only factor (0)

Anonymous Coward | more than 15 years ago | (#2032125)

Currently, computers are told by us humans what is right and what is wrong

Correct me if I am wrong but isn't that what your childhood was like?

Diminishing Returns (0)

Anonymous Coward | more than 15 years ago | (#2032126)

The problem with Moore's law in general is that it disregards the fact of diminishing returns and that you can only squeeze so many transistors in a small area before QM interactions between electrons start messing you up. Of course molecular circuitry will work just fine. The only question is will things get really cool or just kind of terminate like the telephone, the automobile, the pencil, etc.
-Rich

No Subject Given (0)

Anonymous Coward | more than 15 years ago | (#2032127)

Facts are just solidified opinion....

Not just yet... (0)

Anonymous Coward | more than 15 years ago | (#2032128)

First: You can double all the time estimates, Moore's law may well be 24 months but the culture takes a lot longer to get used to new ideas.

Second: You can radically revise upward the amount of resistance from the conservatives...look at the furor over the non-issue of abortion.

Third: What makes you imagine that super-smart computer intelligences would want to have anything to do with us?

FUD FUD FUD and bloody stupid at the same time. (0)

Anonymous Coward | more than 15 years ago | (#2032129)

1. Start by assuming the universe is not based on random factors. Therefore all intelligences are different, because we are not all mental clones of each other. Therefore there is a difference between the intelligence of any 2 people. Once these differences are discovered and a scale of measurement is decided upon, we'll have a "true" IQ test.

2. Not *yet*, maybe. Starting from the same assumption, we can show that there must be a finite and definite set of rules for every process in the universe, including intelligence. Find the rules, program them into a computer, and off we go!

Moore's law: Who says we can't have subatomic transistors? Ever heard of quantum computing? Not only is it far smaller and faster than a modern machine, but it works in a fundamentally different way so that some operations may be faster by several orders of magnitude.

How the brain works: It must work *somehow*, or it wouldn't work at all. Precise and exact application of scientific methods will eventually solve all problems. Perhaps we'll invent atomic-scale electrodes that can be placed, millions at a time, on individual neurons. From there, it's equivalent to reverse-engineering a complex machine.

NP-completeness (0)

Anonymous Coward | more than 15 years ago | (#2032130)

Also, a lot of the progress has just been stonewalled by single, insurmountable problems. Break those down and you'll have all kinds of results. How does the brain work? Learning? Integration of senses into the neural net?

No Subject Given (0)

Anonymous Coward | more than 15 years ago | (#2032131)

We have also created machines that can cut apart a living cell and remove its pieces. Machines that can smash mountains, or build them. We have flown across space and to the bottom of the oceans. I don't believe that the "more we don't know" is infinite. One day we will know everything.

Eon, Greg Bear, and the non-corporeal citizen (0)

Anonymous Coward | more than 15 years ago | (#2032132)

In the book Eon (and its sequel, Eternity), Greg Bear talks about corporeal (with bodies) and non-corporeal (without bodies) citizens. The non-corporeal citizens occupy a much greater percentage of the citizenship than the corporeal. Albeit, it is science fiction... but the idea has always intrigued me, and it looks like "The Age of Spiritual Machines" picks up on some of this.

Anyway, there has always been one fundamental flaw, I believe, with people who say computers will eventually be more intelligent than humans. That flaw is that computers are inherently digital creations, and humans are not. Neurons are not merely ON or OFF. Human memory is not stored as a series of bits. I think the question of human vs. machine intelligence goes much, much deeper, and you have to ask yourself what consciousness is comprised of. I believe conscious thought is comprised more of an awareness of energy around us as opposed to the arbitrary storage of sensory data.

We will find out someday that as intelligent as computers get, they are not actually "self aware" the way humans are. Therefore, humans will always be "one up" on the systems they create.

When Moore's Law breaks down (0)

Anonymous Coward | more than 15 years ago | (#2032133)

We are quickly approaching the point at which silicon can no longer support rapid advances in chip fabrication. There will need to be a huge paradigm shift at that point in order to continue advancing forward.

We are already seeing the size of the CPU grow, even though the pathways are shrinking. Hold a Pentium II up next to a Cyrix 486 and you'll see what I mean.

Until we make some major breakthrough that allows us to start making circuits smaller than silicon will allow, I think SMP boxes will become more commonplace. As well, massively parallel computing technologies will emerge into the consumer market. Department clusters will be commonplace, where people use their colleagues' spare cycles.

Machines will start getting larger. CPUs the size of a small plate will not be uncommon in midrange systems. However, CPU sales will drop off in the home consumer market when we hit the wall, just as modem sales have slumped after hitting the 56K wall.

There will be a movement in the software community to trim the fat and start writing efficient software again, like we used to in the early to mid 1980s. This is already starting. While big bloated desktop environments like KDE, Gnome, and Windows 98 serve as eye candy to those with a nearly bottomless hardware budget, others who are content with a low-end Pentium system are looking towards lighter-weight GUIs and more efficiently coded applications. While storage will continue to be cheap, developers will code expressly for CPU efficiency.

All in all, I think that hitting the wall will be good for us. It will cause us to make the most of what we have, instead of getting caught up in the Bigger/Better/Faster/More mindset.

AI wouldn't believe in evolution (0)

Anonymous Coward | more than 15 years ago | (#2032134)

Wow, you are impressive, if you can simulate evolution when all the scientists in the world couldn't put Humpty together again.

flying car (0)

Anonymous Coward | more than 15 years ago | (#2032135)

Kinda hard to read; my flying car hit some turbulence on the way to work.

Interesting, - ramblings to follow (0)

Anonymous Coward | more than 15 years ago | (#2032136)

Well, a lot of people have said that it can't happen. I think it can and will. No, we still don't know how the brain works, but we're getting closer. Yes, the brain is massively parallel, but there is a limit; I think I've heard that each neuron has a limit of about 13 connections to other neurons. The physical limits on the size of silicon logic are being reached, but the design of the logic circuits can be changed to improve speed.

Currently, neural nets are being programmed in procedural languages, which is pretty inefficient. (I know that Lisp et al. are not procedural, but at some point it is executed as machine code, which is.) If we can etch a neural net on silicon, we will be a long way towards increasing the speed of processing.

The most difficult part of AI may end up being the instinct, or exactly how the neurons are arranged. but that probably won't be an insurmountable hurdle. As of two years ago, we were able to completely model a cockroach (sp?). That may not be intelligence, but it is a start. Eventually we will get bees, lizards, mice, dogs and humans.
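
For what it's worth, the serial bottleneck described above looks roughly like this in software - each neuron's weighted sum is computed one multiply-add at a time, which is exactly the loop a net etched on silicon could run in parallel. A minimal Python sketch; the sizes and sigmoid activation are illustrative assumptions, not a claim about any particular system:

    import math

    def layer_step(inputs, weights, biases):
        # One update of a fully connected layer, neuron by neuron.
        # Hardware could evaluate all of these sums simultaneously.
        outputs = []
        for w_row, b in zip(weights, biases):               # serial over neurons
            total = sum(w * x for w, x in zip(w_row, inputs)) + b
            outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid activation
        return outputs

    print(layer_step([0.5, -1.0], [[0.8, 0.2], [-0.3, 1.1]], [0.0, 0.1]))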

AI wouldn't believe in evolution (0)

Anonymous Coward | more than 15 years ago | (#2032137)

Is this just a troll, or are you really some blazing genius who has seen past the muddled reasoning of the entire scientific community? I wasn't aware that thermodynamics applied to cumulative chemical reactions. Maybe if I slept in a pyramid and wore a power crystal, I could gain your rare insight.

2nd (0)

Anonymous Coward | more than 15 years ago | (#2032138)

"It is well known that, left to themselves, chemical compounds ultimately break apart into simpler materials; they do not ultimately become more complex. Outside forces can increase order for a time (through the expenditure of relatively large amounts of energy, and through the input of design). However, such reversal cannot last forever. Once the force is released, processes return to their natural direction - greater disorder. Their energy is transformed into lower levels of availability for further work. The natural tendency of complex, ordered arrangements and systems is to become simpler and more disorderly with time.

Thus, in the long term, there is an overall downward trend throughout the universe. Ultimately, when all the energy of the cosmos has been degraded, all molecules will move randomly, and the entire universe will be cold and without order. To put it simply: In the real world, the long-term overall flow is downhill, not uphill. All experimental and physical observation appears to confirm that the Law is indeed universal, affecting all natural processes in the long run.

Naturalistic Evolutionism requires that physical laws and atoms organize themselves into increasingly complex and beneficial, ordered arrangements. Thus, over eons of time, billions of things are supposed to have developed upward, becoming more orderly and complex.

However, this basic law of science (2nd Law of Thermodynamics) reveals the exact opposite. In the long run, complex, ordered arrangements actually tend to become simpler and more disorderly with time. There is an irreversible downward trend ultimately at work throughout the universe. Evolution, with its ever increasing order and complexity, appears impossible in the natural world."

Ah, yes. And according to my calculator (0)

Anonymous Coward | more than 15 years ago | (#2032139)

in 2200, with the current growth of population, we will turn into a ball of bodies with a radius of a lightyear, expanding in all directions at 99% speed of light.

But maybe that will be stopped in 2150, when a computer will be the size of Jupiter, consuming all the energy of the sun, and with an IQ of 1E+245, will eventually find a way.

Unless the computers of 2050, built on proteins and lipids, carbohydrates and minerals, will let their libido take command over their senses and make a holocaust fighting for land, power, sex and wealth. I look forward to seeing it.

What's Godel got to do with it? (0)

Anonymous Coward | more than 15 years ago | (#2032140)

Two observations:
1. No matter how advanced hardware becomes, it is far from obvious that it will acquire functions similar to those of a brain. Let's grant that intelligence, consciousness, etc. are somehow created within an extraordinarily complex device that ultimately obeys the laws of physics. Simply having more advanced hardware to work with doesn't bring us any closer to knowing what software is needed to make a computer conscious. I think these predictions depend much more on advances in neuroscience than on advances in chip design.

2. I don't think Godel's incompleteness theorem has much bearing on this issue. Godel's theorem deals with the relationship between the completeness of a logical system (in other words, Can All True Statements Be Proven To Be True By Deduction?), and its consistency (in other words, Do Validly Deduced Statements Ever Contradict Each Other?). Godel proved that no logical system can be both complete and consistent with respect to describing number theory, and I think that the same proof has been made for Euclidean geometry. In other words, "There Must Exist True Statements That Cannot Be Proven".

Whether or not Godel's proof is meaningful outside of mathematical systems is unclear (at least to me). However, it certainly seems like a giant leap of faith to assume that Godel's proof means that human consciousness is "A True Statement That Cannot Be Proven".
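
For reference, the standard statement the posters are arguing over can be written out (in the Godel-Rosser form, which requires only plain consistency):

    % Godel-Rosser first incompleteness theorem: for any consistent,
    % recursively axiomatizable theory T that interprets basic
    % arithmetic, there is a sentence G_T that T can neither prove
    % nor refute.
    \[
      T \nvdash G_T
      \qquad \text{and} \qquad
      T \nvdash \lnot G_T .
    \]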

Open or Closed 2nd law still applies (0)

Anonymous Coward | more than 15 years ago | (#2032141)

and I quote on open or closed

"To create any kind of upward, complex organization in a closed system requires outside energy and outside information. Evolutionists maintain that the 2nd Law of Thermodynamics does not prevent Evolution on Earth, since this planet receives outside energy from the Sun. Thus, they suggest that the Sun's energy helped create the life of our beautiful planet. However, is the simple addition of energy all that is needed to accomplish this great feat?

Compare a living plant with a dead one. Can the simple addition of energy make a completely dead plant live?

A dead plant contains the same basic structures as a living plant. It once used the Sun's energy to temporarily increase its order and grow and produce stems, leaves, roots, and flowers - all beginning from a single seed.

If there is actually a powerful Evolutionary force at work in the universe, and if the open system of Earth makes all the difference, why does the Sun's energy not make a truly dead plant become alive again (assuming a sufficient supply of water, light, and the like)?

What actually happens when a dead plant receives energy from the Sun? The internal organization in the plant decreases; it tends to decay and break apart into its simplest components. The heat of the Sun only speeds the disorganization process."

When I grew up (0)

Anonymous Coward | more than 15 years ago | (#2032142)

in the 1970s, I read books about society in the year 2000. We would all live in space, eat only pills, wear antennas on our heads, be 12" tall, have alien friends in space and fly around in saucers. I'm still waiting for a diadem-headset cellular phone. All the rest I already have.

This is great news, though. Moore's Law predicts that I will stop complaining about my computer in 2008. And imagine: in 17 years I can play Quake at 16000x12000, 320-bit colour, 1000 fps!

Just write an autobio, already (0)

Anonymous Coward | more than 15 years ago | (#2032143)

I liked the book, but what I found boring was the repetition of "oh, here's a technology that will take over the future" (i.e. voice recognition, translation software, mind-music) followed by "and coincidentally, here's the Kurzweil XYZ Company that I created to invest in it."

He's an interesting guy, with lots of success behind him. If he wrote an autobiography, I'd read it. But it's a bit unfair to wrap a bunch of patting himself on the back under the guise of an AI/philosophy text.

AI wouldn't believe in evolution (0)

Anonymous Coward | more than 15 years ago | (#2032144)

Huh?

It's pretty easy to demonstrate evolution in the real world. I can think of two examples off the top of my head: bacteria and communities of insects becoming more resistant, and eventually immune, to chemical toxins (antibiotics and insecticides). It's demonstrable, and just not liking it doesn't make it go away.

Your statement shows a misunderstanding of both the theory of thermodynamics and the theory of evolution. In an open system (the earth), if enough free energy is available (the sun), you can buck entropy for a limited time. Once your energy source runs out (brown dwarf - but the earth will be incinerated, so who cares), the open system (charred ball) will succumb to entropy very quickly.

The Death of Flesh (0)

Anonymous Coward | more than 15 years ago | (#2032145)

Build me a robot body, scan my brain and reconstruct its connections in a computer simulation, install it in the robot body.

Great, everlasting life.
Except it's not my everlasting life, it's the simulation's.

Like it or not I am going to die in this fleshy cage. When I do go I'm sure as hell not going to leave a simulation of myself to mourn me.

What limits? (0)

Anonymous Coward | more than 15 years ago | (#2032146)

They've been saying Moore's law is going to run into a brick wall (10 years out) for at least 20 years. They've been predicting a brick wall for photolithography for at least 10 years ("the limit is the wavelength of light" - by which they meant visible light, forgetting UV etc.)

You talk about the speed of "a" serial processor. MPP now means thousands of processors - why not billions? Umm, Beowulf ring any bells?

We can simulate living things now (cockroach, worm, paramecium or whatever); monkey-level intelligence is a slam-dunk.

The interesting questions are:
1. Are humans really different from other animals?
2. Can computers have emotions?
3. Can computers create art (Linux kernel, Mona Lisa, etc)

My answers are no, yes, and yes - but not in my lifetime (I hope).

Honeybee reference (0)

Anonymous Coward | more than 15 years ago | (#2032147)

"Does God Place Dice? Mathematical concepts of chaos theory", Ian Stewart afaik.

natural selection != evolution (0)

Anonymous Coward | more than 15 years ago | (#2032148)

the examples you cite are of natural selection, not macro-evolution/mutations.
the two are not the same, just as industrial melanism (the peppered moth example) is not an example of evolution in action.

A few things... (1)

Enry (630) | more than 15 years ago | (#2032232)

1) AI is going to need some *major* advances in the next 20 years, ones that I don't think will happen fast enough. For one thing, AI now is still the same as it was 20 years ago.
The best the computer could do was learn from its mistakes, and anyone who ever took a programming class can write something like that. No one is still quite sure how the brain works, and until we understand that, we can't duplicate its functionality.
Take a look at a real AI example: speech recognition. It's been around for years, and the technology and accuracy are improving, but it still can't handle the context of words. Go try the latest version of Dragon Dictate as an example.

2) Technology is starting to hit its limits. NPR had a report a few months ago about the fact that with chip manufacturers using smaller and smaller features, there's no way to etch them without using X-rays (someone with a better knowledge of this back me up here). This indicates that Moore's Law may be running into a brick wall in the next few years. With a limit in the growth of CPU horsepower, you'll start to see limits on AI, since you need a lot of CPU speed to try and emulate the human brain.

Nothing new.... (1)

gavinhall (33) | more than 15 years ago | (#2032233)

Posted by HolyMackeralAndy:

Over ten years ago I saw Timothy Leary speak. He discussed this very thing. Nothing new.....

EEEK! Borg? (1)

gavinhall (33) | more than 15 years ago | (#2032234)

Posted by jonrx:

:)

Go study some logic first (1)

bluGill (862) | more than 15 years ago | (#2032237)

Before you believe this, go to a university and study logic. I did it in math, but I understand philosophy has similar studies that aren't as difficult. All these arguments need to be reconciled with Godel's incompleteness theorem.

I think electrical engineers will tell you that Moore's Law isn't expected to hold out that long, because the size of atoms is larger than the predicted feature size of a chip. Not sure exactly here since that isn't my field.

I welcome the days when computers can do the boring, difficult tasks. (There will still be farmers, though; farmers have never made money, so robots killing any possibility of money won't stop them.)

Musicians regularly work with computers to create music. I prefer the sound of acoustic music, though, and I have heard musicians who cannot play a keyboard and never will be able to, because they are outplaying a mechanical piano and computers can't capture the feeling of a real piano.

Computers will help. The disabled will love the new mobility. The rest of us will enjoy other benefits. They won't take over. They can't. Go study some logic.

I suppose that this book will appeal to those who know nothing about technology. They buy (and believe) the National Enquirer, the Weekly World News and similar papers.

Nitpicks and humbug in general (1)

Phaid (938) | more than 15 years ago | (#2032238)

First, a couple of facts:

It's not Kubrick's 2001 - it's Arthur C. Clarke's. Kubrick just did the movie.

Second, none of these predictions are new, and they're not particularly original. Ever read the cyberpunk authors, like William Gibson, Bruce Sterling, or Walter Jon Williams? They all predicted these things in the early '80s (or even before). I'm sort of underwhelmed by someone who came along nearly 10 years later and "predicted" these very same things, when in fact we were well on our way to fulfilling them and the "trends" weren't very hard to see at all.

The term Neo-Luddite is lifted from William Gibson, by the way. (but then, so is the term Cyberspace, so that's probably OK).

"The Age of Spiritual machines" surpasses most futuristic predictions of sci-fi writers and technologists.

Wrong. It apes them, and not very well from the sound of things. Sounds like Jon Katz needs to do a bit more reading before extolling the virtues of this kind of literature. It raises some interesting points, yes, so it's fodder for good discussions, but at the same time it's not original and doesn't seem to bring anything new to the table.

I'll check out a used copy sometime.

Brain (1)

Andrej Marjan (1012) | more than 15 years ago | (#2032239)

Go read up on neural nets. Go read up on cognitive science -- yes, all the disciplines besides AI.

You'll notice something interesting: the engineering end run attempted with AI has been failing miserably. Strong AI - which is what you're talking about - has been following in the footsteps of cognitive psychology and philosophy since its outset: it has repeated all the same mistakes, fallen into all the same fundamental quandaries... At least it's crystallized the frame problem.

Also of interest is that many of the big names behind computational functionalism as a theory of cognition are now jumping ship. These would be the founders of the field.
--

Human intelligence... (1)

Phil Gregory (1042) | more than 15 years ago | (#2032240)

When was the last time that you saw an object move in digital space?
Just now, probably. The best of our current understanding indicates that the universe is, at its most basic level, digital, with the "sampling rate" being Planck time. I happen to think that digital can be better than analog, if the bitrate is such that I can't tell the difference between it and analog.

I do think it'll be a while before we understand how consciousness works.


--Phil ("The Meta-Turing test defines a species as intelligent if it seeks to apply Turing tests to devices of its own creation.")

Get rid of the Gas Bag (1)

heroine (1220) | more than 15 years ago | (#2032241)

This guy obviously is bored.

FUD FUD FUD and bloody stupid at the same time. (1)

Jon Peterson (1443) | more than 15 years ago | (#2032242)

Anyone who says that the human brain is 'a million times more intelligent' than a computer is a twit, end of story.

1. Just how do you measure intelligence? (See the SJ Gould book "The Mismeasure of Man" for discussion of why you can't. And the recent /. IQ vote.)

2. This assumes that computers are intelligent AT ALL. They are no more intelligent than a rock, just more useful for some tasks.

While I'm at it, Moore's Law is heading for the rocks, because it's pretty obvious you CAN'T keep doubling the transistors/area forever, unless you invent sub-atomic transistors, which seem just a little unlikely. We are already nearing the barrier caused by the indivisibility of atoms - some storage devices have a bit density approaching the atomic density of the medium (IBM are getting close to one bit stored on just a few molecules)

But MOST OF ALL:

NO-ONE has even a vague shadow of a theory of how the brain actually works. We have no idea what it does, or how it does it, much less how to start imitating it. Functional PET scans vaguely indicate that certain bits of it are more concerned with some functions than other bits; that's not very good, is it?

Kurzweil can join Negroponte in the pit of fools who predict the exciting so as to get fame and media attention. Twit.


P.S. Just don't get me started on how genetic algorithms and neural nets are going to save the world of AI, because they probably aren't.

Wrong. (1)

Jon Peterson (1443) | more than 15 years ago | (#2032243)

"Obviously the amount of logic necessary for human intelligence fits inside the human brain."

You assume:
1. Logic is a necessary and sufficient requirement for intelligence
2. The mind is the brain

Today we compare the brain to computers.
Before that we compared the brain to clockwork.
Before that we compared it to a windmill.

We don't seem to be learning. There is nothing about consciousness that suggests it requires a physical mechanism. There is simply a numerical correlation (not necessarily causal) between the existence of minds and the existence of brains.

Neither is proven or obvious. This is why more scientists should read philosophy (and vice versa)

future check - (1)

jafac (1449) | more than 15 years ago | (#2032244)

hm. it's 1999. is it the future yet?

Let me go down the checklist. . .

flying cars? nope
robot maid? nope
eternal youth? nope
matter transport? nope
cashless economy? nope
truth detecting machines? nope

Gee. Maybe by 2020, we'll have flying cars, and maybe Mr. Spacely will give me a raise so I can buy one. . .

Price Check (1)

Ralph Bearpark (2819) | more than 15 years ago | (#2032263)

The "Buy this book here" Amazon link is oddly directed to an overpriced "The Age of Intelligent Machines" rather than the more reasonably priced subject of the review "The Age of Spiritual Machines".

Possible causes include a) /. trying to boost commissions from Amazon b) editor put to sleep by Katz review or c) plain-old mistake. Naturally I'll plump for c) :-)

For "Age of Intelligent Machines":
Amazon $35
BarnesAndNoble $35
Shopping $19.25
Kingbooks $21.50
1BookStreet $24.75

For "Age of Spiritual Machines":
Amazon $15.57
BarnesAndNoble $14.97
Shopping $16.86
Spree $14.97

(Interesting to see that Acses now seems to use Shopping too. Still no BookPool or Spree tho.)

Regards, Ralph.

So was "From Earth to the Moon". (1)

Squeeze Truck (2971) | more than 15 years ago | (#2032265)

And look what that got us! Pissed-off moon men, that's what!

Meat Machines (1)

Andy (2990) | more than 15 years ago | (#2032266)

I fail to see how a simplistic straight-line extrapolation of empirical "laws" like Moore's Law leads one to estimate the arrival time of computer-based intelligence.

How about this for a law (Andy's Law).

"He who can program a thing, understands a thing."

The converse is also true:

"He who understands a thing, can program a thing."

We do not have even a rudimentary knowledge of the nature of intelligence. Until we do there will be no real AI.

Another silly prediction.... (1)

richieb (3277) | more than 15 years ago | (#2032268)

AI proponents have been predicting this stuff since the early sixties. They just keep changing the date.

It's not the speed of processors or the amount of memory that needs to be compared to the human brain, it's the software. And nobody has any idea how to write it. We can't even agree on what intelligence is (see the IQ discussion), and somehow he expects to program intelligent machines.

I also predicted in the early '80s that the Internet would be a big thing and that a computer program would become world chess champion. I just didn't write a book about it. It was obvious then.

As far as jamming with "cybernetic musicians" goes, I'm already doing that with drum machines and my computer.

...richie

P.S. For an interesting take on the philosophical problems of AI I suggest reading books by Stanislaw Lem.

A Very Bad Idea (1)

Peter La Casse (3992) | more than 15 years ago | (#2032269)

Augmenting humans using implanted computers, that is. Vastly increasing human capabilities could be very bad, because human beings will still make mistakes. As an illustration, if someday people are able to plug right into their cars and have super reflexes, etc., there will still be lousy drivers.

Essentially, I don't want anybody to have near-infinite knowledge without near-infinite wisdom, and nobody's perfected a way to prevent human beings from making bad decisions. Once somebody does, I'll be the first in line to have my own abilities massively augmented.

Responding to a Creationist troll..... (1)

Bret (5207) | more than 15 years ago | (#2032270)

Anyone who understands the 2nd law of thermodynamics knows that it only applies to CLOSED systems. The Earth is not a closed system. The energy released by the nuclear reactions in the Sun allows any system which can make use of that energy to decrease in entropy. Meanwhile, the Sun is increasing in entropy at a much faster rate. The net change is that the entire Solar System has increasing entropy, even though some individual portions of the Solar System have decreasing entropy.

Using your argument about the 2nd law, it is possible to prove that BIRTH is just a myth, because we can't create a new being out of the disorder of food, water, and air.
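
The bookkeeping behind this reply can be written out explicitly. A rough sketch with textbook round numbers (sunlight arriving at an effective temperature of about 5800 K, Earth re-radiating at about 255 K - estimates supplied here, not figures from the thread):

    % Earth as an open system: energy Q arrives as high-temperature
    % sunlight and leaves as low-temperature infrared, so total entropy
    % still rises even while local order on Earth increases.
    \[
      \Delta S_{\text{total}}
        \approx \frac{Q}{T_{\text{Earth}}} - \frac{Q}{T_{\text{Sun}}}
        = Q \left( \frac{1}{255\,\mathrm{K}} - \frac{1}{5800\,\mathrm{K}} \right)
        > 0 .
    \]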

Turing Test (1)

Phil Wilkins (5921) | more than 15 years ago | (#2032272)

Who says it has to be human? One word, Furby.

(Wish I could remember the name of the robo-kitty)

the irony of it all.. (1)

SuperGeek (6272) | more than 15 years ago | (#2032276)

.. is that man seeks to become more machine by making machine more man..

-

FUD FUD FUD and bloody stupid at the same time. (1)

David Cramer (8711) | more than 15 years ago | (#2032288)

Artificial Intelligence is Stupid!

Talk about the pit of fools. I once got into a violent argument with Marvin Minsky at a conference in Vancouver a few years ago. My basic appraisal of Minsky is that he was quite excited by the possibility of creating computers that would take the place of human beings, because he had a very limited appreciation for what it meant to be human. I figure his Mom must have treated him badly or something.

You know the old saying that if all you've got is a hammer, every problem looks like a nail. Well, if Minsky had been a hammer maker, he would have called everyone Spike.

I also like to laugh at Microsoft's pathetic hyping of natural language interfaces. Fortunately for their competitors (apparently most of the rest of humanity, according to their legal team), their billions of dollars of research will be based on the same fundamental blunder, the belief that human thought -- and speech -- is just computer processing with a big wet mushy chip. The tragic part of this misperception is that it indicates just how dehumanizing technology can be to its chief acolytes; or did they start out that way?

Turing Test (2)

David Cramer (8711) | more than 15 years ago | (#2032290)

Let's first of all understand that Turing's famous test is the ultimate blonde of ideas, cute but vacant.

Turing himself suffered from an encrypted brain, having lost his privates key down the commode in a neighborhood pub during an ugly episode, which resulted in his having to goto the hospital for an enigma. When he was done, he looked like he'd been through a World War.

One big problem with his test for Artificial Intelligence is that it wasn't reflective enough. The question he didn't answer is what it would prove if a computer could spend an hour talking to Geraldo on the other side of the wall and not be able to tell it was a human being. And if Geraldo suddenly stopped talking in mid-sentence, would the computer assume its counterpart had Blue Screened, Sad Mac'd, or Core Dumped? And would it contribute to his annual support contract fees? Would they marry and spawn threads?

What he also didn't follow up on were some of the broader implications of the test. For example, what would it prove if Turing spent an hour talking to a computer on the other side of a wall and wound up lending it five quid? Artificial Stupidity?

Technohype *yawn* (1)

Joe Cool (9220) | more than 15 years ago | (#2032292)

Same thing was said thirty years ago.
AI is a dead end. Go with Stanislaw Lem; he mentioned "Artificial Instinct", which seems much brighter.

Alas the Shadows (1)

Zathras (9441) | more than 15 years ago | (#2032293)

Merging humans with technology will remove autonomy and free will. Even a slave today still has free will to some degree. You can tell me what to think. You can even brainwash me. But I can still dream.

The machine doesn't dream ...

---

Turing Test (1)

Cassius (9481) | more than 15 years ago | (#2032294)

It would behoove everyone to understand the simplest meaning of the Turing Test:
if you can't tell the difference, there is no difference
Of course humans won't have any respect for the artificial life they create until they anthropomorphize it somehow, by sticking the computers inside human-looking shells or giving them human names.

Interesting... (1)

malkavian (9512) | more than 15 years ago | (#2032295)

Well.. Most of what he writes, I consider destined to happen.. And I've felt that way for a long time (Artificial life was my specialist subject at Uni, and I've followed it since)..
The idea of packing transistors onto silicon, yes, I can see that hitting a limit very soon, as has been pointed out.. That's about the time that quantum devices take over. And that's a whole other kettle of fish..
It seems that people here forget about the other computational media available. Optical gateways, Bio-computers using neurons, quantum devices etc...
I agree that the law of doubling will fail soon, but I also consider that it'll result in the increasing of power by an order of magnitude. The same effect as leaving a horse and cart for a rocket engine.
As for intelligence... Who is really to define it?? It's stumped the greatest philosophers for many centuries now, and I think it'll carry on doing that, albeit more heatedly now, for centuries to come.
When you say that machines will take millions of years to evolve, or that they never will become sentient, consider our origins...
Small molecules that grouped together in a protein soup... That slowly learned how to replicate themselves, and form copies..
From there, in geological terms, the rise to sentience of the human race was quite fast.
And also, how do you rate the intelligence of humanity? We may be 'intelligent' now.. but do you consider cro-magnon man as intelligent?? And before that?
Where did we become intelligent?? At what point?
Is a Fish intelligent? If so, would a machine that has all the drives and behaviour of a fish be any less intelligent?
Machine learning will arise much much faster than did biological intelligence.
It's being nurtured carefully by a parent species.. Most of the people who have studied Alife have been surprised by the behaviour of their constructs.. Watching them behave in ways totally unexpected.
History is littered with people saying 'It can never happen, you're deluding yourself'... Flight was never possible.. Humans would never travel over 30 miles an hour, as that would prove fatal... The view that no weapon could be more powerful than the bow, as the destructive power would be truly unthinkable...
All commonly held beliefs at some points in time..
And relatively recently at that.
Currently, we exist in a society that views Alife as a threat; something to deny and deride. It's in a lot of human nature to destroy that which it does not understand.
In time, the next generation will grow up in a world where it is becoming commonplace (as is already starting to happen, at a truly basic level), so the concept will be less alien.
In the generation afterwards, it'll be accepted as a standard, and they'll laugh at the old views of the primitive society that couldn't comprehend every day life. Much as someone in the late 19th century couldn't comprehend an office worker going to work in the morning, sitting behind a computer, emailing documents halfway round the planet in seconds, and retrieving other information from another country in the same timescale.. then looking through the 'eyes' of a mechanical construct that sits on another planet, which mankind has put there to gather information.
We take this for granted. Two generations ago, this would have been unthinkable..
I'm not sure of the timescales presented in that passage... But I firmly believe that what is proposed in it is not only a possibility, but an inevitability.
As for the idea of human/computer cybernesis at the cognisant level restricting your thinking... I'd beg to differ..
Once we obtain that level of understanding of the brain that we can actually bond memories/thought patterns into understandable/transmittable patterns, we get the closest to telepathy/telempathy that is possible.. The sharing of experience and emotion through a 'computer' link. The ability to fast process thought.. Raising intelligence by orders of magnitude.
As for the machines deciding to 'dispose of us'.. I find it unlikely...
The most optimal survival pattern is co-operation.
Time after time, this has been proven, both in theory and test.
Humanity, sadly, still clings too hard to its simian origins.. Rationally, I believe most people understand the true value of co-operation, but psychologically they aren't equipped to live life fully in this way..
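
(The classic demonstration of this is Robert Axelrod's iterated prisoner's dilemma tournaments, where the forgiving tit-for-tat strategy famously beat more aggressive entries. A toy Python version - the payoff matrix and round count here are arbitrary assumptions, not Axelrod's exact setup:)

    PAYOFF = {  # (my move, their move) -> my score; 'C' = co-operate, 'D' = defect
        ('C', 'C'): 3, ('C', 'D'): 0,
        ('D', 'C'): 5, ('D', 'D'): 1,
    }

    def tit_for_tat(opponent_history):
        # Co-operate first, then copy whatever the opponent did last.
        return opponent_history[-1] if opponent_history else 'C'

    def always_defect(opponent_history):
        return 'D'

    def play(strategy_a, strategy_b, rounds=100):
        hist_a, hist_b = [], []            # moves made by a and b respectively
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_b)    # each strategy sees the other's history
            move_b = strategy_b(hist_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # mutual co-operation: (300, 300)
    print(play(tit_for_tat, always_defect))  # defection barely profits: (99, 104)
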
The hybridising of man with machine is a natural evolutionary step for a tool-using species.. We've used physical tools for all the years we've used physical force to achieve our wonders.
Now, we develop tools for the intellect.
The industrial revolution for the mind..
We've got as far as we have, because we're able to change.. And we'll make the next steps for exactly the same reason.
It's the point where we can become the greatest of our dreams, or the worst of our nightmares.
And sooner or later, we'll have to decide which..

Malk

Speed is not the only factor (1)

NYC (10100) | more than 15 years ago | (#2032298)

"The most advanced computers are still much simpler than the human brain, ..., Kurzweil predicts, with computers achieving the memory capacity and computing speed of the human brain by approximately the year 2020."

Speed and size are not the only relevant factors in intelligence. What computers lack is the ability to learn. How can a computer analyze the feedback from its own actions and determine whether the outcome was successful? Although many researchers are tackling this problem (neural nets, HMMs), any real learning is still far away. Currently, computers are told by us humans what is right and what is wrong. If we were able to provide computers with the correct response to every possible situation in the world, then they would be truly smart. Look, for example, at Deep Blue. How exactly did it beat Kasparov? By using brute force to examine all possible next moves. The outcomes of those moves were scored with predetermined weights established by chess grandmasters. Intelligence my ass....
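
(For illustration, a minimal Python sketch of the "brute force plus hand-tuned weights" recipe described above: search every line of play to a fixed depth, then score the leaves with weights a human chose. The Position interface and the piece values are hypothetical stand-ins, not Deep Blue's actual code.)

    PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}  # weights picked by humans

    def evaluate(position):
        # Static score: material balance from the side to move's point of view.
        return (sum(PIECE_VALUES[p] for p in position.my_pieces())
                - sum(PIECE_VALUES[p] for p in position.their_pieces()))

    def negamax(position, depth):
        # Exhaustive fixed-depth search: no learning, just lookup and arithmetic.
        if depth == 0 or position.is_terminal():
            return evaluate(position)
        return max(-negamax(position.apply(move), depth - 1)
                   for move in position.legal_moves())

    def best_move(position, depth=4):
        return max(position.legal_moves(),
                   key=lambda move: -negamax(position.apply(move), depth - 1))
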

Speed is not the only factor (1)

NYC (10100) | more than 15 years ago | (#2032299)

"Correct me if I am wrong but isn't that what your childhood was like? "

Exactly. My mind has the ability to learn. I am able to infer what is right/wrong now based upon what I learnt as a child.

Intelligent machines.... (1)

yogiBear (10125) | more than 15 years ago | (#2032300)

Does the alleged machine intelligence mean that we are going to get rid of Microsoft? Or will there be Windows BD (BrainDead) for intelligent machines, which will significantly decrease their intelligence so that "an average person" can use the machine?

mount /dev/brain (1)

snuh (10197) | more than 15 years ago | (#2032301)

Pretty cool stuff. We've certainly been thinking about it for a very long time. I just hope the machine would want to leave my shared wetware when we're done 'sharing'. Other than that, hook me up.

"The Age of Spiritual Machines" is a wake-up call" (1)

K. (10774) | more than 15 years ago | (#2032304)

It's about time you got one.

Other posters have already pointed out most of the fundamental problems with the predictions made in this book. I'd just like to add that in the event of such technologies being developed, we should all chip in and buy Jon Katz a critical-thinking module.

K.
-

Von Neumann is dead (1)

Hish (11070) | more than 15 years ago | (#2032306)

The argument that Moore's law is going to break down, and that therefore there will not be artificial intelligence, is wrong. There are still many directions in which we can expand computation. Chips right now are mostly 2-D; what happens when we can make them 3-D? We have also only begun to explore multiprocessing platforms. Obviously the amount of logic necessary for human intelligence fits inside the human brain. It doesn't matter if it takes 40 years instead of 20 to create a computer capable of intelligent thought.

Something is going to happen when that much logic is put into a single computer, or network of computers. It won't be the best scenario, and it won't be the worst. Perhaps by thinking about it now we can make the transition to whatever comes a little easier.

2090? Will there be one? (1)

PMoonlite (11151) | more than 15 years ago | (#2032307)

Ha. We are scheduled to reach the Singularity in the '30s. Barring infrastructural meltdown in Y2K, of course.

Techno-hype (1)

Natedog (11943) | more than 15 years ago | (#2032322)

I won't say this will never happen, because you never know (the only thing I would be willing to predict will never happen is the ability to create or destroy energy/matter). But, here are two reasons I don't expect this to happen in the near future.

1) The exponential growth in the speed of CPUs *must* eventually hit a physical limit, so we can't predict how fast CPUs will be in 20 years. Even if some sort of superconductor or fiber optics were used, and even if the bit logic were at the atomic level, we would still hit a physical limit.

2) Even if CPUs were fast enough (which, per the above, they *may* be) - you still have the software problem. So a computer beat someone at chess - big deal. The logic for "learning" chess had to be created by someone (i.e., not another computer), and it only learns from mistakes. Further, this AI can *only* learn chess - if someone wanted it to learn checkers, they would have to program that into the computer. For AI to reach that of humans, it needs the ability to learn *how* to learn. In other words, a computer would have to program itself to play chess - that will be the true test. But even more than this, the computer will have to take the initiative to learn. These are key differences between humans and all other life forms we know of. Humans have the ability to learn/discover new things, innovate on existing ideas, and then pass these ideas/skills on to other humans - all other animals do not. For example, there are certain animals that use tools (most often a twig or leaf) to aid in the gathering of food. However, the tools are never improved and no new tools develop out of the existing ones - their use is instinctual. So if a computer must be programmed every time it wishes to learn, it is not innovating or learning new things; it is just acting on what it already knows how to do (i.e., it is acting on instinct).

Anyhow, I apologize for the overly general ideas, but I think you see my point. Now let the flames begin!

I would have to agree with Godel... (1)

Natedog (11943) | more than 15 years ago | (#2032323)

math cannot explain the universe; it can only explain how certain aspects of the universe work. For example, the speed of light is constant regardless of frame of reference - this is the basis of the special theory of relativity and why we know time is relative to speed and position. However, mathematically this does not make sense. Also, consider all the energy and matter in the universe. Either it has existed for all time (though the decay of elements seems to suggest otherwise), which is totally illogical, or it came into existence, but this goes against the *laws* of conservation of energy and matter.

Will Moore's Law Fail? (1)

rkms (12026) | more than 15 years ago | (#2032325)

I read an article recently that highlighted the difficulties of maintaining the Moore's Law momentum. Basically, it was the quantum-effect problem retold, suggesting that there may be a flat spot coming while these difficulties are overcome. However, the article also suggested that the mass market for processors may have to shift away from computing devices and into more general appliances in order to keep the economic momentum going.

I happen to believe that there will be a relatively small (one- or two-year) glitch in the curve, but that the push will continue all the way down to nano eventually.

As to the economics of CPU (memory/IO/storage) production - there are an awful lot of people who still have no personal access to the technology that we (the Slashdot We) take for granted.

Be realistic (1)

joshv (13017) | more than 15 years ago | (#2032326)

Downloading the human brain? Come on.

Realistically - yes, humans will eventually be replaced in most production environments. Totally automated factories are probably within reach in the next 20 years. The social challenge will be how to redistribute the huge profits this brings to those who don't have the skills to get a job in a non-production environment.

Yes, computers will be blindingly fast in 20 years' time - capable of simulating insect-like intelligence at a level sufficient for automated carpet cleaners that run around at night without bumping into things, or for driving our cars for us on the freeway (yes, a silicon insect intelligence can probably drive better than we do; it is always paying attention).

Human-like intelligence in silico? Sure, when you have a computer that can accurately simulate the human brain - all the neurons, all the connections - and do it in real time. Even if Moore's law continues until 2100, I don't think we will get there, as our computers are serial and the brain is massively parallel. Can anyone who knows more about this than me estimate what kind of processing power it would take to simulate a neural network the size of our brain on a serial silicon chip? If we come up with a way of creating billions of artificial neurons and actually physically wiring them together, then we might have something.
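
(For what it's worth, here is one commonly cited back-of-envelope version of that estimate, in Python. Every number in it is a round-figure assumption, and all of them are debatable:)

    # Rough serial cost of simulating a brain-scale network in real time.
    neurons = 1e11         # assumed ~100 billion neurons in a human brain
    synapses_per = 1e4     # assumed ~10,000 connections per neuron
    updates_per_sec = 100  # assumed ~100 updates per second per connection

    ops_per_second = neurons * synapses_per * updates_per_sec
    print(f"{ops_per_second:.0e} synaptic events per second")  # ~1e+17

    # A late-1990s CPU manages maybe 1e8-1e9 useful operations per second,
    # so on these assumptions a real-time serial simulation falls short
    # by roughly eight orders of magnitude.
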

Then there is the problem of programming this brain...

-josh

A few things... (1)

r (13067) | more than 15 years ago | (#2032327)

it looks like the former is a much bigger challenge than the latter. the problem is that the human brain is a massively parallel multiprocessing machine, and emulating it in a serial (von neumann) processor will be futile. we need to develop algorithms that use the serial characteristics of the processor, rather than try to improve hardware...

neural networks? bah! (1)

r (13067) | more than 15 years ago | (#2032328)

simulating a human brain in hardware? that has to be the silliest myth of modern computing.

just think of it this way - the human brain is a massively parallel machine; a computer processor is a serial machine. simulating the former in the latter may be linear in the number of neurons simulated, but the cost is dominated by the number of connections between them (and possibly with pretty big constants!) - and there are vastly more connections than neurons. we can't even properly build neural networks the size of an insect brain, let alone a human one. besides, there's the small issue of neural networks being 'opaque' to the creator - all's good when they work, but when they break it's difficult to figure out why.
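
(for concreteness, a toy serial update loop in python - the network size and the threshold rule are arbitrary assumptions. the point is simply that a von neumann machine has to visit every connection one at a time, every tick:)

    import random

    N = 1000  # toy network; an insect brain is orders of magnitude beyond this
    weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
    state = [random.random() for _ in range(N)]

    def step(state):
        # One tick: N*N multiply-adds, performed strictly one after another.
        new_state = []
        for i in range(N):
            total = sum(w * s for w, s in zip(weights[i], state))
            new_state.append(1.0 if total > 0 else 0.0)  # crude threshold unit
        return new_state

    state = step(state)  # at brain scale, this single call becomes astronomical
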

it seems that we'd get farther by concentrating on advancing 'traditional' symbolic artificial intelligence, rather than simulating huge neural networks on puny serial hardware...

r
-- "away, connectionism!"

AI? Where's the real I? (1)

jabber (13196) | more than 15 years ago | (#2032330)

I remember the discussions that were the best part of my AI class. The difference between intelligence and knowledge. How do you separate the two, in terms of computing? How do you even define the former - except maybe that intelligence is the ability to synthesize and validate new knowledge from old knowledge.

Computers will excel at that which they already do well: data processing. They'll be able to provide medical diagnoses based on symptoms, financial modeling, weather forecasting - all sorts of quantitative tasks which humans can't do as fast. They'll handle lots of raw data, distill it, and act on it. Pretty much what we do, isn't it?

Peripherals will broaden our senses, enhance our experience of an increasingly computerized world, and make us realise that there are some things that just cannot be replaced by a machine.
Consider an AI social worker or psychiatrist. It may be an interesting AI exercise, but could a person get over the hurdle of using one to actually solve an emotional problem? Would anyone go to confession, knowing they're spilling their guts to a microprocessor that will then use a hash table to look up the appropriate penance?

No, computers will become prevalent, and we will become dependent on them, but not more so than we have become dependent on machines. The industrial revolution invited speculation similar to this. We're still human.

It was once thought that people would not be able to breathe on a steam locomotive - that the rush of air would suck the air out of our lungs. We now fly across the Atlantic in a few hours, and we've been to the moon. It was thought that the first nuclear test might ignite the atmosphere, and all life would cease.

We're still here, and we're still human. And we will forever be human. Evolution took billions of years to get this far. How self-righteous we are to think that we can accelerate it! We are a product of it. We are, at this point, the epitome of evolution; and if we have anything to say about it, we will remain top dog for a long, long time. Evolution has bestowed upon us a great gift, greater than on any other species: we can radically alter our environment. Not only do we adapt to it, we adapt it to us!

That's all that will happen. We've created computers and made them part of our environment. We're adapting them to service us (à la Borg), as we adapt ourselves to use them to our benefit.

They will no more change us than we have changed the Earth. Hmmm, maybe that's not such a good point after all. :)

Jihad (1)

espace (13537) | more than 15 years ago | (#2032333)

In about 30 years I plan on leading a jihad to remove all computer-controlled equipment from everybody who is stupid. When this great day nears, I hope that I will have many followers from /.
Sci-fi and technology just don't mix with stupidity. It's almost as bad as drinking and driving. Arghhh, let it go. There is no future for computers, as we shall smite them from this earth.
I'm really tired, excuse the outburst, need coke. Oh yeah, and I have to run an NT-based network - lots of computer angst.
Dinyar

It is techno-hype - it's ignorant (1)

Prophet (13824) | more than 15 years ago | (#2032334)

My machine is already spiritual - I have given it this ability, it is my own spirituality. We will not evolve into a new species, we will evolve - as will our science.

Smart computers? Yes. No distinction between humans and computers? Highly improbable. Even if I incorporate technology inside me, it is just technology being used by me.

The other concept that is frequently abused is the difference between information and knowledge. Information can easily be referenced, accessed, and accumulated by computers - usually faster than by humans. However, knowledge - the EXPERIENCE that comes with information - is something humans do better, and I suspect always will.

I don't believe that humans (or any biological life) would be innately superior to computers (or any mechanical life) that were equally functional. Biological life is not more sacred than the machine. I do believe we are more spiritual. We have something that cannot be replicated, built, or manufactured no matter what we evolve into - a spirit.

super human computers (1)

josepha48 (13953) | more than 15 years ago | (#2032335)

I never wonder whether we could make them as smart as us; I always wonder IF we should.

Didn't anyone watch Evolver on Sci-Fi? Or Terminator and T2? Or, more recently, Virus????

Hey, HAL tried to kill everyone on that ship.

I for one hope we don't become a race of cyborgs!

Go study some logic first (1)

cnicolai (14338) | more than 15 years ago | (#2032336)

Unless you're implying that the human brain is not a machine - that it doesn't think by physical laws - Godel's theorem doesn't rule out AI. In other words, our brains are just as limited by Godel's theorem as AIs would be. See _Godel, Escher, Bach_ by Hofstadter for details, and also because it's great.

Chris

Human intelligence... (1)

Jasin Natael (14968) | more than 15 years ago | (#2032337)

This comes back to a classic argument that I've seen on slashdot, and in many other places: Is digital _really_ better than analog? When was the last time that you saw an object move in digital space?
We have determined that the primary cause of human intelligence is not genetics or design, but the training that goes into it. Until we truly understand how our own brains work and can selectively train individuals to be intelligent, there is not a reason in this world to believe that computers can attain any measure of intelligence. Excuse the coldness of this example, but I can buy a dog for $35 from the humane society, and it can actually interact with me and understand what I say. It's also completely devoted to me (not like Windows...).
My point is this: We've been studying lab mice for how many years, and we can't even understand how the intelligence of a mouse works? How, then, do we presume to say that we will understand the nature of our own intelligence well enough to duplicate it in a machine?
The computer has its purpose, but I doubt that digital technology will be able to produce a machine with the intelligence of my cocker spaniel, much less that will rival my own. And, if the day arrives, hopefully I'll have perished.

Brain (1)

kaisyain (15013) | more than 15 years ago | (#2032338)

> ... increase the speed of the signal. Is the speed of light fast enough for you?

Electrical signals already travel at close to the speed of light. Optical computing has problems of its own, however (not the least of which is that it is difficult to make it do digital logic cheaply and quickly).

I am ready (1)

Master Switch (15115) | more than 15 years ago | (#2032340)

I am ready to integrate, hook me up :) In a way, we already are one with our computers. Just ask any Linux nut.

Godel's theorem is good, but (1)

alienmole (15522) | more than 15 years ago | (#2032344)

"You can't possibly ask Katz to read math books."

LOL!

You've hit the nail on the head about Katz. To report on a subject in a way that adds value, you need a pretty good understanding of the subject - unless you're writing for an audience with an even more limited understanding. Some rudimentary analytical skills also don't hurt (such as, say, the ability to distinguish between problems caused by UPS during shipping, vs. the intent to install Linux on a machine...)

For an example of a writer adding value to a subject (while simultaneously doing a bit of a snow job), check out the article by Michael Lewis (author of "Liar's Poker") in last weekend's New York Times Magazine, about the near-demise and bailout of Long Term Capital Management.

Of course, Lewis probably gets paid more than Katz, but if so there's a reason for that...

neural networks? our only known hope! (1)

alienmole (15522) | more than 15 years ago | (#2032345)

"it seems that we'd get farther by concentrating on advancing 'traditional' symbolic artificial intelligence, rather than simulating huge neural networks on puny serial hardware... "

Traditional symbolic AI is unlikely to ever achieve much more than the traditional applications, e.g. grammar checking, grading essays, playing chess.

You're right about the advantages of the massive parallelism of neural networks. So we have to build *real* neural networks in hardware (or wetware?), not simulate them on serial hardware (which, as you point out, is doomed).

"there's the small issue of neural networks being 'opaque' to the creator - all's good when they work, but when they break it's difficult to figure out why."

That's what psychologists are for...

Is Jon Katz an Artificial Intelligence? (1)

alienmole (15522) | more than 15 years ago | (#2032346)

Is Jon Katz an artificial intelligence? It took me way too long to work it out, but all the signs are there. The liberal use of the latest buzzphrases to hide the lack of actual content. The slightly off-base parroting of well-known concepts, as though he doesn't quite get it.

It also explains that interminable "Road to Linux" series which never manages to achieve actual use of Linux. Since the Katz software is presumably running on Linux, for Katz, writing about Linux is like exploring one's own subconscious - a task which can only be done incompletely, at best.

Hats off to CmdrTaco for an awesome piece of coding! Is it written in Perl? What hardware does Katz run on? (Hmmm, wonder if that matchbox-sized computer has anything to do with this?)

Alas the Shadows (1)

Daiv (15693) | more than 15 years ago | (#2032350)

Free will is an illusion anyway.

Merging with computers would only make that more obvious.

AI Advances? (1)

mlf (15706) | more than 15 years ago | (#2032351)

The same things were said 20 years ago, and 30 years ago, and look where we are now... brute-force chess-playing programs.

As for the Turing test: yes, it might be possible to program a computer to fool a human, but does that mean that the computer can think? Does it mean that it has a consciousness? Does it mean it should have rights? Or does it just mean that it can fool humans... The Turing test is one of the most flawed AI tests, but it also happens to be the most famous.

I used to give the AI field the benefit of the doubt, but after taking a course in the philosophy of AI, I am a skeptic. There are so many things involved in our thinking and consciousness that even we do not understand - how can we get a machine to do/become what we can't understand ourselves?

Underestimating evolution ... (1)

rjb (137100) | more than 15 years ago | (#2032360)

Just another set of predictions about something that is nowhere near understood. As with every other prediction about "artificial intelligence", the mountain will suddenly appear bigger the closer we get to it.

I'm betting on millions of years of evolution ...

Underestimating evolution ... (1)

rjb (137100) | more than 15 years ago | (#2032361)

On the scale of things, the human brain is Mount Everest, and these predictions are just folks at a rest stop on the Jersey pike staring through a fogged window in the men's room at a torn postcard of Tibet (which they are sure is the real thing).