News

Just Around the Corner... 88

Anonymous Coward writes: "Ray Kurzweil and other digerati discuss when popular sci-fi concepts will manifest in the real world. See part I or part II."
This discussion has been archived. No new comments can be posted.

  • by PhantomHarlock ( 189617 ) on Saturday October 20, 2001 @05:06AM (#2454203)
    I can't believe they used that scene from Independence Day as an example. It's the worst, most banal attempt at science fiction that Hollywood has ever made. How much did Apple pay to have their laptop in it? The idea of Jeff Goldblum as a '133+ h4x0r with a magic PowerBook is worse than "This is a unix system, I know this!" from Jurassic Park.
    • that scene was awesome!


      if(shields == 1)
      {
      // disable alien shields
      shields = 0;
      }


      virus writing 101.
      • by Anonymous Coward

        if(shields == 1)
        {
        // disable alien shields
        shields = 0;
        }


        This is the easy part. Writing a gcc backend to output alien assembly is the (very very) hard part.



      • if(shields == 1)
        {
        // disable alien shields
        shields = 0;
        ShowBitmap(JOLLY_ROGER);
        }


        • You missed one thing. They didn't show this part so they could maintain the PG-13 rating, but you know they would have appended this to the code on behalf of all our human abductees.


          if(shields == 1)
          {
          // disable alien shields
          shields = 0;
          ShowBitmap(JOLLY_ROGER);
          probe(alien, anus);
          }

    • by gusnz ( 455113 ) on Saturday October 20, 2001 @05:56AM (#2454248) Homepage
      Scene from the alien mothership:

      Alien Commander: Are you sure this thing is secure?

      Alien MCSE Tech: Trust us, it's unhackable. We built it with our reliable DRM 2 encryption code, and we've told the puny Earthlings not to publish exploits...

      :)
      • by Anonymous Coward

        Alien MCSE Tech: Trust us, it's unhackable. We built it with our reliable DRM 2 encryption code, and we've told the puny Earthlings not to publish exploits...

        Oh, I see... So the aliens were just enforcing the DMCA. Now the whole movie makes sense! Thanks!

      • Right after this movie came out there was an awesome security alert e-mail going around. (There's a bug in the BGSs [Big Green Shields] that allows even primitive lifeforms...)

        Anyone have a copy?
    • I can't believe they used that scene from Independence Day as an example. It's the worst, most banal attempt at science fiction that Hollywood has ever made.

      Actually, with the way IT has been heading, I thought that scene was quite realistic; it might well happen in some distant future - with us humans as the invaders, of course.

      How long 'til our targets 'sploit our mighty WindowsCE3000 mothership?

    • Well, at least the JP scene made a LITTLE sense...and the book actually had a segment of realistic-looking code...
  • Hasn't anyone learned from the mistakes of A.C. Clarke and his predictions? I'm quite sick of it. I don't need Ray Kurzweil to tell me to hold my horses until some arbitrarily drawn date - I'm patient enough to wait for it. Worse, the promises of "hard" A.I. are scientifically unsound to begin with.

    Also, why can't modern-day prophets realize that the next big thing probably hasn't even been guessed at yet? The vacuum tube, computers, transistors, etc. Ray wasn't reading old sci-fi pulp mags about Moog-like synthesizers; they more or less appeared on the scene. Now Ray sells digital synths. Real visionary.

    • > Hasn't anyone learned from the mistakes of A.C. Clarke and his predictions? I'm quite sick of it.

      I'm still waiting for that technology that's indistinguishable from magic. When it hits Radio Shack I'm gonna be the first kid on my block to get it, and then I can fit a brim onto my dunce cap and pass myself off as a wizard.
  • "Futurists" (Score:3, Interesting)

    by SeanAhern ( 25764 ) on Saturday October 20, 2001 @05:10AM (#2454208) Journal
    It has been said that so-called "futurists" oversell the short term, and undersell the long term.
    • Re:"Futurists" (Score:2, Insightful)

      by dreamquick ( 229454 )
      I'd argue that futurists envision, inventors read the work of the futurists and are inspired to create something similar, and then politics and money spoil the wonderful symmetry of it all...

  • by Anonymous Coward on Saturday October 20, 2001 @05:10AM (#2454209)
    ... is only ten to twenty years away, people!

    (I'll go read the article, now, with low expectations.)
  • Next up. (Score:3, Funny)

    by Chas ( 5144 ) on Saturday October 20, 2001 @05:12AM (#2454215) Homepage Journal
    • Barbarella: The Orgasmotron
    • Ice Pirates: The "Black" Robot
    • Masters of the Universe: The Cosmic Key
    • Superman III: Richard Pryor's Super Computer O' Evil
    • Sneakers: The Black Box
  • by epsalon ( 518482 ) <slash@alon.wox.org> on Saturday October 20, 2001 @05:16AM (#2454219) Homepage Journal
    Machine translation in 0-30 years?! As a person involved in these topics, I can say that 30 years ago people thought this could be solved in 30 years. We are today almost as far away as we were 30 years ago, and I think there's no way this will be a reality in less than 100 years.
    To do correct machine translation you have to fully model the world and knowledge. Translation (for humans) is a tedious job, requiring a lot of research and an almost artistic choice of words.
    I think we will sooner have machines writing their own novels than full machine translation. The problem is just too hard.
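
    (A toy illustration of why the naive approach fails -- not anybody's real system, and the little dictionary below is invented: a word-for-word lookup "translator". With no model of context or of the world, word sense, gender and agreement all come out wrong.)

      # Hypothetical word-for-word "translator" -- illustration only.
      lexicon = {
          "the": "le", "bank": "banque",      # the money kind of bank -- but what about a river bank?
          "is": "est", "by": "pres de", "river": "riviere",
      }

      def translate(sentence):
          # Look each word up in isolation; unknown words pass through unchanged.
          return " ".join(lexicon.get(word, word) for word in sentence.lower().split())

      print(translate("The bank is by the river"))
      # -> "le banque est pres de le riviere": wrong gender, wrong sense of "bank", no agreement.
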
  • by evilviper ( 135110 ) on Saturday October 20, 2001 @05:23AM (#2454228) Journal
    The biggest problem with the Turing test is that it is completely subjective. The smarter of a person you are, the smarter the computer will have to be to give an accurate response. Obviously that trait is not one that reflects intelligence.

    Get someone dumb enough and they'll chat with ELIZA for hours at a time.
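
    (For a sense of how low that bar can be, here is a rough sketch of the keyword-matching trick an ELIZA-style program leans on. The patterns and canned responses below are made up; the real DOCTOR script was bigger, but not fundamentally deeper.)

      import random
      import re

      # A few invented rules in the spirit of ELIZA: match a keyword pattern,
      # then echo part of the user's own words back as a question.
      rules = [
          (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
          (r"i am (.*)",   ["Why do you think you are {0}?", "How long have you been {0}?"]),
          (r"my (.*)",     ["Tell me more about your {0}."]),
          (r"(.*)",        ["Please go on.", "I see.", "How does that make you feel?"]),
      ]

      def reply(text):
          for pattern, responses in rules:
              match = re.match(pattern, text.lower().strip())
              if match:
                  return random.choice(responses).format(*match.groups())

      print(reply("I am sick of futurists"))
      # -> e.g. "Why do you think you are sick of futurists?" -- no understanding required.
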
    • Ah, but as some (Daniel Dennett comes to mind) would argue, "real" intelligence is itself a matter of subjectivity (and I dare you to argue otherwise). If this is the case, why should we hold AI to a higher standard than we hold human intelligence?
      • Real Intelligence is not a matter of subjectivity, except in some fringe cases. Even the most idiotic human can be distinguished from an intelligent gorilla. That is precisely why a less subjective test is needed.

        The ability to solve problems, draw conclusions, faith - all are hallmarks of intelligence. There is no doubt that a machine can be designed to warehouse conversations it can recall when needed, and to learn new word definitions and such. We have the technology to do that now, and it certainly wouldn't be a sentient being. That is the problem with the Turing test.
        • I wouldn't be so sure of your "definition" - many would argue that there is no such thing as intelligence, only perceived intelligence. Examples:

          • Intelligence can depend on the environment: Is a spider intelligent? Spinning a web is amazing; stick the thing in a bathtub and it doesn't look so smart.
          • Intelligence can be social: Is an ant intelligent? Not by itself, but ant colonies perform some pretty amazing feats.
          • Intelligence may depend on other knowledge: A chess grandmaster may play a very strange move near the beginning of the game which loses him the game. Why? He took a calculated risk and it didn't pay off. Was he dumb? No, you say. What if it wasn't a chess grandmaster but Joe Bloggs from down the street - yeah, THAT was a dumb move...
          Perception of intelligence is about being seen to do the right thing at the right time.

          Regarding the second point - this gets to the heart of the Chinese Room Argument [utm.edu]: can intelligence (I would distinguish "sentience") be "built", or "must" there be something more? Was Deep Blue [ibm.com] intelligent? Searle would argue "no". Some would argue "yes, in the chess domain". There was nobody on the planet it couldn't teach something about chess and (to an extent) explain those choices. Many AI researchers weren't happy about Deep Blue because it basically used very fast search and no fancy reasoning. But hey - that just shows that there's more than one way to solve a problem, IMHO...

          • My whole point (stupid human, smart gorilla) is that there is a huge difference between something that will be perceived as intelligent, and a sentience.

            When do you think computers are going to get to the point that they question their own existence? Obviously something like that is not required in the Turing test. Being self-aware, or having the urge to explore and learn, are traits of intelligence, but are not taken into account in the Turing test.
        • Real Intelligence is not a matter of subjectivity, except in some fringe cases. Even the most idiotic human can be distinguished from an intelligent gorilla. That is precisely why a less subjective test is needed


          From whose point of view? An idiotic human is still intelligent (very much so, just not to your standard), but look at it from a similar individual's point of view. Intelligence is very much in the eye of the beholder: it is subjective. Everyone has a different standard of what intelligence is; it's something we all intuitively understand, but it is very hard to pin down. Dennett's way of pinning it down was simply to posit that if you would attribute some aspect of intelligence to it, it must be intelligent. How can we know other humans are intelligent? We know we are, looking through our own eyes at the world, but there's no way we can just crack open someone's skull over lunch and find the intelligence organ. No, we must infer from their actions and reactions whether or not they are intelligent.


          The Turing Test is simply a formalised version of the task we apply every freakin' day: determining whether we are dealing with something intelligent or not based on inference. The trick is, some of our inferences are based on appearance, which has nothing to do with intelligence; this must, then, be factored out, and so on... It's really quite slick when you think about it some.

    • So how's ELIZA doing?


      Think about what you've said for a minute. I'll assume by the syntax of your sentence that you're young, and so I'll give you the benefit of the doubt. Your argument against one of the many seminal ideas of someone with the intellectual prowess of Alan Turing will not cut the mustard in the world of AI research, I'm afraid. Obviously? What is obvious about your hypothesis (that the Turing Test is completely subjective)? And how do you move from your hypothesis to your conclusion, i.e., that the smarter of (sic) a person you are..., without any observation or analysis of results?


      The biggest problem with your hypothesis, after reading your conclusion, is your lack of observation and analysis.


      The scientific method does work.

  • by joenobody ( 72202 ) on Saturday October 20, 2001 @05:25AM (#2454230)
    The difference in replies: the CTO says some things have a chance of happening and gives a shot at when.

    The geek says it will all happen, it's just a matter of time.

  • by gusnz ( 455113 ) on Saturday October 20, 2001 @05:34AM (#2454237) Homepage
    ...everyone will still get 99% of their predictions wrong :).
    • everyone will still get 99% of their predictions wrong...
      ... but they will only mention the 1% that they got right to the complete astonishment of their audience.
  • Uh-oh... (Score:2, Funny)

    by gusnz ( 455113 )
    From the first article:

    Concept: Using the brain for information originally stored elsewhere, possibly encrypted, or indeed upgrading human memory using plug-in chips, PC-style.
    "Encrypted"? Suddenly the DMCA brings a whole new meaning to the term "thought crime" :).
  • AI (Score:5, Informative)

    by Black Parrot ( 19622 ) on Saturday October 20, 2001 @06:05AM (#2454251)
    First, from Part II:

    > Concept: The ability for artificially intelligent devices to feel emotions.

    It's not at all obvious -- to me at least -- that we should want AIs to feel emotions. Who wants a warehouse full of smart bombs with hurt feelings?

    Emotions can very clearly lead to inappropriate behavior. Granted, there may be times when emotions lead to positive behavior. But do they ever lead to positive behavior that couldn't be programmed into an AI without emotions? Unless that's the case then emotions are something known to be dangerous and not known to be useful, and therefore should be avoided as a life-threatening bug.

    Granted, it may be a fact of nature that "intelligence" (whatever that is) is impossible without emotions. But unless/until that has been demonstrated, let's keep emotions off our wish list.

    Now back to Part I:

    > Concept: The idea of a computer becoming so complex it can understand, reason, listen, speak and interact in the same way as a human, including using deception and self-deception.

    > Now we have: Machines that learn, software that breeds/replicates. 'Narrow AI,' i.e. computers that can perform 'narrow' tasks that previously could only be accomplished by human intelligence, such as playing games (e.g. chess) at master levels, diagnosing electrocardiograms and blood cell images, making financial investment decisions, landing jet planes, guiding cruise missiles, solving mathematical problems and so on. Currently exponential progress curve showing no sign of slowing down.

    First, as with emotions, I dispute the desirability of AI agents that can knowingly deceive themselves and others.

    Second, I'm not convinced that much of the laundry list in the second paragraph qualifies as "intelligence" instead of merely "appropriate algorithms". (Are we going to have to call MATLAB an intelligent agent because it's good at certain kinds of math problems?)

    Third, I am amazed that they would say that we're making "exponential progress" in anything that might reasonably be called "AI". My games don't seem to ship with AIs that are "exponentially" smarter than the ones that shipped five years ago. Dish up some facts, please!

    That said, here's a link to a paper [160K PDF] [umich.edu] that someone turned me on to recently. It's from a talk some AI researchers gave at a conference last year. They start by asking where is all the cool movie-style AI, and answer with the observation that no one is working on it. Their proposal to remedy that situation is that AI researchers should get involved in game AI, because many modern games require agents that are more "intelligent" than the common solve-one-problem stuff that has been coming out of the AI community for the last few... decades.

    I think the authors of that paper overstate their case by calling game AI agents "human level" AI, but at least it's a step in the right direction. It's a bit of a light-weight article, but it's an easy read. And it would be way nice if 2/3 of the world's academic AI researchers started working on gaming applications!
    • "It's not at all obvious -- to me at least -- that we should want AIs to feel emotions. Who wants a warehouse full of smart bombs with hurt feelings?"

      Remember Bomb in Dark Star?

    • >Second, I'm not convinced that much of the
      >laundry list in the second paragraph qualifies
      >as "intelligence" instead of merely "appropriate
      >algorithms". (Are we going to have to call
      >MATLAB an intelligent agent because it's good at
      >certain kinds of math problems?)

      This reminds me of something I read about AI: we (humans) constantly move the threshold of intelligence according to how far we've gone.

      I.e., first we had chess-playing AI. When that was done, it was said "this is not AI, but if it can beat a master-level player, then...". It beat the master-level player. "Still not AI, but if it beats the world champion...". It beat the world champion.

      As AI nears 'wet' intelligence, the definition of intelligence drifts farther away. I'm wondering if it will be too late to realize true intelligence is here when that happens...
      • I think the nature of the problem with chess-playing programs not being AI is the way in which most of them work. Chess has a finite number of possible combinations of moves; as hardware gets bigger and faster, it is possible to sort through more and more data in the same period of time, meaning one just needs a large library of moves to beat any human player.

        Of course, many human players use this same strategy (memorize positions/openings), but cannot come close to the machine's ability to memorize. The human player cannot play mind games with the machine, putting them at a disadvantage (though note that the machine may seem to psych out human players!).

        All this really says is that the ability to play chess well is not a RELIABLE measure of intelligence.
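
        (To make the "raw search, no understanding" point concrete, here is a bare-bones sketch of the brute-force part, using a toy take-away game instead of chess so it stays short: plain minimax that simply tries every continuation. A real chess engine adds opening books and a hand-tuned evaluation function on top, but the skeleton is the same, and nothing in it "understands" the game.)

          # Toy game: a pile of sticks, players alternate removing 1-3, whoever takes the last stick wins.
          # Minimax plays it "perfectly" purely by trying everything -- brute force, no insight.
          def minimax(sticks, my_turn):
              if sticks == 0:
                  return -1 if my_turn else +1       # the player who just moved took the last stick
              scores = [minimax(sticks - take, not my_turn)
                        for take in (1, 2, 3) if take <= sticks]
              return max(scores) if my_turn else min(scores)

          def best_take(sticks):
              # Pick the move whose resulting position searches out best for us.
              return max((take for take in (1, 2, 3) if take <= sticks),
                         key=lambda take: minimax(sticks - take, my_turn=False))

          print(best_take(21))   # -> 1: leaves 20, a multiple of 4, which loses for the opponent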

      • by zzyzx ( 15139 )
        The reason why that's happening is that we have yet to have a program that can accurately simulate understanding the input it's being fed. It's been over three decades since ELIZA was written, and still any AI program can be fooled by the most obvious tricks.

        The reason why the threshold of intelligence keeps changing is because all we're learning is what problems can be solved by brute force. If I recall my half-forgotten game theory correctly, any finite game has a set of unbeatable moves. With enough brute force, chess can be beaten.

        Why is chess not AI? Think of this. Imagine a chess tournament where, at the very beginning of each game, the rules are randomly changed. Pawns can move one square diagonally. The board wraps around like the tunnels in Pac-Man. The goal of the game is to capture both rooks. The only programming changes allowed to the AI are inputting the new rules. Who wins the game?

        That's the difference between understanding and memorizing.

    • Re:AI (Score:2, Interesting)

      by hackstraw ( 262471 )
      It's not at all obvious -- to me at least -- that we should want AIs to feel emotions. Who wants a warehouse full of smart bombs with hurt feelings?

      I do not believe that we would want a warehouse full of smart bombs with hurt feelings any more than we would want people with hurt feelings being responsible for the deployment of such bombs. The military screens for such things through various personality and performance based tests.

      However, I do believe that emotions are important to AI for one simple fact: for true AI to work, the computer must want to do something, not just react as programmed. I came upon this when I first played with an ELIZA program. I mean, it could "learn" an "appropriate" response by asking what it should say if it had no prior knowledge of a topic, but the program never wanted to learn; it had no motivation. In fact, if it asked for a response, it would simply sit there waiting indefinitely, whereas any living thing above a plant would go about doing something else.

      Now, putting human emotions into a computer might not be the best of things, but what definitely needs to happen is some kind of feedback loop to positively and negatively reinforce the machine so that it has some kind of "desire" to change its behavior. Then we will have true AI, and not before.
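
      (A rough sketch of the kind of feedback loop being described, with everything invented for illustration: a trivial agent that keeps a running score per action and drifts toward whatever a made-up environment rewards. Calling this "desire" is generous, but its behavior does change because of reinforcement rather than because someone typed in the answer.)

        import random

        # One-state "bandit" agent: a value estimate per action, nudged toward observed reward.
        actions = ["explore", "wait"]
        value = {action: 0.0 for action in actions}

        def reward(action):
            # Invented environment: exploring usually pays off, waiting never does.
            return 1.0 if action == "explore" and random.random() < 0.8 else 0.0

        for step in range(1000):
            if random.random() < 0.1:                     # occasionally try something at random
                action = random.choice(actions)
            else:                                         # otherwise do whatever has paid off so far
                action = max(actions, key=value.get)
            value[action] += 0.1 * (reward(action) - value[action])   # move estimate toward reward

        print(value)   # "explore" ends up with the higher value, so the agent keeps choosing it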

    • It's not at all obvious -- to me at least -- that we should want AIs to feel emotions. Who wants a warehouse full of smart bombs with hurt feelings?

      Of course you wouldn't - you'd want your bombs to be all gung-ho, eager and aggressive.

  • This is an intriguing article because not only are they rating the credibility of recent SF ideas, but they're trying to attach a timeframe to the ideas based on what we have today.

    What's even more interesting from my point of view is to ask the question: if you consider the actual applications, perhaps even getting very specific, do the ratings and timeframes still match up? Obviously, rating credibility is somewhat subjective anyway. And similarly, trying to attach a time frame to technology is still at best an educated guess.

    But if looked at from the point of view that a specific application of one of the SF technologies would have significant beneficial impact on quality of life for a lot of people, then perhaps the time frames change out of necessity. What if we figure out that by uploading a short program into the brain we could signal synapses, neurons and such to keep serotonin levels at a therapeutic level for people suffering from depression, and give them a much better quality of life? That's just a rough example I'm throwing out, but I bet there are some serious applications seen in technologies of the future that will actually boost the timeframe through basic need.

    Anyone else think so? Comments?

    Just my $.02

    WBGG

  • Babel Fish (Score:3, Funny)

    by Jace of Fuse! ( 72042 ) on Saturday October 20, 2001 @06:17AM (#2454263) Homepage
    If they want a Babel Fish, they're going to have to make sure they have the Towel, the Pile of Junk Mail, and a bunch of other crap.

    I eventually got mine, but I hope nobody asks me how I did it. I don't remember and I'm not about to figure it out again!

    If one really cared, they could just do a web-search for a walk-through. I'm sure one is out there.

    30 years for a Babel Fish. Sheesh.
    • I admire a man who makes obscure Infocom subreferences on a board full of 31337 C0d3r5 all born after 1987. I salute you; I remember the endless paragraphs of alien gibberish you had to wade through until you worked that one out...
  • William Gibson (Score:2, Interesting)

    .........Neuromancer was the uncanniest thing I've read. He coined 'cyberspace' in 1984 (or was it '86?), he invented (or at least popularized the idea in sci-fi) the "matrix" WAY before Keanu Cheese starred in that overrated film, and his characters were ultra cool examples of the "wired" human with organic-machine interfaces (that razor-girl was cool). Even the narrative style, with the dense but ambiguous portrayals of the gritty subcultures of vast metropolises, seems futuristic.

    The guy was a prophet. Who knows what strange visions from his novels have yet to materialize?

    • The guy was a prophet. Who knows what strange visions from his novels have yet to materialize?


      Big, evil corporations trying to conquer the world and maximize profits no matter what the human cost? Already got 'em!

      As for the rest of his stuff it's rather naïve and dubious if fascinatingly surreal. Makes for great material on a long flight or on the john but I'd hardly call him a visionary prophet of TEH FUCHUR. Maybe when we really do have a matrix...
    • Re:William Gibson (Score:2, Interesting)

      This is offtopic, but what the hey. I just recently read Neuromancer on the recommendation of almost every geeky friend that I have, and I was stunned. I was stunned that a book that won so many awards and is beloved by so many people turned out to be one of the worst sci-fi books that I've ever read. For a point of reference, I've probably read ~100-150 sci-fi books, lifetime, and my faves are pretty standard (but not recent): the Dune series, various Heinlein, Clarke, Bradbury, and Asimov. In Neuromancer, I found the characterizations, character development, plot, pacing, voice, and dialog to be very poor. The narrative was acceptable more than it wasn't, but I don't think that's really a compliment. I did actually finish the book, as I assumed that something interesting *had* to happen eventually. I can accept that when it was published, just the idea of a noir near-future was interesting, but to me as a modern reader it just comes off like an admirable first attempt by a capable high school student.

      Now, I'm guessing that a fair number of /. readers liked the book and may try and defend it, so before you do, keep a couple things in mind. 1) I'm attacking the book's literary merit (or lack thereof). 2) I'm stating that a book lacking in literary merit and lacking ideas that are new to me (*regardless of whether or not they were new to somebody else at some other time!*) ranks very low on my "quality metric for sci-fi books."

      Now, if you feel compelled to argue that Neuromancer does, in fact, have literary merit, then please be prepared to answer a few things: 1) Describe the character backgrounds (i.e., information about the characters that occurred prior to the events of the work) for Case, Molly, Armitage, and Riviera, in detail. 2) Explain how this correlates to each character's motives for furthering the plot. 3) Explain how the protagonist has grown over the course of the book. 4) Quote us one section of dialog that you found to be particularly well done. I assert that the answer to #1 will comprise about a paragraph, which for four major characters is ridiculous. This, in turn, relates to why #2 is easy to answer, and very, very shallow. I think the answer to #3 is to mumble, "Well, there must be *something*," while flipping through the book. For #4, you may find something. I'm curious as to what it is. I'll probably disagree with you, but then we can agree to disagree, I hope! I also hope that I've managed to substantiate and clarify my position sufficiently to avoid being modded a troll, as this isn't intended as such. If you like the book despite these shortcomings, well, to each his or her own :)
      • I'm not going to defend it, rather say what I like about Neuromancer and the later books in the series. (If you want to find out #2 you'll just have to read the other 5 books.)

        First, I found the style of writing fascinating. You spend the first part of the book confused by different plots that don't seem to be related. (Which most likely is a way of describing the chaotic life of the Sprawl.) As the story progresses, more order can be found amid the chaos. That is what I like about the series.

        And perhaps the ideas are not that new any longer; they have been copied for almost two decades by now, so why should they be?

        Secondly, a lot of really good sci-fi literature is rather poor in a strictly literary sense. For instance, "Brave New World" is IMHO a very poor book, but the ideas and setting are the important part. The plot is extremely silly and unimaginative; that doesn't matter, however. We are basically given a tour of Mr Huxley's vision, and I at least find it a lot more credible than 1984. (Although 1984 is a much better book, from a literary point of view.)
    • Hey now, let's not forget that Keanu Reeves did one of Gibson's movies. Admittedly "Johnny Mnemonic" wasn't all that good, but that's not Keanu's fault. I mean, it was a dumb movie.

  • 1984 seems to be drawing ever closer... especially since September of this year.

    If you have no idea what I am talking about, start here [ou.edu], or just jump straight to this summary [k-1.com].

  • I think he gives all scenarios a credibility rating of 10/10, even the Independence Day scenario. This guy must live in a different world. However, I've bought his predictions book, and I plan to read it in twenty years' time or so. It will certainly be funny.
  • Did it piss anyone else off to read how Johnny was apparently uploading data "into his brain"? No. Read Gibson's short story. Listen to what is actually said in the movie. Johnny was uploading data into a chip in his head that was intended to treat his autism. By misusing his chip he could transport data that would otherwise be detected by law enforcement or pirates. What happened in the story was that he misused the chip for too long and his autism wasn't being treated, so he was experiencing symptoms (he was losing childhood memories at an alarming rate).
  • the future (Score:1, Funny)

    by Anonymous Coward
    why is the future so hard to see when:
    a: moore's law - which btw, does not simply reflect the speed of integrated circuts, but physics, biology and nanotech in general
    b: exponential growth of internet population and traffic

    2002:
    - 4gz processors
    - 1-2 gigs of ram
    - wireless networking explosions, bandwidth jumps to 10 mb/s
    - p2p software explosion
    - massivly multi-player rpgs gain huge grounds
    - physisists and biologists play with .1 micron sized objects
    - genomics will be twice as big as it was in 1999 etc etc
    - population of the internet will exceed 1 billion
    - internet traffic will continue to tripple every 6 months
    .
    .
    .
    2003/2004:
    - 8 - 10 gz processors,
    - multiple processors become standard in PC's
    - 1/2 the population of the world will be online
    - Open Source will have overtaken the development of comercial software
    .
    .
    2005:
    optic cpu, mother, and internet backbone fuse,
    creating an "inflexion point" in which
    millions of computers around the world become
    the worlds fastest super computer.


    2020-
    artificial brain implants finaly teach me to spell!
    • I have serious problems with your reasoning. My biggest beef is that Moore's observation has lazily and incorrectly been labeled a law, when it obviously is nothing of the sort. Nature doesn't give a rat's arse about the speed of computers. The continued doubling of processor speeds is dependent on human engineering, and there are no guarantees that new advances will arrive on schedule.

      moore's law, - which btw, does not simply reflect the speed of integrated circuts, but physics, biology and nanotech in general. b: exponential growth of internet population and traffic

      See? This is just what I'm talking about. Moore came up with his observation years before any talk of nanotech, biochips or other buzzwords du jour. There are many promising technologies that may take us beyond current lithographic techniques, but there is no reason to believe that growth will conform to any 'law'. The pace may accelerate, or it may stall and plateau for a number of years until the next breakthrough. Advances will come about because of human ingenuity, not some over-hyped statement of current general trends.
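
      (The arithmetic makes the point: a modest change in the assumed doubling period swings a ten-year extrapolation by an order of magnitude, which is exactly why "law" is the wrong word. The 42-million-transistor starting figure is roughly a 2001-era Pentium 4 and is only there for illustration.)

        # Extrapolate transistor counts ten years out under different assumed doubling periods.
        start = 42_000_000          # roughly a 2001 Pentium 4 class chip (illustrative figure)
        years = 10

        for months_per_doubling in (18, 24, 36):
            doublings = years * 12 / months_per_doubling
            projected = start * 2 ** doublings
            print(f"doubling every {months_per_doubling} months -> ~{projected:,.0f} transistors")
        # 18 months gives ~4 billion, 36 months barely 400 million -- same "trend", very different futures.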

    • I was just thinking about the storage market. Somebody on ZDNet was going on about FDS or whatever that new plastic shit is.
      Anyway, I let him have it. I was like, fuck that mechanical/optical stuff. Hard drives are already clearly in the path of the speed bullet. Talking about new optical whatever is cute and whatever, but RAM is where it's at. That's the whole problem with the RAM market: they can't stabilize because they're feeding off the broader tech trends in semi. It was never supposed to be the way it is already, and it could get a lot steeper. When RAM keeps stepping up with exponential growth as circuits shrink, it's hard to avoid. Especially when it's the only thing besides CPUs that can really take advantage of fiber and fast switches. You know, even fast ethernet can tax hard drives, and the 10GbE standard is being finalized this year. D-Link 1G downstream switches are only three hundred bucks.
      How about some futurists being interviewed by a journalist who can think of a halfway reasonable near-term forecast?
      Perhaps then it would be a political interview, though, more suited to people looking at the real politics of the near-term techno future. Hmm, how about that Russian hacker dude - what does he think about a pack of 20X100Gig RAM sticks with a fuel cell, drinking vodka? How about some Russian BBS hacker crazy guys? Let's have an interview with someone from the Top50 or Astalavista about what it would mean to have fiber, laser or higher-speed wireless direct to your CPU/RAM banks.
      We have to get past that point to get to the even better stuff and it's out there.
      • Fer instance.
        If we could really jam on such terrific amounts of bandwidth for insignificant tasks, it may enable new hardware.
        I don't know if we have any rapid prototype developers in the house this evening, but having a printer for objects is definitely real and must be an application where exponential data speed increases could lead to dramatic increases in productivity and perhaps the use --or in this case, use and production-- of materials that are prohibitive with current technology.
        Let's hear about nano printers!
        The catch is that the notion of intellectual property will have to change a lot before the kind of bandwidth that might make really out-there science fantasy come true will be available. The hardware is forcing the issue on software and it's been going that way for a long time. It's getting faster and faster, but it's not like it has to be all bad. Perhaps after intellectual property, real property will become ubiquitous.
        Who would complain then? You could still complain all you wanted, and probably live out some wild fantasies about taking out your frustrations too, but there wouldn't really be that much to complain about outside your bad dreams and personally imposed limitations. There's certainly plenty to complain about right there for most people, though, so we don't need to be scared of a future where nobody complains. Nonetheless, it's not hard to imagine a race of satisfied humanoids from the strictly technological perspective. It's gettin' there that's gonna be a big fight, apparently, if these RIAA, MPAA clowns are signs of things to come. We'll see what happens. One thing about ubiquitous high speed networks is they sure facilitate communication. It's all negotiable once we can see the advantages of working together for a brighter future.
  • Some thoughts (Score:3, Interesting)

    by kreyg ( 103130 ) <kreyg@shawREDHAT.ca minus distro> on Saturday October 20, 2001 @06:35PM (#2455364) Homepage
    Using the brain to store digital information:
    The problem is less one of interface than it is one of reprogramming neurons. While this might technically be possible, is there going to be any sort of information density advantage? Human memory has some really nice lossy compression, but that would make it a bad way to store digital data.

    Computers "understanding" and "speaking" human language:
    I think the only thing we've really learned in the last 30 years is that the problem is a lot harder than we thought it was 30 years ago. There are a multitude of problems, from simple parsing to having a large enough database to understand context. That, and we really don't know what problem we are solving. A speech interface to a database would seem to me to be a useful tool - "what is the weather going to be like today?" opens up the appropriate web page. "Find me a good price on a 1997 Honda Accord" hits the search engines, finds a few dealers in my area, and gets me some pages to view. We don't even have anything this sophisticated without the voice interface. (Speech-to-text + text-to-speech + Google) is not tons better than Google. Yet we expect a program with the depth of knowledge and subtlety of reasoning that a human possesses. My own version of the Turing Test, "I'll believe it when I see it," suggests to me that the system that can pass the Turing Test is a LONG way off.

    Software as a weapon:
    OK, ID was a poor example - I know I'm 1337 enough to reverse engineer alien technology in a matter of minutes and write a virus using a Mac, but that guy? But really, software as a weapon is only useful against those who use software, and only when that software is of critical importance. Even North Americans aren't THAT reliant on the 'net, although it might be wise to take precautions before we wire all of our brains together...
  • "Fools will come to my house and ask me to stop pitying them. It will be raining and I will not be home, so I will continue to pity them." - Mr T

  • How could Jeff Goldblum's character in Independence Day possibly be familiar with the operating system the aliens were using? There is no such thing as a virus that affects all platforms.

    That little fact has protected Macintosh users from most of the malicious code out there.
  • Blonde bombshells in sparkly catsuits, on alien worlds, capturing virile yet helpless Earth men to be their love slaves.

    Why aren't more of our scientists working to make this a reality?!
