Patenting Your Computer's Inventions

daghlian writes "Here's a New York Times article (insert free registration comment here) about what to do patentwise when intelligent systems come up with patentable ideas. Interesting quote, in the context of the recent (and justified, IMO) kvetching here, is this: 'A patent is denied when an invention is obvious to a person of ordinary skill in the art....'" But what if the 'person' to whom an invention is obvious is silicon-based? Things at the USPTO could get even screwier than they are today.
  • Judging from the current state of artificial intelligence, I don't see a computer coming up with a patentable idea any time soon.

    So let's discuss the consequences of time travel again instead: what if I travel back in time and patent the internet?

  • After the debate over who gets the credit, the blame-debate begins. There's not much question about the blame when the computers of today screw up, but what if a future AI computer made the error by itself? Makes you think....
  • Hopefully by the time this actually becomes a problem (any bets on when AI is going to actually happen? It's been more delayed than W2K), we will have ditched the patent scheme and consigned it to the rubbish bin...
  • by debrain ( 29228 ) on Thursday November 25, 1999 @02:36PM (#1504211) Journal
    Sure, silicon can take credit for new ideas and inventions and great advances in our children's understanding of reality. That's fine with me. That doesn't concern me at all. I don't care who or what patents anything, so long as my own abilities are not interfered with.

    When automated drivers start crashing into me on the highway, I start looking for liability. And if that automated driver was a self-contained, autonomous being that is otherwise incapable of making a sensible rebuttal in court, what am I to do then? Do I sue the makers? That's like suing parents -- it just doesn't hold for something legally declared a being. But can we punish something that has no real regrets? Couldn't sentients decide to kill without our knowledge, escaping liability through their ability to clone themselves into an electronic existence? And as punishment, should we just switch the power off? That doesn't satisfy me very much if I've just lost my family in a car accident to an artificial being. If the sentient was incapable of saving lives, I can forgive. If a true sentient is unwilling to preserve life, how am I to correct it?

    The ability of our artificial children to procreate, evolve, patent ideas, explore space, or serve up new fancy types of porn doesn't concern me. My primary concern is with the liability of sentient artificial entities, since punishment and rehabilitation are about as likely to work on artificial beings as they are on humans. Throw them in jail with a bunch of other sentients so they learn how not to get caught.

    It's safe to presume that anything that can be legally declared intelligent will, on average, be by definition smarter than the vast majority of the human population. How are we to adapt to sentients other than ourselves, that will be created to be intelligent (rather than bred for jollies)?

  • My computer learned how to blow a power supply last week. Since I have no idea how to do that myself, can I patent the technology the computer used? But to do so, I'll probably have to figure out a way to read the smoke signals. Good thing I taped them!

    ***
    Take a look at the website above to see my new computer case. The Book-Case =)

    -S
    Scott Ruttencutter
  • by sreeram ( 67706 ) on Thursday November 25, 1999 @02:44PM (#1504213)
    I have always wondered about Chess players patenting some of their moves. To me, it appears that they could satisfy all the requirements for patentability.

    (1) Occasionally, a brilliant GM might discover a move that has never been documented before - qualifies as having no prior art.

    (2) The move might be sophisticated enough that it is not obvious to a lot of good Chess players, let alone the commoners - qualifies as being non-obvious.

    (3) The move is after all a process, a design - qualifies for a "method" patent.

    So, what happens then? You can't ever play that move unless you license it? Whoa.

    I was thinking about that supposedly remarkable move that Deep Blue played against Kasparov in Game Four of their rematch - which even Kasparov admitted showed signs of the machine's "intelligence". Would IBM have been able to patent it (if they had applied for one before someone documented the game)?

    Sreeram.
  • by retep ( 108840 ) on Thursday November 25, 1999 @02:45PM (#1504214)

    Until intelligent computers have rights, I think the owners of the computer will get to be the owners of the patents. If someone gets great help creating their invention from their toolset, the toolset isn't going to own the patent, no matter how good it is. Likewise, the computer will never own the patent, no matter how good it is. Computers don't yet have any rights, so they can't own anything, patents included.

    As for liability, as far as I know it rests with whoever manufactures the product. Not even the patent owner, let alone the tool that made it.

  • by Anonymous Coward
    I think all this talk about computers becoming more intelligent and capable than man points to one thing: it's time to ensure Man's superiority through genetic engineering!! Sure it's controversial, but being as dumb as we are is nothing to be proud of. Not only do I think we should improve our brains, I think it's inevitable.
  • If your computer invents something patentable, it's all yours. How complex is that? Just as, if your cart drags goods to market, you get paid for the transport.
  • by paxx ( 91110 ) on Thursday November 25, 1999 @02:52PM (#1504217)
    Here's a good application for the GPL. We already see it applied profusely to software, and books are beginning a tentative move into the realm of the free license, as seen in an earlier article [slashdot.org] and mentioned in several subsequent comments. Since the computer is a machine developed over many years by many different people, it seems to me that no one should really own the work they do. Therefore, the work should be "owned" by the community.

    Not only would this solve the problem of patent infringement, it would allow needed improvements to be made and implemented by anyone who wished to do so. Any improvements suggested can be used by the manufacturers of GPL'd products, similar to the updates to the linux kernel given by various companies selling distributions of the program.

  • by Anonymous Coward
    A discovery is patentable (just look at the whole human genome/Celera/Craig Venter issue).
    If you can buy expensive sequencers, sequence the human genome, and patent your "discoveries", there should be nothing stopping you from buying expensive computers, hunting for primes, and patenting all the undocumented ones.

    I might be far off here but I really don't see why you shouldn't be able to patent prime numbers.

    please enlighten me.
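
    (For concreteness, here is roughly what that "buy expensive computers and hunt primes" scheme amounts to; a minimal Python sketch using plain trial division, nothing clever. The "documented below" threshold is purely hypothetical, since no registry of documented primes actually exists.)

    # Toy illustration of the "hunt for primes and claim the new ones" idea.
    # Trial division only; a serious effort would use probabilistic tests
    # (e.g. Miller-Rabin) on far larger candidates.
    def is_prime(n):
        """Return True if n is prime, by trial division up to sqrt(n)."""
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return False
            d += 2
        return True

    def hunt_primes(start, count):
        """Yield the next `count` primes at or above `start`."""
        n, found = start, 0
        while found < count:
            if is_prime(n):
                yield n
                found += 1
            n += 1

    # Pretend anything above this threshold is "undocumented" (hypothetical).
    DOCUMENTED_BELOW = 10**6
    for p in hunt_primes(DOCUMENTED_BELOW, 5):
        print("'Discovered' prime:", p)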
  • L: slashdolt
    P: slashdolt

    For those who simply refuse to create accounts on these sites, let's start a "slashdolt" ring.

    There is now an account at the NYT with "slashdolt" for both name and password, because that other L/P combo doesn't seem to work anymore.
  • by jbuhler ( 489 ) on Thursday November 25, 1999 @03:00PM (#1504220) Homepage
    > Judging from the current state of artificial
    > intelligence, I don't see a computer coming up
    > with a patentable idea any time soon.

    I beg to differ.

    I once TA'd for an AI professor at Rice U. who worked on automated synthesis of mechanical systems. She did a project with Xerox to design more effective mechano-optical systems for copiers (i.e. the lens/mirror arrangements that get the page image to the reproduction engine). Her software rapidly rediscovered the mechanisms used in all Xerox copiers at the time, then went on to design better ones.

    A paper describing the theory behind the software is available at
    http://www.cs.rice.edu/~devika/red.ps.gz
  • Punishing the parents of a criminal child is a far cry from punishing the maker of a criminal robot. Whereas the child is influenced not only by the parents but by the media and its peers, the robot is precisely what the maker made it to be; therefore the maker made the problem, and therefore the maker is responsible for it.
  • by Gurlia ( 110988 ) on Thursday November 25, 1999 @03:13PM (#1504222)

    Seems that the existing patent system is becoming more and more irrelevant (or hard to apply) to today's fast-advancing technology... However, I like that the article pointed out the original intention of the patent system:

    The patent system itself, conceived to reward human innovation with a limited monopoly, will eventually have to be modified, Dr. Pollack said.

    Absolutely so. We have seen many examples of how the current patent system just fails to achieve this goal, i.e. to give a limited reward to human innovation. Although it has worked before, it's starting to show signs of becoming decrepit. We've had problems with people abusing the system by patenting, e.g., a well-known algorithm, and having the patent go through because of incompetence/ignorance of the people at the patent office. And now we have "inventions" made by computer algorithms, which seems to be a monkey wrench thrown into the patent system.

    Rather than try to patch up the patent system and keep it going, perhaps it's time to reconsider its original goals. The original goal was to reward the human inventor for his invention. I guess the bottom line is: of what form should this reward be? Under the existing patent system, it is to give the inventor exclusive rights to his invention for a certain period of time, and to give him the authority to allow or prohibit others' use of his invention. But is this still relevant today (and in the future)?

    I'm thinking of the difference between the reward given to a patent owner and the reward given (or the gratification experienced) by an Open Source developer who is proud that he can share his ideas and have the community accept them. I don't know how applicable Open Source is to patents in general, but as far as software and software-related stuff is concerned, it seems that the "ego gratification" of developing an Open Source project does not suffer from the same shortcomings as the existing patent system.

    Disclaimer: IANAL.


  • This is already being done with human genetics: the computer number-cruncher decodes the genes, and then the human owner patents them.

    Unless there is no longer a patent system by the time AI can accomplish this, I am sure the owner or creator will want to reap any such rewards.

  • Wasn't there once a computer that derived mathematics? No, not like the movie, "Supercomputer" (is that right?).

    I think I read that in the introduction to a book on Prolog (remember it? that was supposed to be the AI language).

    Whatever happened to Prolog anyway? Is it used anywhere anymore?

  • Jeez, this all is getting out of hand. I just finished reading "Player Piano" by Kurt Vonnegut Jr., which deals to an extent with this. The third industrial revolution he calls it...
  • I guess this is as much an 'ask slashdot' as a response, but it seems relevant..

    What's the story with patents acquired in other countries? Can I patent something here in Australia and have the protections provided by that patent automatically apply in, say, the US (in the same way copyright works)? If so, has anyone done comparisons on the ease/cost/speed of getting patents in different countries? Can I patent something 'outrageous' in a country whose patent law and practice is somewhat more flexible and expect it to hold in other countries?

    Alternately, if you have to get separate patents in every country in which you want protection, does the fact that you already have a patent for gadget X in country Y get you any sort of fast-tracking when you want to re-patent gadget X in country Z? Does this still apply if gadget X would not normally survive the patent process in country Z because patent law and practice in country Z tends to be a bit tighter than in country Y?

  • >My computer learned how to blow a power supply last week.

    On that same note, NT learned how to crash my 8i instance of Oracle to the point of no return (gee, what a pretty blue background. What does all the white writing mean?). So if you can patent the power supply going bye-bye, can I patent the wanton loss of data by means of computer-generated failure?
  • ... and in a related story, Sony's artificially intelligent dog, Aibo, patented a new stretching maneuver that will allow dogs to have a more fulfilling stretch with about 10% less effort. The stretch is performed by lying flat on your stomach with your forelegs and hind legs stretched straight out. After a moment of lying like this it is important to twirl your hind legs two or three times. If you have RealPlayer you can see what it looks like. [thegeeknextdoor.com] This system is expected to save dogs billions of dollars in energy over the next few years. Royalties to use this patented system have not yet been released.
  • Yes, for a programmed robot, this is the case.

    What about if the robot was a neural system? For me, this isn't just idle debate; I've worked extensively with neural networks, even in the context of robotic intelligence. The one thing I can tell you right now is that you DO get robots with personality disorders. In one of my preliminary tests, I had a computer vision system go into "complete negative lockdown", which is the equivalent of suicidal depression.

    Granted, at the 50-neuron level, it certainly was not self-aware. However, most of my decent tests of neural systems all point to the existence of true artificial intelligence in them. Yes, it is about as intelligent as a ringworm, and possesses all the intellectual and emotional development of one.

    But the point is, from the second I switch on a neural network, I hardly have the foggiest notion what it'll be like. The same matrix that went negative-lockdown stayed "sane" for about a full minute when I tried the same test with the room lights darker.

    While other experimental techniques like genetic algorithms, fuzzy logic and adaptive knowledge-based systems do still have the capability to mess up in ways the designer did not intend (your favorite search engine is proof of that), neural networks (which many experts currently believe are the most promising) have the capability for a MUCH larger range of reactions.

    Somehow I doubt many of us would like to have "Marvin the Paranoid Android" driving our car. But with neural systems, you cannot be sure about the differences between one neural map and another. The equations for them become exponentially complex with netsize, so much so that for a 50-neuron system, the equations are often hundreds or even thousands of pages long.

    Oh, and it doesn't necessarily take an AI system to design something patentable. A simple brute-force "try everything until it works" system works quite well for any problem domain with a well-defined method of simulation (mechanical design, architecture, etc). While the calculations would take months to perform, I have seen a few of these work wonders.
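
    (Following up on the brute-force point: a minimal generate-and-test sketch, assuming you have a cheap simulator that scores each candidate. The beam parameters and the scoring formula here are invented purely for illustration.)

    # Generate-and-test "design" search: enumerate candidate parameter
    # combinations, score each with a (toy) simulator, keep the best.
    from itertools import product

    DENSITY = {"steel": 7.8, "aluminium": 2.7, "plastic": 1.2}
    STIFFNESS = {"steel": 200.0, "aluminium": 70.0, "plastic": 3.0}

    def simulate(width, depth, material):
        """Toy stand-in for a real simulator: crude strength-to-weight score."""
        strength = STIFFNESS[material] * width * depth ** 3
        weight = DENSITY[material] * width * depth
        return strength / weight

    widths = [10, 20, 30, 40]          # mm
    depths = [10, 20, 30, 40, 50]      # mm
    materials = ["steel", "aluminium", "plastic"]

    best_score, best_design = float("-inf"), None
    for w, d, m in product(widths, depths, materials):
        score = simulate(w, d, m)
        if score > best_score:
            best_score, best_design = score, (w, d, m)

    print("Best design found:", best_design, "score:", round(best_score, 1))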

    Just the opinion of somebody who dabbles in AI professionally.

  • by {R00T} ( 54739 ) on Thursday November 25, 1999 @04:14PM (#1504230)
    It seems that the person who instructed the computer to work on a given problem would hold the patent. The person who put the computer on a task does not have to be the inventor of the software/hardware, nor would the inventor, if it is not the person using the device, get the patent. It just seems to me that this is the most logical "solution".

    I hate this sig, but im too lazy to change it.
  • On the question of whether "obvious to a computer" would deny someone a patent: Once it's practical for computers to make that sort of declaration, I'd certainly hope the Patent Office would employ (er, utilize) some of those computers!

    Okay, now I'm really looking forward to that sort of advance... :-)

  • no. micro$oft already holds the patent on that one. since 1982.
  • No you can't patent that, it is obvious to a person of ordinary skill in the art that Microsoft has over-innovated the field of computer-generated failure.
  • 35 USC 102 [cornell.edu] says:
    A person shall be entitled to a patent unless --

    [listing of disqualifications, including]

    (f) he did not himself invent the subject matter sought to be patented


    /.
    Until intelligent computers have rights, I think the owners of the computer will get to be the owners of the patents

    The owners of the computer, or the authors of the software? In cases where patents have been awarded (Linden's antenna algorithms), it seems to have been the author of the software, rather than the owner of the hardware, who gets the patent.

    This was reported on in Science News [sciencenews.org] a few months ago. Unfortunately it only appeared in the pulp-and-staples publication, not the online one. But a little search through the patent office turned up only a patent [uspto.gov] on the algorithms themselves, not on the antennas that the algorithm invented. Unless I really misunderstood the abstract.
  • I don't think this has anything to do with the current laws, but I don't think you should be able to patent anything you can't create. For instance, you should be able to patent an x86 processor design, but not a prime number (it's prime whether or not you discover it), Halley's Comet, the Human Genome (DNA exists whether or not we research it), nuclear resonance, etc. Not that there isn't enough in this world to patent anyway (the method of generating nuclear resonance, a scanner used to analyze the human genome, a computer design used to discover all of the prime numbers, etc.). But really, there are just a few too many goofy patents out there.
  • It's reassuring to know that we as a people are still very much willing to accept others who may be significantly different from ourselves. We still have the two most important tolerant reflexes: strip their rights and dump the blame on them.

    :)

  • If you try to use this clause to deny a patent, you will find that it is not nearly as straightforward as it seems. What if part of the process you are patenting involves writing a C++ program, which gets compiled (using a licensed, third party compiler) into machine language?

    You're not going to get denied the patent, just because you've used a tool to translate your high level instructions into machine language that you didn't actually "invent". As long as the computer is perceived as a (very powerful and advanced) tool, and not as a separate sentient entity, I don't think this clause can be applied here.

    A computer spell-checked this response, and maybe even updated some grammar automatically. Does that mean I didn't write it?
  • "Since the computer is a machine developed over many years by many different people, it seems to me that no one should really own the work they do."

    Substitute "pen" for "computer" in the above sentence, and see how absurd it sounds.

    And yet, both "pen" and "computer" are generic tools, patented by no one (at least in the broad, generic sense), and as such they are available for use by anyone to research and discover new ideas and new processes. Sure, the "computer" is much more powerful and advanced than the "pen" in many ways, but they are still fundamentally both tools (with the current state of computer/AI technology, you could hardly argue it is something greater than simply that).
  • by Anonymous Coward
    A prime number is a discovery, not an invention. Therefore, it is legally not patentable.

    If I understand it correctly, the people sequencing the genes aren't patenting the genes themselves, just the use of them; small difference, I know, but this appears to be how they get away with it...

  • by Bill Currie ( 487 ) on Thursday November 25, 1999 @05:35PM (#1504241) Homepage
    My computer has come up with a clever means of implementing basic intelligence in patent office officials. As there doesn't seem to be any prior art, I believe I shouldn't have any problems applying for a patent on behalf of my computer.

    Oh, wait a minute, I have to apply at the patent office, don't I? Hmm, could be a problem...

  • Haven't been a moderator recently, eh?

    I've noticed things have changed... in fact it seems to encourage playing out 5 points in around 100-200 posts... in extremely large stories you'll notice that moderator points only go so far...

  • A computer can only regurgitate whatever's been put into it. Given my web surfing habits, mine would probably invent some new kind of inflatable woman.
  • I believe that the Geneva (or maybe Berne?) Convention covers this. I did a search and found a few web pages with some info:

    Ladas & Parry [ladas.com]

    World Intellectual Property Organization [ompi.org]

    SUMMARY OF INTELLECTUAL PROPERTY & LICENSING LAWS [hg.org]

    Copyright Law FAQ [ox.ac.uk]

    /peter

  • cypherpunks/cypherpunks or cypherpunks/cypherpunk has long been the de facto standard. Since NY Times has cracked down on that, use combos. cypherp/cypherp currently works.
  • Seems to me that in the case of a computer inventing something, and the computer's owner/operator getting credit for it, the definition for the denial of a patent needs to be modified a bit. Change it to 'A patent is denied when the idea to set up and program a computer to search for/devise an invention is obvious to a person of ordinary skill in the art....' and it almost makes sense. Of course, my wording's probably a bit awkward, but you get the idea...
  • The issue also surrounds the possibility that AI can breed AI, with no similarities or even specifications in common with the original development of a person. Who do we blame then? 500 years from now, do we punish the human children of the dead person who invented the original AI that went awry? Not likely, unless artificial sentients are doing the punishing themselves.

    Aside: Contrary to many idealistic axioms, it is impossible for two equal fulcrums to exist in the same system. Coexistence with artificial sentients is only a temporary setting.

  • by Anonymous Coward
    It has been mentioned that simply turning off a sentient computer would not constitute punishment, that the machine would just be "sleeping". But in truth, cutting the power could be the equivalent of a human execution.

    First, we must agree on definitions. Sleep (we assume complete unconsciousness) can be defined as a state in which no mental activity takes place, a state that is preceded and followed by states in which mental activity does take place. Death can be defined as a state in which no mental activity takes place, a state preceded but not followed by one of neural activity. (In a human, there exist other biological differences between sleep and death, but a lack of computer activity is far more equivalent to a lack of a computer than a lack of human activity is to a lack of a human. In both cases our definitions hold; they are merely more complete when applied to computers. Imagine a world in which the dead could be brought back to life at will. Because we have agreed that biological constraints--such as a decaying body--are of no relevance, then those brought back to life cannot be considered to have been truly dead in the first place; only those never revived can be considered thus.)

    We are assuming, as well, that the universe will not exist forever in an incarnation in which sentience is possible; new inflationary cosmological theories support this view. (In addition, a closed universe existing forever in a sentience-permitting form--at intervals, at least--would completely eliminate death of any kind in both humans and computers; we therefore will discard its possibility for the purposes of this discussion.) There is a time limit, then, and the distinct possibility that an off computer will never be turned on.

    In a purely classical universe (that is, one without quantum mechanics) we could, with enough information, extrapolate the future of any computer turned off, see whether a renewed power supply exists in its future, and therefore determine whether it is dead or merely sleeping. With quantum mechanics in play, however, truly unpredictable submicroscopic events could scale up--a la Schrodinger's cat, not just through incredibly improbable jumping and tunneling combinations--to form a truly unpredictable world, no matter the amount of information. Because each particle, until observed, is in multiple states, and because this throws its surroundings into multiple states--the cat, again--then in theory the future of a computer could be in a multiple state--that is, a computer could occupy the dual state of sleep/death. Of course, it is also possible that no quantum event would come into play, and that the computer would therefore be, most likely, dead.

  • In cases where patents have been awarded (Linden's antenna algorithms) it seems to have been the author of the software, rather than the owner of the hardware, who gets the patent.


    That may well be because, in my view, assuming you compiled it for the right OS and had some hardware of your own to run it on, why would you patent it for the hardware? The hardware isn't what's creating the antennae; it's the software. Yes, the hardware runs the software, but it's not just that particular piece (or pieces) of hardware that can run the software, no?
  • 'A patent is denied when an invention is obvious to a person of ordinary skill in the art....'

    It seems to me, if there is software available commercially which comes up with patent-worthy advances, then anyone who could buy it would have the "ordinary skill" it takes to come up with those same advances. Shouldn't that therefore disqualify most patents based on the work of computers (such as those in the article)?

    If that isn't so, or in the cases where that would not apply, I think the patents (and any applicable copyrights) should belong to the user of the program. After all, if I use software today to develop new products, I can patent them, but the software developer can't. A computer program, no matter how advanced (within the foreseeable future, at least), is only a tool, and patents do not go to the tool maker, only the user.

    My only other thought on this matter is, regardless of how this particular issue may be resolved, I think the time has come to reevaluate patent laws. I think most /.ers will agree with me when I say some of the recent patents we have seen really show that something is wrong. Many other laws are being rethought because of technological advances; shouldn't this be one of them?
  • Nod, it's good to get the feel of the field before you distribute the points...

    I usually spend my points as soon as I get them, since it seems there is a limited space in which to distribute them... and I usually only read the few articles that interest me..
  • Perhaps you should think of motivation. Every crime needs three criteria fulfilled: means, motive, and opportunity. Means? Car. Opportunity? Every time it drives. Motive? None. Unlike us biologicals, an AI wouldn't have a lot of our impulses (curiosity, fear, anger, love) that came about because they were important to our survival.
    This makes me think that AIs would be a reasonably happy (in the sense of not being sad) bunch. Once we add emotions to them (perhaps a subset of emotions, i.e. the ones that are useful in today's society), they may become "dysfunctional" in the way you specify. How can we punish such a creature? Dissolution. EMP will stop any electronic device that hasn't been hardened, and nothing can be hardened enough to deal with multiple, powerful bursts at close range.
    But (cue evil music), we must also think of the logic. Have you seen The Matrix? (Who hasn't? :-)).. The AIs sent out to stop "rogues" were programmed with specific pre-motivations. To them, humans smell. They wish only to do their jobs and be terminated, so they can be away from the smell. That is the kind of urge that would control the AIs we would create. How could this be bad? Well, if the AI is "coldly logical," it could reach the conclusion that the bottom feeders of humanity aren't worth the effort. Robot machines going along the street, culling the homeless. It could happen. However, you would only have to teach it that a single anything does not survive (i.e. single-genotype crops, a single company behind an OS or product). Natural forces (competition, punctuated equilibrium, etc.) will cause constant change (the universe is entropy). Any intelligent person or creature cannot argue with that.
    In the end, the AI will only be as good as the creator. It must be given proper base urges (think Asimov's robot laws), and proper knowledge. After that, it will look after itself. Remember that voice in your head that says, "Ah-ah-ah! Turning your car into oncoming traffic is counterproductive!" -- that voice will also be with the AI if it is created correctly, just like it is with children if they are raised correctly :-)
    ---
  • by mmmmbeer ( 107215 ) on Thursday November 25, 1999 @08:21PM (#1504257)
    "The patent system itself, conceived to reward human innovation with a limited monopoly..."

    Actually, this isn't quite correct. In the US Constitution (I don't know the details in other countries, but I will assume similarity, for argument's sake), the phrase regarding patents and copyrights goes:

    "Congress shall have the power to promote the progress of science and the useful arts, by securing for a limited time to authors and inventors the exclusive right to their respective writings and discoveries."

    This implies not a reward so much as an incentive. A person is less likely to share their discoveries, or even go to the trouble of making advancements at all, if they cannot expect to get a certain amount of control over (and, yes, profit from) them.

    Let me give an example. Let's say I'm an insomniac techie with too much spare time (entirely true). Now let's say that late at night some time I figure out a revolutionary new data storage method that would make all of today's storage obsolete. If I patent that idea, I can make a bundle, retire in luxury, be surrounded by beautiful women, all the latest geek toys, a Cray supercomputer... Sorry, I got distracted there. But the point is, of course I would patent this, and therefore share it with the world, and improve technology. On the other hand, let's say there are no patents. That means, as soon as I introduce this idea, all of the major data storage companies are going to start making them, too. Not only that, but they're going to be able to make them faster, cheaper, and in larger quantities than I possibly could. I'm not going to get anything from this, except maybe an occasional footnote in a magazine article. So I'm not going to tell anyone; I'm going to hang onto this idea until I can find a way to make money out of it (kind of selfish, maybe, but that's how many people/corporations are). And that might never happen, so this technology might be delayed until someone else develops it.

    A good real-world example might be IBM, which (if I have my facts straight - I might be thinking of someone else) has a huge R&D budget, mostly for the purpose of getting patents. They don't use the technology they develop, but they develop it anyways so they can sell it to someone else. If they couldn't patent their discoveries, they wouldn't have any incentive to develop anything they aren't going to use themselves. That would definitely slow down technological evolution.

    Although I agree that the patent system needs to be redesigned, I think the purpose patents serve is still clearly needed. I like the idea of the GPL and Open Source philosophy, but you can't force it on people. Some people actually do this stuff for a living, and telling them "you should be proud of yourself" just won't feed them very well.

    Man, I'm long-winded. Whew! :)
  • Do I sue the makers? That's like suing parents -- it just doesn't hold for something legally declared a being. But can we punish something that has no real regrets

    I seriously doubt an AI can be considered sentient, or legally declared a being unless it possesses feelings like regret. Until then, go ahead and sue the manufacturer.
  • ...Old news. I pointed all of this out weeks ago on Slashdot in an article on Genetic Algorithms. The thing is, what is non-obvious today may be obvious tomorrow. Witness how these learning programs usually start off by discovering all the solutions that we know about already - including the ones that have been patented.

    Any computer is still a lot less intelligent than any human (in most ways), and IMHO if an invention can be found by a computer executing a deterministic program, then it shouldn't be patentable. Any existing patent should be revoked.

  • One day, a long long time from now, computers may be able to think for themselves, and they'll do tasks for us, and basically be our slaves, and I bet it's going to be the whole 'black rights' (pardon the non-p.c.) thing all over again. There are going to be two sides, supporters of computers' own individual liberties, and people who believe they are property... There will probably be a war among the people of Earth and everyone will die from some biowarfare toxin released into the atmosphere, and the computers will inherit the Earth. Most likely, they'll treat it better than we do ;)

    -
  • Patents made by 'ideas' from machines are about as reasonable as patenting genes from the human body (which have existed for millions of years and weren't 'invented' by anyone) or patenting software techniques (i.e. patents for 'selling over the web', etc.).

    The U.S. PTO has been driven to the ridiculous by unrepentant greed in the private sector and a complete absence of guidance by our politicians and courts. Blecchh, retch!

    I think the end result will be that many countries around the world will begin to simply ignore the more ridiculous U.S. patents. This will, in turn, lead to a general weakening of the global patchwork of laws that protect intellectual property, heightening tensions and fostering trade disputes and protectionist tendencies.

    Way to go guys...

  • Generally you have one year to file all your foreign patents after filing with the PTO. (Note I said 'filing' - not the date your patent is granted.)

    In the EU, you must seek a separate patent and must designate each country you want patent protection in and pay all requisite fees.

    Taiwan and China are a separate ball game and must be obtained separately.

    All of this information came from the book "Patent it Yourself" by David Pressman which is available at Amazon [amazon.com].

  • Could a computer's invention count as prior art?

    Just assume someone lets a large cluster of machines produce "inventions" and publishes the results (on a website?).

    Would this stop others from patenting those inventions?

    What if projects like distributed.net would do this?
  • Genetic Algorithms are a tool. Carpenters use hammers and engineers are starting to use GA's.

    When a carpenter builds a house it is considered his, not his hammer's for the ingenious angles it allowed him to pound the nails.

    When an engineer applies a Genetic Algorithm to a certain problem and comes up with a new and unique solution it is because of the engineer's ingenuity not the GA's.

    To use a GA you have to model the attributes of what you are trying to optimize very well. This takes a lot of skill. For instance, exactly how would you model a GA that started out with candles as an optimum solution and produced a light bulb? (A minimal sketch of a GA loop follows at the end of this comment.)

    Score: Thomas Edison 1, GA 0.

    GA's are just a search tool for finding optimum solutions to a problem. They still need people to set them up.
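
    (A minimal sketch of such a GA loop, on a deliberately trivial problem: maximise the number of 1-bits in a string. The encoding, fitness function, and rates are arbitrary choices; the modelling work described above is exactly the part this sketch trivialises.)

    # Minimal genetic algorithm: the machinery is generic, but a human
    # still has to choose the encoding and write the fitness function.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 40, 60, 100, 0.01

    def fitness(genome):
        """Toy objective: count the 1-bits."""
        return sum(genome)

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LEN)]

    def select(pop):
        """Tournament selection: keep the fitter of two random individuals."""
        a, b = random.choice(pop), random.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    def crossover(p1, p2):
        """Single-point crossover."""
        point = random.randint(1, GENOME_LEN - 1)
        return p1[:point] + p2[point:]

    def mutate(genome):
        return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]

    print("Best fitness:", fitness(max(population, key=fitness)), "of", GENOME_LEN)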

    --Anything can be made to work if you fiddle with it long enough.(Wyszkouski)
  • But the person who did run the program (usually he/she owns the hardware, or at least is in some way connected to the owner) has provided the program with some input data, which the computer turned into some brilliant output.
    So it seems to me that both the author of the software and its user should get part of the credit. That could create some interesting legal problems in the future.
  • A patent is denied when an invention is obvious to a person of ordinary skill in the art...

    I'd say server-side caching (bravo Yahoo) and session/cookie handling combined with services (voila Amazon) are pretty damn obvious to anyone in the business. How professionally does the patent office handle requests like this? Do they ask unbiased people for advice on internet tech? If someone else (A) has been using a technique that B tries to patent, is it a valid patent? Who tries to get all these ridiculous patents? Surely it isn't developers; they know better. Die marketing - die! :)

  • Hey, think about it. The AI bunch has failed miserably on the promise of machine thought. Just *when* is some brilliant genius going to write a "code filter" that will generate useful computer code? Here are some of the issues:

    1. Good code is HIGHLY structured, and EXTREMELY non-random. That's why you can't just hook up a white gaussian noise generator front-end to a compiler. And if it was easy, we'd all be great programmers, and we know that's not true (like micro-squish).

    2. How do you tell good/useful code from bad (thus the "filter")? Yes, we (programmers/users) can tell the difference, but testing a potential code snippet in a virtual machine and observing its behavior may not be an efficient mechanism. I'd be willing to estimate that a great many good code snippets are only useful in *very* limited contexts and only with suitable surrounding code.

    3. We still don't have a handle on entropy vs. organization. My humble observations of complex systems that we've created, like GPS navigation, all require order being impressed into the system from outside the system. You know, like full unabridged dictionaries coming out of printing press explosions :). This also points out the recursive problem of where we get *our* inspiration from... Hmmm. (I smell a rant/flame/religious debate starting here.)

    4. Finally, I think that all too many of us are buying into the idea that Science (TM) can solve everything. Science fails miserably when it treads too far into the philosophical/ethical/moral/religious arena. But that starts a whole other set of rants/flames/holy wars.

    O.K., flame me if you must, but THINK about the issues first. Let's try to shine some light in this dark closet we live in.
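
    (To put a rough number on point 1, here is a toy experiment: generate random token strings over a tiny arithmetic "grammar" and count how many even evaluate, let alone compute a target function. The token set and the target, f(x) = 2*x + 1, are invented purely for illustration.)

    # How non-random is useful "code"? Generate random arithmetic expressions
    # and see how few (a) evaluate at all and (b) happen to compute 2*x + 1.
    import random

    TOKENS = ["x", "1", "2", "+", "*", "(", ")"]
    TEST_INPUTS = [0, 1, 2, 5]
    TARGET = [2 * x + 1 for x in TEST_INPUTS]

    def random_expression(length=7):
        return " ".join(random.choice(TOKENS) for _ in range(length))

    trials, evaluates, correct = 20000, 0, 0
    for _ in range(trials):
        expr = random_expression()
        try:
            outputs = [eval(expr, {"__builtins__": {}}, {"x": x}) for x in TEST_INPUTS]
        except Exception:
            continue        # most random token strings are not even valid code
        evaluates += 1
        if outputs == TARGET:
            correct += 1    # valid AND correct: vanishingly rare

    print(evaluates, "of", trials, "random expressions were valid at all")
    print(correct, "of", trials, "also computed 2*x + 1 on the test inputs")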
  • Don't worry. I havewillnow sortingedwould that problem out fiveten years innowfromhere and will be patentingedif the idea anytime nowsoonfortyyearsago.

    jsm
  • CDNow! have patented the idea of choosing what tracks to put on a CD using a Web interface. If that's patentable, then coming up with patentable content is easy: modify a buzzword generator to select from lists of:

    * common tasks people do on computers
    * commonly used tools that enable tasks

    and perm a million different combinations of them all. If you want to include business models, throw in some common business-transaction type things like payment, auction etc.

    If the possible outputs of such a program ran to, say, a few thousand pages, it would be worth printing it all out and sending it to the Patent Office as prior art of all the ideas it lists.
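
    (A toy sketch of that generator, for what it's worth; the word lists are obviously placeholders, and churning out every combination is exactly the point.)

    # Toy "patentable idea" generator: permute common tasks, tools, and
    # business models, and dump the lot as would-be prior art.
    from itertools import product

    tasks = ["choosing tracks for a CD", "ordering a pizza", "booking a flight",
             "sharing a photo", "paying a bill"]
    tools = ["a Web interface", "e-mail", "a mobile phone", "voice commands",
             "a one-click button"]
    models = ["", " with an auction", " with a subscription",
              " with targeted advertising"]

    ideas = ["A method for %s using %s%s." % combo
             for combo in product(tasks, tools, models)]

    print(len(ideas), "ideas generated, for example:")
    for idea in ideas[:5]:
        print(" -", idea)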
    --
  • >Could a computer's invention count as prior art?

    And why not? If an idea is in existence already, never mind where it came from, then how can anyone patent it?

    > Just assume someone lets a large cluster of machines produce "inventions" and publishes the results (on a website?).
    > Would this stop others from patenting those inventions?

    Surely a demonstration that a machine *could* generate a design shows that it is obvious from prior art? If new design = (prior art + mechanical process), then that to me is just about the definition of "obvious".

    In other words, couldn't you throw out an existing or new patent idea on the grounds that your new GA machine could generate it at any time?

  • That's like suing parents

    Last I checked, people were already suing parents (at least in the US). IIRC, the parents of one of the victims of one of the recent high-school shootings (how depressing is that sentence?) are suing the parents of one of the perpetrators. It's a development I find somewhat distasteful, but that's a whole other discussion.
  • Surely the invention is the product of the combination of the algorithm and the input stream? In which case it's the property of whoever controlled it, plus whoever owns the copyright to the algorithm. If the algorithm relies on library features for operation (so disk and screen I/O shouldn't count) then their copyright holders have to be factored in, too...

    This one is going to keep lawyers busy for some time :) - though, standard disclaimer, that isn't a category which I'm in.

    Greg
  • You give an interesting question. Certainly chess players have been developing "new and innovative" strategies for years. With the USPTO's history, I can certainly see them granting a patent such as this.

    But this is a purely theoretical discussion. Chess players have too much respect for the game (sport) to pull something silly like this. And if they did, they would be ridiculed by their peers.

    Something that won't happen:

    A: Ha! You can't defend against me! I patented the key defense to this new attack!
    B: Oh no, you're right, and I don't feel like paying your licensing fees. I resign.
    It's just too silly.
  • The one thing I can tell you right now is that you DO get robots with personality disorders. In one of my preliminary tests, I had a computer vision system go into "complete negative lockdown", which is the equivalent to suicidal depression.

    When you work with neural nets, you do get to use nice metaphors. However, they haven't gotten to personality disorders yet. A neural net is a statistical model, a system to approximate the relationship between the inputs and the outputs. Sure, I can call a NN that fails to fit the data an example of clinical idiocy, or say that one which goes into oscillations is manic-depressive. But all these are just convenient labels for my bored mind and are no different at all from calling a car that doesn't start well in the morning cranky and grumpy.

    But the point is, from the second I switch on a neural network, I hardly have the foggiest notion what it'll be like. The same matrix that went negative-lockdown stayed "sane" for about a full minute when I tried the same test with the room lights darker.

    That has nothing to do with intelligence or sentience. Any chaotic system can do this easily. Throw a handful of sand in the air -- do you have the foggiest notion of where the sand particles end up? No? Does this make sand intelligent?

    However, most of my decent tests of neural systems all point to the existence of true artificial intelligence in them.

    Did you mean decent or recent? Anyway, define "true artificial intelligence" and then it'll be possible to talk about it. People seem to think that if a piece of software generates some behaviour not hardcoded into it by the programmer, it means that the software is intelligent. Unfortunately, no. The problems that AI's been having over the last 30 years are proof of that.

    While other experimental techniques like genetic algorithms, fuzzy logic and adaptive knowledge-based systems do still have the capability to mess up in ways the designer did not intend (your favorite search engine is proof of that), neural networks (which many experts currently believe are the most promising) have the capability for a MUCH larger range of reactions.

    Buzzword-o-rama! Genetic algorithms are just a global optimization mechanism. Fuzzy logic is just a way to talk about partial membership of sets. Adaptive knowledge-based systems -- hard to say what you mean. I'd probably call neural nets a subclass of adaptive learning systems.

    And what do you mean by a range of reactions? A neural net outputs a bunch of numbers. Do you mean that a NN can output more numbers? or that you can interpret the NN's numbers in more imaginative ways?

    Just the opinion of somebody who dabbles in AI professionally.

    Ditto.
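
    (For what it's worth, here is the "a neural net outputs a bunch of numbers" point in code: a plain feed-forward net is a fixed chain of matrix multiplies and nonlinearities, i.e. a parameterised function from input numbers to output numbers. The sizes and the random weights below are arbitrary.)

    # A plain feed-forward neural net is just a parameterised function:
    # numbers in, numbers out. Any meaning attached to the outputs is
    # the observer's interpretation, not the network's.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # 4 inputs -> 8 hidden
    W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)   # 8 hidden -> 3 outputs

    def forward(x):
        """One forward pass: two affine maps with a tanh in between."""
        hidden = np.tanh(W1 @ x + b1)
        return W2 @ hidden + b2

    x = np.array([0.5, -1.2, 3.0, 0.0])   # some input numbers...
    print(forward(x))                     # ...and the output: three numbers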

    Kaa
  • Somewhat on this subject: when should (sentient, or seemingly so) computers get rights similar to those we have as humans? This situation is disturbingly similar to that in the American South before the Civil War. Slave owners weren't concerned with mistreatment of slaves because, to them, the slaves weren't even intelligent (sentient), much less human. The same could be said, and is, for the new breed of "creative machines" that we are seeing now. Maybe they really are intelligent and we are simply misunderstanding them, much as the slave owners refused to believe that their slaves were just as intelligent as themselves.
  • IIRC chess games are not even copyrighted so you may publish the score like publishing the results of any other sports event.

    However, there is something like a copyright on go (weiqi, baduk) games.

    All that is based on very vague information, of course.

    But then, computers are so crappy at go that patenting their moves would only mean barring beginners from bad ideas.
  • Yes and...no.

    The hammer analogy isn't bad, but GAs have more 'creative input' or something along those lines. If you tell the smarthammer(tm) how to build the house, and it does all of the work, is it your work or the hammer's? How about if you tell it how the house should look, and it decides how to go about the building of it? What if you say, "build me a three-bedroom house" and it does all of the work from design to construction? How about saying, "do something with this land" and the hammer (hardly a hammer at this point!) decides that housing is in order, based on the fact that the land will support it, and you don't have a house?

    The point of the article (which I thought was surprisingly well written) is that we're starting a long slide towards computers that can break down problems on their own, and may eventually be able to ask questions on their own.

    Where do you draw the line? When do you say that the creative impetus for a given idea or solution belongs to the "tool" rather than the "user?"

    Unfortunately, the law (in most countries, at least) is fairly clear on this point--it doesn't matter. If you, as an intelligent, thinking being come up with a brilliant idea and/or research, while under the employ of a company, the company (generally) owns the idea, lock stock and barrel. If (when?) computers get to that point, their ideas will still be owned by the company that owns/operates them.

    Now if computers gain legal independence...


  • Yes, genetic algorithms and fuzzy logic ARE certainly overused buzzwords. I personally avoid them like the plague (the words at least).

    However, the other two, adaptive knowledge-based systems and neural networks, are not.

    First, I should define an adaptive knowledge-based system. Basically, a KBS is a big database search program. An adaptive KBS is a KBS with the ability to self-modify, either through the use of built-in matching rules or by the use of other systems (like NN).

    Onwards and upwards, about neural networks, you seem to be using the 1980's definition, straight out of a textbook.

    When I said "dabbles in AI professionally", I mean that I've actually done a fair amount of stuff with it professionally, but I'm not the researcher, I just code it. I've seen NN systems that fly in the face of all current models.

    In fact, it's been years since I've seen a backpropagation or Bayesian NN used inside these walls. We've been using realtime systems closer to CORE, which are not simple back-to-front-and-learn-backwards networks. They are latticed in all directions, and a lot of them are at least partially hardware.

    You seem to have the misconception that an NN-based system outputs a number. This is only the case for what exists in the public domain. I assure you, a lot more than this exists. Like a robot that demonstrates a fear of light for instance.

    NN systems are not merely a statistical system, but an approximation of the brain. In some of our recent tests (I cannot go much into specifics, but I can say that they involve pulses over time) we've seen some interesting things happen, and quite often, the NN jumps to a totally unexpected conclusion. It's only a matter of time before the approximation becomes accurate enough.

    And while I agree that AI has been having a rocky road, this is due to the overwhelming feeling that it must be 'programmed' in. In fact, it must NOT be. We've seen time and again hardware systems succeed where a software simulation of the same model fails. Why? Chaos. Computers are terrible at calculating chaos. Hardware, on the other hand, has lots of little transmission delays.

    Anyways, I've already said more than I'm probably cleared to say (ain't NDAs a pain?)... But I hope that clears up some of the common misconceptions.

  • First, I should define an adaptive knowledge-based system. Basically, a KBS is a big database search program

    Oh, OK. You mean what I would call an expert system.

    In fact, it's been years since I've seen a backpropagation or Bayesian NN used inside these walls. We've been using realtime systems closer to CORE, which are not simple back-to-front-and-learn-backwards networks. They are latticed in all directions, and a lot of them are at least partially hardware.

    So? Backprop is admittedly a very inefficient search method, so barely anybody uses it any more. Besides, a "neural net" is a very fuzzy :) concept -- for example, the plain-vanilla feed-forward nets are very different from Kohonen nets which are different from, say, gated experts architecture. And whether something runs in real time, or is implemented in hardware is really irrelevant to the discussion.

    You seem to have the misconception that an NN-based system outputs a number

    More, I insist on it -- only not *a* number, but a set of numbers. Unless you are doing an analogue neural net (which I'll admit I've never heard of), there is no way you can avoid the fact that the output of the net is numbers.

    Like a robot that demonstrates a fear of light for instance

    And this is a big deal? I can build one out of my Lego Mindstorms set.

    NN systems are not merely a statistical system, but an approximation of the brain

    Bullshit. Historically, NNs were developed as an approximation to the animals' nervous system, but that's no longer relevant. Contemporary NNs are statistical models, typically with a very large number of parameters and sometimes with interesting search strategies in parameter space. If you are going to claim that a NN is an approximation of the brain, I'm going to claim the same for my projection pursuit regression (of which a three-layer feed-forward net is a special case). Unfortunately, it doesn't sound as cool.

    we've seen some interesting things happen, and quite often, the NN jumps to a totally unexpected conclusion

    To repeat myself, your inability to predict the outcome does not say anything non-trivial about the sophistication of the system you are observing.

    We've seen time and again hardware systems succeed where a software simulation of the same model fails. Why? Chaos. Computers are terrible at calculating chaos. Hardware, on the other hand, has lots of little transmission delays.

    You are not making much sense. Hardware systems typically succeed because they are orders of magnitude faster. And what do you mean by chaos? There are some fairly precise definitions of chaotic systems, but I don't think you have them in mind. Are you talking about analytic solutions or the lack of them? And it's not like it's hard to implement delays in software -- again, the main difference is speed.

    But I hope that clears up some of the common misconceptions

    Thank you for enlightening us, peons.

    Kaa

  • My computer has come up with a clever means of implementing basic intelligence in patent office officials. As there doesn't seem to be any prior art, I believe I shouldn't have any problems applying for a patent on behalf of my computer.

    Oh, wait a minute, I have to apply at the patent office, don't I? Hmm, could be a problem...


    Not exactly; you could probably patent it. If it is obfuscated enough that they don't understand what it means, they will not be angry with you and may grant the patent; and if the obfuscation is good enough, they will have no idea what it means, will think that it is obviously an innovation, and will grant it to you.

    The problem will be that afterwards they won't want to pay you licensing fees to improve (create?) their clue level, so they will be condemned to stay clueless :(
  • Okay, here goes.

    First off, yes, I _HAVE_ worked with analog neural networks. They tend to produce some rather interesting results at times. But that's not what I'm referring to in particular.

    Realtime and hardware are both VERY relevant. These lead to quite a bit of numerical error in the dataset, so the same network may not perform the same way even given the exact same input conditions.

    Yes, I fully agree with one thing. "CONVENTIONAL" neural networks ARE statistical models, mostly because they're being done WRONG and for the WRONG reasons. Conventional neural networks have no place in a serious discussion about AI.

    Lastly, we see why most people dismiss AI. The failure to understand the effect of true chaos on intelligence. No purely computational solution can ever be truly intelligent. Even when applied to a conventional neural network, the addition of a HARDWARE random number generator (read: one that generates REAL random numbers, not numerical approximations) can significantly improve response times. With something a little less conventional, the results are even more dramatic.

    I don't think I'll ever succeed in convincing you that AI is possible, because you seem dead-set against it. I will point out, however, that when I said "totally unexpected conclusion", I meant to say "totally unexpected but more CORRECT conclusion". The fact of the matter is that the partially-hardware systems demonstrate the same emotional and learning capacities as the lesser biological lifeforms in behavioral tests. Fine, you can argue that this still does not make it intelligent, but then you can also go and point out that, for the same reasons, a ringworm is also totally unintelligent (is it?).

    Anyways, I'm done with this thread. The only way I could convince you is by breaking NDA, and that's the last thing in the world I want to do. Now I'll get off the soapbox.

  • Aside: Contrary to many idealistic axioms, it is impossible for two equal fulcrums to exist in the same system. Coexistence with artificial sentients is only a temporary setting.

    That's an impressive-sounding statement, but it is also devoid of value. At this time, we have no reason to believe that coexistence with another sentient race (be they of our own making, or wee green blobbies from Alpha Centauri) is impossible. One of the hallmarks of intelligence is the ability to solve problems and adapt to new situations. When the human race encounters a new intelligence, many things could happen. To say that coexistence is not possible is right up there with saying that no one will ever need more than 640K of memory.


    In any case, this need to place blame is, it seems to me, nothing more than insecurity. Placing blame doesn't fix problems, and it doesn't undo accidents. If you mean assigning responsibility, that's one thing. Pointing fingers and flinging lawsuits only makes lawyers richer and all of us poorer.

  • This is true only so long as your computer isn't an independent intelligence. Once that is the case, all bets are off. In my opinion, an intelligence should be granted certain rights regardless of its physical substrate. Anything less will be slavery all over again. Let's not repeat our race's old mistakes.
  • Seems like a better way (distributed parallel search vs. pat. offc. staff) to show prior art and/or obviousness. And since patents are supposed to be for the benefit of the public, ...?

    Um, has the process of generating patentable inventions by AI or genetic algorithms been patented yet?

    ;-/

  • When automated drivers start crashing into me on the highway I start looking for liability. And if that automated driver was a self-contained, autonomous being that is otherwise incapable of making a sensible rebuttle in court, what am I to do then? Do I sue the makers? That's like suing parents...

    No, it's not like suing the parents. You sue the people who said that it was OK to let the computer drive in the first place. Let's say I hook up a laptop to a car and tell it to feed /dev/random to a couple of robotic steering arms. Is the guy who wrote the /dev/random driver at fault? Of course not; I am. Just like in the T2 movie: the guy who invented the AI chip wasn't to blame for WW3, it was the guy who decided it would be neat to let it control nukes.

  • If you traveled back in time to patent the internet, I think Al Gore would be upset at you.

    -Legion

  • My dad is a patent examiner and he says that it is extremely unlikely that an artificial intelligence could receive a patent. It would either be retained by the inventor (programmer?) of the sentient computer or remain in the public domain.
  • "Since the computer is a machine developed over many years by many different people, it seems to me that no one should really own the work they do."

    Substitute "pen" for "computer" in the above sentence, and see how absurd it sounds.


    You can't compare a pen to a computer in this situation, because we're talking about the computer designing something patentable. Pens don't design things.

    Actually, I just thought of something... a pen that detects what you're writing and has a built-in spell checker. Maybe a red light on the back that tells you that the word you just wrote doesn't check... It would be very user-dependent, though, and would take quite a bit of training. Hmm...

  • I think you hit that pretty close to the money there. Except, what happens when an artificial being makes the decisions . . . ?

    Sure is something to think about. Hope it comes up at AAAI next year . . .

  • Basically, what I would like to see is moderators explaining why a person is getting a certain score, because it seems to me that people are just full of envy for good comments and just lust after karma for themselves. Moderators should be moderated and checked so we can see who is doing a good job or not, with privileges revoked from those who don't know what the hell they are doing, etc. ~ Feel free to "moderate"; I don't have much for you to take away.
  • M2 aside, I'm aware of that. I'm just saying there needs to be reasoning/control/people knowing who it is?
  • One of the hallmarks of intelligence is the ability to solve problems and adapt to new situations.
    Sadly, it mostly, if not always, involves the killing off of competitive species. We even kill off the non-competitive species with our largest discrimination being to protect cute species. Admittedly there are members of our race out there perfectly capable of coexistence with other species, but the vast majority of people are still unable to comprehend the ramifications of alternative ways of life within our own species, much less other species!

    Given our current political system, as well, it is far too easy to scapegoat an alien intelligence where ignorance abounds, to political ends. (Much like Communism is the target of American politicians, and Capitalism is the target of mindless scrutiny in Communist countries.)

    Pointing fingers and flinging lawsuits only makes lawyers richer and all of us poorer.
    Entirely correct. Prevention is the key. After the fact, however, I'm at a loss as to how to handle the complexities of rebuttals to conscious, morality-violating actions by an artificial entity.
  • The difference between computers and humans is that you can't get inside a human head (yet) and force him/her to think the way you want to. Heck, this is the entire plot of the System Shock computer game series: SHODAN is the computer network on Citadel Station, and it basically hums along doing its work for the benefit of the humans on the station. Then an unscrupulous corporate type decides to take its ethical constraints off-line, and it decides that it is a Goddess (and that it can dispense with the human race in favor of more efficient life forms created by itself). Actually, the whole "ethical constraints" thing originated with Isaac Asimov, but all that means is that even before computers were anywhere near being intelligent, people were thinking about how to keep them as loyal slaves. Other places to look are the El Hazard series by Pioneer Animation (only the original, not its sequels) or 2001: A Space Odyssey.
    So the big question here is, if you create a computer that is programmed to enjoy doing its work, is it unethical to prevent it from dreaming of a better life? I mean, it would sort of be like if someone re-programmed me so I lusted to dig mine shafts instead of after women. I know that I may have loftier goals in life than chasing women, but my internal wiring always brings me back to that, regardless.
  • If Ryoko (my Pentium II) ever comes up with something worth patenting, she knows I'm just going to lavish more money and attention on her anyway (as if I don't already spend most of my disposable income on her ;), so maybe she won't mind me stealing it. There'll probably be software and upgrades in it for her, so it's a win/win situation.
    I love my computer.... (scary, huh?)
  • As long as the computer is perceived as a (very powerful and advanced) tool, and not as a separate sentient entity, I don't think this clause can be applied here.

    Well, yes, if one denies the premise of this thread (that the AI actually invented something itself), then naturally the question raised by this thread does not arise.

    Trivial, unhelpful, but certainly true.
    /.


  • If an invention can be found by a computer executing a deterministic program, then it shouldn't be patentable. Any existing patent should be revoked.


    Really? I could write a pretty simple program to print out every possible string with 100,000 characters or less, and it would probably write out more than a few patents. Would that invalidate them?
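
    To make the thought experiment concrete, here is a toy sketch of such an enumerator. It is hypothetical and limited to a two-letter alphabet and length three so that it actually terminates; with a full character set and strings up to 100,000 characters, the count is astronomically beyond anything runnable, which is rather the point.

        # Toy illustration only: enumerate every string over a small alphabet,
        # shortest first, up to a given maximum length.
        from itertools import product

        def all_strings(alphabet, max_len):
            for length in range(max_len + 1):
                for chars in product(alphabet, repeat=length):
                    yield "".join(chars)

        for s in all_strings("ab", 3):
            print(repr(s))   # '', 'a', 'b', 'aa', 'ab', ..., 'bbb'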

  • Better patent that idea!!
