Security By Obscurity — a New Theory

mikejuk writes "Kerckhoffs' Principle suggests that there is no security by obscurity — but perhaps there is. A recent paper by Dusko Pavlovic suggests that security is a game of incomplete information and the more you can do to keep your opponent in the dark, the better. In addition to considering the attacker's computing power limits, he also thinks it's worth considering limits on their logic or programming capabilities (PDF). He recommends obscurity plus a little reactive security in response to an attacker probing the system. In this case, instead of having to protect against every possible attack vector, you can just defend against the attack that has been or is about to be launched."
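A minimal sketch of the reactive approach the summary describes, in Python. The threshold, source address, and vector names are hypothetical; this illustrates only the principle of defending against the attack actually being launched rather than every possible one:

    # Sketch: reactive defense under incomplete information.
    # Instead of hardening every vector up front, harden a vector only once
    # probes against it are actually observed.

    from collections import Counter

    PROBE_THRESHOLD = 3          # probes tolerated before reacting (assumed policy)
    observed = Counter()         # probe counts per (source, vector)
    hardened = set()             # vectors we have reactively defended

    def on_probe(source_ip: str, vector: str) -> None:
        """Record a probe; harden the vector once it crosses the threshold."""
        observed[(source_ip, vector)] += 1
        if observed[(source_ip, vector)] >= PROBE_THRESHOLD and vector not in hardened:
            hardened.add(vector)
            print(f"reacting: hardening {vector!r} after probes from {source_ip}")

    # Three probes against one vector trigger one targeted response.
    for _ in range(3):
        on_probe("203.0.113.7", "ssh-password-guessing")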
  • by tech4 ( 2467692 ) on Saturday October 01, 2011 @06:16PM (#37579776)
    I hate it when people take the phrase out of context and apply it to any kind of security, like network security or the old Windows/Linux battle. Those are completely different situations, and in network security especially, obscurity genuinely works as a hardening layer. It's also part of why Linux has managed to stay (consumer) malware-free to this day, even though it still has a fair share of its own worms and other security problems.
    • by davester666 ( 731373 ) on Saturday October 01, 2011 @06:38PM (#37579914) Journal

      This part of the summary is just great: "... is about to be launched"

      Yes, having somebody sitting there as the attack is taking place and somehow guessing how the attacker will try to compromise your system makes it much easier to defend against the attack. Of course, just correctly guess sooner, and then you can fix the system beforehand and then you don't need someone sitting there....

      • by elucido ( 870205 )

        This part of the summary is just great: "... is about to be launched"

        Yes, having somebody sitting there as the attack is taking place and somehow guessing how the attacker will try to compromise your system makes it much easier to defend against the attack. Of course, just correctly guess sooner, and then you can fix the system beforehand and then you don't need someone sitting there....

        It also assumes we can determine the capability or the resources the enemy is willing to employ. It's a lot safer to assume you don't know than to try and assume you know.

        • by ghjm ( 8918 )

          Not necessarily, if the money you spent trying to defend against all possible attacks means that you can no longer have seat belts.

      • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Saturday October 01, 2011 @08:16PM (#37580458) Homepage Journal

        Of course, just correctly guess sooner, and then you can fix the system beforehand

        One method to make such a guess is called a "code audit", and code auditing practices applied since mid-1996 [openbsd.org] are part of why OpenBSD has had only two remote vulnerabilities for over a decade.

        • Come on, you are way off topic here. You deserve the troll remark. It's about obscurity as a risk mitigation factor, not as an unbreakable defense. That has nothing to do with what OS is better at staying secure. All "major" operating systems get code reviews. Once they get more popular, they get more people reviewing code and probing for vulnerabilities. I'm fairly certain Windows and OSX get more code reviews and probing than FreeBSD does. If you want to spend time finding a vulnerability in an OS for pro
          • Come on, you are way off topic here. You deserve the troll remark. We're talking about OpenBSD, not FreeBSD. You didn't read the comment you replied to and you don't know what you're talking about anyway. OpenBSD is rarely used, but when it is used, it is used because it is protecting something, and that means that the value of attacking it is very high; virtually every OpenBSD system not on some nerd's desk is guarding something important to someone.

        • by TheLink ( 130905 )
          And MSDOS has had zero remote vulnerabilities in the default install for longer (you can add TCP/IP support to MSDOS, but it's not there by default).

          Seriously, the main reason why OpenBSD had few remote vulnerabilities in the default install was because it only had one service running in the default install, i.e. openssh. ( http://en.wikipedia.org/wiki/OpenBSD#Security_and_code_auditing )

          If some idiot installed phpnuke/phpbb, apache with an outdated version of the app, php etc, they'd be just as pwned whe
    • by cgenman ( 325138 ) on Saturday October 01, 2011 @10:09PM (#37580932) Homepage

      The problem is that Security by Obscurity is the defense of lazy vendors who should damn well know better. On the one hand, it's "obscure" that a particular keyphrase known by trusted people will get you to a layer of network security. It is slightly less "obscure" to have your server up on an unresponsive IP address. It's technically a form of "obscurity" to hope the hackers won't notice the FTP server you left up and running without realizing it, or that the default login is still viable. But when vendors use that form of the term obscurity, they're just masking the fact that they are selling you rubbish.

      Any properly secured system should be able to proudly proclaim all of its pertinent information to the world, including source code to all available participants, and still be secure. ONLY THEN, should obscurity be layered on. But if your vendor or contractor starts talking about obscurity first, they don't have a clue what they're doing.

      Obscurity is icing. Minimalist, properly protected system design with multiple layers of protection, iron-clad internal logging, and no routes to privilege escalation (especially social) is the route to security. Obscurity is a mildly nice icing that makes maintaining servers less problematic. In lazy vendors' hands, though, it usually produces only the illusion of security, followed by a massive privacy lawsuit.

  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Saturday October 01, 2011 @06:17PM (#37579788)

    Obscurity only makes your security "brittle". Once broken, it is completely broken. Like hiding your house key under a flower pot.

    Which means that the real security is the lock on the door. All you've done is allow another avenue of attacking it.

    • by jhoegl ( 638955 ) on Saturday October 01, 2011 @06:22PM (#37579824)
      There is another way to look at this.

      Imagine you have gold behind a locked door. Now imagine you have 50 locked doors.

      This is your security through obscurity.
      • by Cryacin ( 657549 ) on Saturday October 01, 2011 @06:24PM (#37579842)
        Well, if you had them behind 2^128 you'd have a trust certificate :P
        • by jhoegl ( 638955 )
          Hahah, good point.

          Although these days CA authorities are becoming the weak link.
          They will have to rethink centralized security, big time.
        • by jbengt ( 874751 )
          I, for one, don't trust certificate "authorities"
      • There is another way to look at this. Imagine you have gold behind a locked door. Now imagine you have 50 locked doors. This is your security through obscurity.

        You hid the gold under the floorboards. Consider your security broken.

      • Does the attacker have to get through all 50 doors to get the gold, each locked with a different key? That is good security (unless they share a key, and so forth).
        Or does the attacker have to get through ONE door that is NOT locked, where the security depends upon the attacker not finding the right door?
        Or does the attacker just have to check the doors for recent fingerprints to guess which door to attack?

        • by jhoegl ( 638955 )
          Well, there are many methods. One would be honeypotting; another, in line with the "Security through Obscurity" thinking, is that the attacker has to choose which door to attack. The point being, the hacker doesn't know which one, because of security through obscurity. What you can do is honeypot all the other doors and know about the attempt, or set up an alert and know about the attempt.

          Frankly, if it is that important to be connected to the internet, but requires high security, the cost is justified.

          You can even set
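          A decoy "door" of the kind described above can be tiny. A sketch in Python (the port and log format are arbitrary choices): a listener that grants nothing and exists only to report that somebody knocked:

              # Minimal TCP honeypot sketch: any connection to this decoy port
              # is by definition a probe, so log it and hang up.

              import socket
              from datetime import datetime, timezone

              def run_honeypot(port: int = 2222) -> None:
                  srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                  srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                  srv.bind(("0.0.0.0", port))
                  srv.listen(5)
                  while True:
                      conn, (ip, src_port) = srv.accept()
                      print(f"{datetime.now(timezone.utc).isoformat()} probe from {ip}:{src_port}")
                      conn.close()

              # run_honeypot()  # blocks; run in a dedicated process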
          • One would be honeypotting; another, in line with the "Security through Obscurity" thinking, is that the attacker has to choose which door to attack.

            Just as in my house key example. The attacker has to know WHICH flower pot has the house key.

            The problem is that once that piece of information is uncovered, the entire security implementation is broken.

            The point being, the hacker doesn't know which one, because of security through obscurity.

            Yes, I understand the concept. I just don't agree with it. Again with the house key exa

            • by jhoegl ( 638955 )
              I am not suggesting leaving it open and just not telling anyone. That would be crazy.

              What you want to do is keep it secure as possible, but give the potential intruder something else to work on that yields no results, but increases their risk of exposure.
              Security through obscurity does not automatically assume that it is a door left wide open, just no one knows about it.

              Consider things that are currently unknown to the public, such as Air Force one. Only a few people know about its defenses and potenti
              • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Saturday October 01, 2011 @07:19PM (#37580156)

                I am not suggesting leaving it open and just not telling anyone. That would be crazy.

                No, that would be "security through obscurity".

                What you want to do is keep it secure as possible, but give the potential intruder something else to work on that yields no results, but increases their risk of exposure.

                But that does nothing to improve the security of the system. If the attacker chooses the correct door (or whatever) then you're left with only the defenses of that door.

                Security through obscurity does not automatically assume that it is a door left wide open, just no one knows about it.

                No. The "security THROUGH obscurity" means that the door IS unlocked (or unlockable with the hidden key) and that the "security" comes from no one KNOWING that it is a way in. That's what the "through" part of that statement means.

                Do you understand the thinking now?

                I've always understood it. And you're making a very common mistake. Obscurity != Secret in "security through obscurity".

                • by artor3 ( 1344997 )
                  Man, you beat the ever-loving shit out of that strawman!

                  Nobody talks about security exclusively through obscurity. Secrecy is just an added layer.

                  The added security of many eyes reviewing your code makes up for the loss of security from having the code visible. *That* is why Linux is more secure than Windows. But security through obscurity is not useless.
            • by vux984 ( 928602 )

              Just as in my house key example. The attacker has to know WHICH flower pot has the house key.

              The problem is that once that piece of information is uncovered, the entire security implementation is broken.

              There are other ways to have obscurity.

              What if you put the lock for the door underneath one of the many flower pots, and perhaps even have a completely non-functional keyhole on the door itself.

              That is also "security through obscurity".

              Moving the lock to an unusual place certainly doesn't make the system any

              • Not exactly. (Score:4, Interesting)

                by khasim ( 1285 ) <brandioch.conner@gmail.com> on Saturday October 01, 2011 @09:03PM (#37580682)

                There are other ways to have obscurity.

                What if you put the lock for the door underneath one of the many flower pots, and perhaps even have a completely non-functional keyhole on the door itself.

                That isn't "obscurity" in the context of "security THROUGH obscurity". The word "through" is important there.

                You can have a functional security system and add misdirection to that without reducing the overall security of the system. But the system, in the end, still depends upon the original security model. Once the correct key hole is known, the lock still must be cracked.

                You can add obscurity without making the security dependent upon the obscurity.

      • by wisty ( 1335733 )

        There is another way to look at this.

        Imagine you have gold behind a locked door. Now imagine you have 50 locked doors.

        This is your security through obscurity.

        That is *not* security through obscurity. There are 50 locked doors - that's about 6 more bits of password strength, but it's not obscure that you need to go through one of the doors.

        Hiding your key under the flower pot is a better example of obscurity. As is hiding your money in the freezer, or in your sock drawer. Ask someone who has worked in a prison, or served time - most people tend to come up with the same banally unoriginal ways to hide stuff, and the bad guys are pretty good at figuring those metho

    • by bondsbw ( 888959 )

      Put up more doors with more locks... that'll fix it! (Just don't tell them about the hidden door into the basement...)

    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
      • by jhoegl ( 638955 )
        Exactly.

        In fact, viruses are developed based on obscurity. I mean, it is in our everyday lives. To believe that obscurity is somehow the Achilles heel is just crazy thinking.
      • You have it wrong. (Score:4, Informative)

        by khasim ( 1285 ) <brandioch.conner@gmail.com> on Saturday October 01, 2011 @06:52PM (#37580006)

        And once you guess their encryption password, their encryption isn't completely broken?

        You're confusing the "obscurity" portion of that statement.

        Passwords should rely upon the difficulty in cracking them due to their complexity. The system is known. The password is not known.

        Security through obscurity refers to the workings of the system being hidden. Such as the key under the flower pot opening the door. Once that information is discovered, the system is cracked.
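        That division of labor is exactly how standard primitives are used. A minimal sketch in Python's standard library: the algorithm (HMAC-SHA256) is completely public, and all of the security rests on one replaceable secret key:

            # Kerckhoffs' principle in miniature: everyone may know we use
            # HMAC-SHA256; only the 256-bit key is secret, and it can be
            # swapped out if it ever leaks.

            import hashlib
            import hmac
            import secrets

            key = secrets.token_bytes(32)          # the one secret in the system
            msg = b"open the door"
            tag = hmac.new(key, msg, hashlib.sha256).digest()

            # Knowing the algorithm, the message, and the tag still does not
            # let an attacker forge a tag for another message without the key.
            assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())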

        • by jhoegl ( 638955 )
          So once someone gets your password, the access is granted?

          So how is this different?
          • by khasim ( 1285 )

            In the end, it all comes down to time.

            If it takes you 20,000 years to crack my password with a password cracker, then the system is secure for 20,000 years. After which it is cracked (until I change my password again).

            If the password is hidden on a post-it under my keyboard, then there is an easier, alternative avenue of attack. And the system is cracked in a minute.

            So, having the "security through obscurity" resulted in a less secure system that was cracked a lot quicker than the original system.

            That is wh
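            The time scales above are easy to make concrete. A back-of-envelope calculation in Python, with illustrative assumptions (a 12-character password over 62 symbols, an attacker testing ten billion guesses per second):

                symbols, length = 62, 12
                guesses_per_second = 10_000_000_000

                keyspace = symbols ** length                  # ~3.2e21 candidates
                seconds = keyspace / 2 / guesses_per_second   # expect to search half
                years = seconds / (365.25 * 24 * 3600)
                print(f"expected cracking time: {years:,.0f} years")   # ~5,100 years

                # A post-it under the keyboard reduces that to however long it
                # takes to lift the keyboard.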

            • by hazem ( 472289 )

              > That is why you do not use "security through obscurity".
              Well, if you define "security through obscurity" to such an absurd point, then of course there's no value to obscurity.

              Obscurity is an important part of any security system, but only an idiot would rely on it as the only source of security, and only someone being obtuse would assume that's what others mean.

              Soldiers use "security through obscurity" by wearing camouflage. It's by no means their only means of security. It helps

              • Well, if you define "security through obscurity" to such an absurd point, then of course there's no value to obscurity.

                You may view it as "absurd", but its having no value is the whole point.

                In these SPECIFIC instances, obscurity only REDUCES the security of a system.

                Soldiers use "security through obscurity" by wearing camouflage.

                The problem is that we're discussing computer security. Physical security is a different matter and has very limited usefulness as an analogy.

                Of course if you want to narrowly define

          • The idea of any security system is to reduce the number of fatal secrets. The minimum number is one. (Otherwise you have an open-access system.)

            Your password, or key, should be that one. It shouldn't matter if the attacker gets everything else, they still can't get your data.

            'Security Through Obscurity' is saying 'we've removed this fatal secret by hiding it from the attackers'. Um, no. All you've done is made it slightly harder for them to find. It's still a fatal secret. If you want to remove it fr

        • Security through obscurity refers to the workings of the system being hidden. Such as the key under the flower pot opening the door. Once that information is discovered, the system is cracked.

          Security through obscurity doesn't mean that you hide the flaws instead of patching them (it can mean that, but it's a narrow definition). Even when you patch the holes, it's still worth it to make it as hard as possible for the attacker to figure out what the state of your system is - let him waste time looking for the flaws that simply aren't there. That's security through obscurity, too.

          It's just another layer. Ignoring it makes sense only when you're absolutely confident that your other layers will hold.

      • by burris ( 122191 )

        No, because you can change the key, which is much easier than changing the cryptosystem. With a good source of entropy, I can generate large numbers of good keys all day long. Good cryptosystems are much harder to come by, so the cryptosystem is designed to make changing keys easy. Cryptosystems are also designed to minimize the impact of a single key being discovered. Forward secrecy, for instance, where stealing a key might not get you anything at all.
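        Forward secrecy is straightforward to sketch with a toy ephemeral Diffie-Hellman exchange in Python (the prime below is a tiny demo parameter, nowhere near a secure group size): every session derives a fresh key that is then discarded, so a key stolen later is worthless against recorded traffic:

            import secrets

            p = 0xFFFFFFFFFFFFFFC5   # toy 64-bit prime; real DH uses 2048+ bit groups
            g = 2

            def session_key() -> int:
                a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral secret
                b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral secret
                A, B = pow(g, a, p), pow(g, b, p)  # public values, sent in the clear
                k_alice, k_bob = pow(B, a, p), pow(A, b, p)
                assert k_alice == k_bob            # both sides derive the same key
                return k_alice                     # used once, then thrown away

            print(hex(session_key()))
            print(hex(session_key()))              # a different key every session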

      • No, the encryption ISN'T completely broken. If I have an encryption system that uses passwords for security, and you guess my password, the security is broken for this instance of the system...but I can just pick another password and security is restored. "Security through obscurity" doesn't mean security based on ANY secret, it means security through secrecy in some fundamental element of the system, especially when such a secret makes the system brittle. If you steal my key, I can simply rekey a lock a
      • by jbengt ( 874751 )

        Your analogy is flawed, fundamentally you are assuming someone leaves a key lying around in an easily accessible area. No security we have isn't fundamentally based on obscurity. None.

        Secrecy is not identical to obscurity. The meaning of obscurity in "Security Through Obscurity" refers to the overall scheme and methods. Keeping keys and the like secret is assumed, and it does not make the system "based on obscurity" in the sense meant when discussing security through obscurity.

        • by thsths ( 31372 ) on Sunday October 02, 2011 @08:01AM (#37582694)

          > Which bank would you prefer?

          And that is the key point. Real security can be audited without compromising it. Obscurity cannot be audited - you have to take their word that it is "obscure" enough. And what is obscure or inconceivable to some person may be perfectly obvious to another (such as a blackhat with actual security skills...).

      • by burris ( 122191 )

        Here is a real world example where getting a key gets you nothing. Let's say you're targeting someone specific to get their secret cookie recipe or their confession and you've installed a wire tap on their net connection and you've been recording all of the traffic. The target has been chatting with their friends over some encrypted chat thing and you're sure they've been discussing the recipe/crime. So one day your goons stop the mark, steal their laptop which contains their private keys, and beat them w

      • No security we have isn't fundamentally based on obscurity. None.

        Yes, we have no bananas. You didn't mean to use a double negative, did you?

    • by thegarbz ( 1787294 ) on Saturday October 01, 2011 @06:41PM (#37579940)

      Which means that the real security is the lock on the door.

      But that is also just obscurity in another form. The obscure part is that the attacker doesn't know the combination to the lock, or doesn't know how the tumblers specifically are keyed. Otherwise a duplicate key could be made.

      All security is obscurity, just different levels of it. In some schemes the obscure value is shared (a hidden directory on the server that isn't crawled but can nonetheless be accessed by a direct link). Some obscure values aren't (public key encryption).

      Hiding the key under the rock is analogous to using a weak form of obscurity to hide a strong one. Which in this case is no better than the obscurity of not letting anyone know that the door lock doesn't actually work anyway.

      • But that is also just obscurity in another form.

        Nope. Similar to the use of "theory" in science, the common usage of the word is not the same as its usage in this context.

        The system is designed so that it can only be opened by the correct secret (the key in this case). That does not mean that the key is "obscure" even though it is the "secret".

        Obscurity refers to the system. The key is still the secret. The obscurity is the fact that you're hiding (obscuring) the secret under a flower pot.

        To p

        • by Prune ( 557140 )
          You're shamelessly playing with word semantics here. We use von Neumann machines, and the distinction between data and program is arbitrary and only in the mind. A key and a system are only separate from a subjective view. There is no reason other than practicality why we have many keys but few crypto systems. One could trivially create a set of systems which can have an exponential number of variations on the underlying algorithm, with automatic generation of these variations. Then the specific set member is
          • You're shamelessly playing with word semantics here.

            No. It's the usage of the terms in the context.

            The same as people complain about evolution being "just a theory". The words have multiple definitions, and applying the wrong one in this context is an error.

            One could trivially create a set of systems which can have an exponential number of variations on the underlying algorithm, with automatic generation of these variations. Then the specific set member is your secret. There is no distinction between secret

      • by Dr. Tom ( 23206 )
        The key is a secret. If it gets loose you have no security. However, the security protocol (if it is a good one) will allow rekeying; keys are one-time only, and if a key is revealed you can immediately switch to a new key the attacker doesn't know (keys are just random numbers).
    But isn't the pattern of the very lock you describe a "secret", or obscure, inasmuch as the lack of knowledge about how to duplicate that key is what keeps intruders out?

      Most forms of security rely on some form of obscurity to decide which group of people is allowed access and which group of people is not. A password or a private key, if known to everybody would allow everybody into the system. Only those who hold that extra piece of information are able to access the system through the means by which

    • I agree. Security in CompSci has to be a bit more than putting up safeguards (fw, av, encryption) or going from single DES to triple DES just to make brute force attacks more difficult. Surely the only solution is to develop a language or maths for it. This way we can reason about security problems and be able to say for sure: this is provably secure, just like 1+1=2. After the logic comes the implementation details.
      • by gatkinso ( 15975 )

        Many crypto schemes are provably secure.

        However the implementation itself could be flawed, providing a side channel that can be exploited.
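        A classic instance of such a side channel, sketched in Python: the scheme (MAC verification) is sound, but a naive implementation leaks through timing, because == returns at the first differing byte:

            import hashlib
            import hmac
            import secrets

            key = secrets.token_bytes(32)

            def verify_leaky(msg: bytes, tag: bytes) -> bool:
                # '==' short-circuits on the first mismatching byte, so response
                # timing reveals how many leading bytes of the tag are correct.
                return hmac.new(key, msg, hashlib.sha256).digest() == tag

            def verify_safe(msg: bytes, tag: bytes) -> bool:
                expected = hmac.new(key, msg, hashlib.sha256).digest()
                return hmac.compare_digest(expected, tag)   # constant-time compare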

    • by gatkinso ( 15975 )

      Yes, well, what if they can't find the lock?

  • Sure (Score:4, Insightful)

    by EdIII ( 1114411 ) on Saturday October 01, 2011 @06:19PM (#37579798)

    That's fine and all. If you want to create your security through incomplete information, or different tactics and strategy, that is a choice.

    Just don't be a childish whining little bitch and run to the FBI to stop the big bad anti-social "hackers" from revealing your used-to-be incomplete information in security conventions and trying to have them arrested.

    You get double whiny bitch points trying to invoke copyright to prevent the "leakage" of your incomplete information.

    I certainly get the point of the article, but a system that is secured through well thought out and tested means will always trump a system where, "Golly Gee Willickers Bat Man.... I hope they don't find the secret entrance to our bat cave that is totally unprotected and unmonitored".

    • What's a password - or even a private key - if not incomplete information?

      • Re:Sure (Score:5, Insightful)

        by EdIII ( 1114411 ) on Saturday October 01, 2011 @06:52PM (#37580008)

        I don't think that is what they mean by incomplete information.

        In the context of security through obscurity it has always, to me, seemed to mean that your method and process of providing security is not well understood and it is this fact that is providing the majority of the security. If somebody figures out the method or process, your security is greatly compromised.

        A password, or private key, is not a good example in this case. I think a better example would be that passwords and private keys protect documents created by a certain well known company, but that their methods and processes were so laughable that you could create a program to bypass the keys themselves.

        Or in other words........ the only thing keeping Wile E Coyote (Super Genius) from getting to Bugs Bunny through the locked door is his complete lack of awareness that there is nothing around the door but the desert itself. Take two steps to the right, two steps forward, turn to your left, and there is Bugs Bunny. You did not even have to get an ACME locksmith to come out.

        • you attempted to redefine his terms, and then you attempted to change the topic. in other words, you don't have an answer

          aka, incomprehensibility by affability

          because the real answer would be to concede that icebraining is correct: it's just a matter of perspective of what security is, and what obscurity is, and, on some philosophical level, they are indeed the same concept after all. not that this is a mighty thunderclap of a realization, and not that it completely changes security paradigms. but it is i

          • Re:Sure (Score:5, Informative)

            by EdIII ( 1114411 ) on Saturday October 01, 2011 @07:16PM (#37580152)

            Uhhhhhh..... okay

            I am not redefining terms here at all.

            Granted, this is from Wikipedia:

            Security through (or by) obscurity is a pejorative referring to a principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to provide security. A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known, and that attackers are unlikely to find them. A system may use security through obscurity as a defense in depth measure; while all known security vulnerabilities would be mitigated through other measures, public disclosure of products and versions in use makes them early targets for newly discovered vulnerabilities in those products and versions. An attacker's first step is usually information gathering; this step is delayed by security through obscurity. The technique stands in contrast with security by design and open security, although many real-world projects include elements of all strategies.

            icebraining is not correct here, and your assertion that I am changing the definition from the widely accepted norm is false. Security through obscurity, as a concept, is not something vague and a matter of perspective. It is a very well defined term in security and has been for quite some time.

            According to the definition above, a password is not incomplete information, or information being obscured, as it is being presented in the context of the article and the principle of security through obscurity.

            Making this a philosophical debate that a password is also obscurity at some level has nothing to do with the principles that are mentioned.

        • mean that your method and process of providing security is not well understood and it is this fact that is providing the majority of the security. If somebody figures out the method or process, your security is greatly compromised.

          Not necessarily. It may also mean using a public and well-understood method - but not telling which method you're using, so the attacker has to figure it out on his own.

      • Re:Sure (Score:4, Insightful)

        by moderatorrater ( 1095745 ) on Saturday October 01, 2011 @08:36PM (#37580554)
        It's an identifier. Security through obscurity is where methods, processes and algorithms are hidden in an attempt to create security. It's the difference between having a vault door with a lock and having a hidden door with no lock.

        Passwords and private keys are very specific pieces of information that use algorithms to make them mathematically (almost) impossible to figure out. Obscure processes and methods and algorithms, on the other hand, are negligibly easy to find out when it comes to computers. Computers are too powerful to hide something from them (with a few exceptions mentioned above). Relying on obscurity is a fool's game in those circumstances.
  • Nature disagrees (Score:3, Interesting)

    by Anonymous Coward on Saturday October 01, 2011 @06:21PM (#37579816)

    Camouflage is the oldest and most natural form of security on the planet.

    • by RenHoek ( 101570 )

      Carrying a bigger stick than your opponent is the oldest and most natural form of security.

      • Camouflage is the oldest and most natural form of security on the planet.

        Carrying a bigger stick than your opponent is the oldest and most natural form of security.

        Actually it's camouflage *plus* the bigger stick. The camouflage gives one the potential advantage of deciding if and when the bigger stick comes into play.

  • Kerckhoffs' Principle specifically applies to cryptosystems. Not only does TFA describe more of a generalized application to systems and code, but it's not really describing 'security through obscurity.' It's describing informational arbitrage, i.e., profiting (not necessarily financially) from an imbalance of knowledge on one side of a two-participant game.

    The dynamic adaptive approach has its merits, particularly as it is increasingly clear that most security is only the illusion of security, maintained until it is breached. But traditional 'security through obscurity' refers to systems for which the only security measure in place is maintaining the secrecy of a protocol, algorithm, etc.

    It seems to me the ideal approach is a balanced one, that embraces the UNIX philosophy: cover the 90% of most common attack vectors with proven security measures (and update practices as needed), and take a dynamic adaptive approach to the edge cases, because those are the ones most likely to breach if you've done the first 90% correctly.

  • Call it luck, or educated guess, call it fate for all I care. One miss, and you're screwed.

  • I thought it was obvious.
  • by Dr. Tom ( 23206 ) <tomh@nih.gov> on Saturday October 01, 2011 @06:30PM (#37579878) Homepage
    Security by Obscurity is lame. The REAL test of a good security protocol is when you publish ALL the details and the bad guys STILL can't get in. If you are merely relying on somebody, somewhere, not saying anything, you are asking for it. All the real security products that people actually trust are open source. I will never, ever, ever, ever, trust anything that is closed source. There could be a back door, and you can't argue with that. Again, and again, and again, the ONLY security algorithms worth talking about are OPEN. If you can publish your work in public and STILL be secure, THAT is security. That is quite possible, it has been done many times. If you can't do that, you are just making excuses for your lame security that relies on a secret. Look at history. Your secret will be published, and then your product will be dead.
    • Re: (Score:2, Insightful)

      Comment removed based on user account deletion
      • by jamesh ( 87723 )

        Someone else can get in -- all they need is a little bit of information you've left out (like a key). Obscurity. Right there. Self defeating posts are self defeating.

        If you have the key then all bets are off. But if the inner workings of the lock are completely known to the opponent and they still can't get in without the key then you can say your system is secure. If there is a flaw in your lock such that it is possible to get in without requiring the key then you have to obscure the inner workings of the lock, and you can't say your system is secure because it's always possible that someone could reverse engineer it and find the flaw, allowing them access to _all_ suc

    • "Open Source" doesn't buy you much. Sure, you can see what the program is "supposed" to do. But do you fully understand what the compiler does with it? Do you trust the compiler to be both bug free and non-malicious? I've filed far too many bugs against compilers to trust them to be bug free. Even if you assume they are, what about the compiler that was used to build your compiler? How do you know that the hardware on which the program is running doesn't leave it open to attack?

      If you want "actual tru

  • Missing the point? (Score:4, Interesting)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Saturday October 01, 2011 @06:35PM (#37579898) Homepage

    Well maybe I'm wrong, but I always thought the complaints of "security by obscurity" were not that obscurity couldn't be helpful to security, but that it was a bad idea to rely on obscurity.

    It seems obvious to me that the more complete the attacker's knowledge, the greater the chance of a successful attack. If an attacker knows which ports are opened, which services are running, which versions of which software are running which services, and whether critical security patches have been applied, for example, it's much easier for them to find an attack vector if there is one. You're more secure if attackers don't know that information about your systems, because it forces them to discover it. That takes additional time and effort, and they may not be able to discover that information at all.

    However (and here's the point), it's not a good idea to leave your systems wide open and insecure and hope that attackers don't discover the holes in your security. It's not smart to rely on the attacker's ignorance as the chief (or only) form of protection, because a lot of times that information can be discovered. It's true that "obscurity" is a form of security, but it's a fairly weak form that doesn't hold up over time. The truth tends to out.
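    Much of that information is volunteered by the services themselves. A sketch in Python (the hostname is a placeholder): grabbing an SSH banner, which typically names the exact software and version an attacker would look up in a vulnerability database:

        import socket

        def grab_banner(host: str, port: int = 22, timeout: float = 3.0) -> str:
            # SSH servers send a version banner, e.g. "SSH-2.0-OpenSSH_8.9",
            # as their first line, before any authentication takes place.
            with socket.create_connection((host, port), timeout=timeout) as s:
                return s.recv(256).decode(errors="replace").strip()

        # print(grab_banner("ssh.example.org"))   # placeholder host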

    • by jamesh ( 87723 )

      You're more secure if attackers don't know that information about your systems, because it forces them to discover it. That takes additional time and effort, and they may not be able to discover that information at all.

      But on the other hand if you find a back door to a security system, you now have access to all such security systems. Not publishing the intricate details about the security system doesn't add nearly as much security as people think.

      Put it another way, if your security system is completely open and documented and nobody has ever discovered a backdoor that would allow them access without a key, then you can say it is secure with a great degree of confidence.

      • by mrxak ( 727974 )

        Unless there is strong incentive to not reveal knowledge of a backdoor if you find it, such as the desire to exploit it yourself. With open source, you're still trusting the people who really spent the time to look at and understand the code. How many of those people are there? How many of them do you trust absolutely?

  • Past performance IS a proper indication of how the future will be, if everything stays as expected. But reality is rarely fully what we expect it to be.

    Defending against known threats is certainly part of the task of securing something - but the other part is observing what makes up the thing you're defending, looking for weaknesses, and working out how to react when those weaknesses are exploited. Skipping those last steps is groupthink and complacency at their worst.

    One of the best ways to de

  • As an information security professional, I've always seen the whole "security by obscurity" issue as somewhat misleading. By repeating the mantra, I feel many people have forgotten its true meaning.

    Security shouldn't RELY on obscurity. That's true. But it doesn't mean obscurity, by itself, doesn't provide security benefits.

    There are many examples where this is obvious. For example, would you publish your network topology on your public website? Of course not. Even if you were convinced that its security and access co

  • can be unmade by another man

    it's that simple

    the rest is just an arms race to keep one slight step ahead in constant effort and constant motion

  • The entire concept of security by obscurity acts as a justification for keeping secrets. It often sweeps up information whose release will help users much more than it will help attackers. Once it becomes a sanctioned tool of security, instead of an objective of the security, those who set up and maintain the security lean on obscurity like a crutch.

    I realize my argument is an appeal to the slippery slope, but I see it everywhere in society. People, organizations, and governments can get into frames of mind

  • Security thru absurdity is just crazy enough to work.

  • In information security, secrecy does not equal obscurity.

    Obscurity is if I give out access cards for the doors of my building, but all the magic of the card is a single magnet, and just changing the magnetic field at the reader will unlock the door.

    Another example of obscurity: I give out access cards but encode them all to the same code and just tell people this one is only for these particular non restricted zones (this is more like DRM systems).

  • by lowy ( 91366 )

    People, many of your implementation examples aren't "either/or" situations. From a practical standpoint you are usually better off with a layer of each: security and obscurity. For example, a strong vault that is hidden is better than the same one exposed. A steganographically-encrypted file is safer than that same file in the public domain. How much safer is open for debate, but you are probably safer with both layers in most individual *implementation* situations.

    Where the debate comes alive is in two main ar

  • What about true obscurity? What kind of OS or software runs on the computers in a nuclear missile silo? Do those computers even use an OS? The point is, with little or nothing published, an attacker who was able to access systems like those would have little realistic hope of hacking them. There are no 0-day lists, no marketplace to pick up working cracks, no books describing the internals of such a system.

    • by Dr. Tom ( 23206 )
      You are young. You don't know. Eventually they'll figure out the secret, if it's valuable. Your security is flawless if nobody wants your data. You are a script kiddie. Pro hackers can figure out what OS is being used by the way it responds to packets. The point is that if you are relying on secrets like what OS version you are running, then you lose.
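      What is being described is stock TCP/IP stack fingerprinting. A heavily simplified passive sketch in Python; the values below are rough approximations of commonly cited stack defaults, not a real fingerprint database like the ones tools such as nmap or p0f ship with:

          def guess_os(ttl: int) -> str:
              # Different stacks ship different default initial TTLs; routers
              # decrement TTL in transit, so observed values sit a little
              # below the defaults.
              if ttl > 128:
                  return "likely a stack with initial TTL 255 (e.g. Solaris)"
              if ttl > 64:
                  return "likely Windows (initial TTL 128)"
              return "likely Linux/BSD (initial TTL 64)"

          print(guess_os(ttl=118))   # -> likely Windows (128 minus some hops)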
      • But the OS in projects like that was probably a one-off written JUST for the application. And the software probably won't RESPOND to most packets, nor support modern networking methods. It's one thing if a true hacker who knows everything has something to work with. But he doesn't know what the computer he is trying to hack into is running, and even if he did, he wouldn't be able to find any information about how it works, it being a one-off project with the books being top secret...

        I am not saying tha

        • by Dr. Tom ( 23206 )
          How do you know that code they are using is any good? What if some bad guy (or a Russian teenager with nothing better to do) rooted a server somewhere and got the code, and discovered that it's shit? I would seriously be MUCH happier if the missile silos published their code along with proofs that you can't get in. As of now, I assume anybody can get in
          • Yes but if you publish the code + proofs, and the mathematical analysis you used to formulate the proofs is flawed, and an attacker is able to see that but others aren't...Then you have just given him or her the means to break in.

            Same goes for encryption. You can't generally crack an encryption algorithm, even a flawed one, if you only have the encrypted data and plaintext but no idea at all what algorithm was used.

  • by Tom ( 822 ) on Sunday October 02, 2011 @02:00AM (#37581754) Homepage Journal

    Applying game theory is always an interesting approach.

    However, this one misses what I consider an extremely important part: the multiplayer aspect. If obscurity is a part of your defense strategy, you cannot cooperate with other defenders. As you are competing with the attacker, that means obscurity is only advantageous if the additional cost to the attacker is higher than the benefit you could gain from such cooperation. In general, your security mechanism will not be so new, innovative and hard to crack that this is true. It does depend on the size and resources of your organisation, though. If you're a large organisation that can keep a secret (say, a secret service), it could have a net advantage. For almost everyone else, though, having more eyes on the problem will generally provide a better solution than the additional difficulty that obscurity provides for the attacker.
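    That trade-off can be stated as a simple decision rule (with the caveat that both quantities are guesses in practice, which is rather the point):

        def obscurity_worthwhile(cost_added_to_attacker: float,
                                 cooperation_benefit_forgone: float) -> bool:
            # Obscurity pays only if it costs the attacker more than you lose
            # by being unable to cooperate with other defenders.
            return cost_added_to_attacker > cooperation_benefit_forgone

        print(obscurity_worthwhile(5.0, 50.0))   # False for most organisations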
