CERT And Vulnerability Disclosure

Carnage4Life writes "In a radical departure from its previous stance of security through obscurity, the Computer Emergency Response Team (CERT) has stated that it will fully disclose all vulnerabilities in software that come to its notice 45 days after the fact, whether or not companies have provided a fix. The change of policy can be found at the CERT site, and there is also a story on C|net. The change is not a complete embrace of full disclosure, because CERT will not release exploits as some other software security watchdogs do."
  • No, this is typical of someone who likes the fact that fixes for known security problems on my OS arrive within 24 hours, not weeks. I have to use MS, and help people use it, and I hate every aspect of it. I don't hate it because I like Unix; I hate it because it's shit, and anything that hurts MS is a step toward me not having to deal with it anymore.
    This is a MS world right now - I don't have to like it.
  • The users/customers need to know of vulnerabilities. They do NOT need the actual exploit tools. Publishing the vulnerability is one thing; providing a tool that actually exploits the vulnerability helps no one but script kiddies. THEY are the idiots who are likely to actually download the exploit and then immediately use it to do their little, pointless damage.

    What possible good does it do to give script-kiddies the tools they need to bust systems that, otherwise, they are too stupid to be able to figure out themselves?

  • The whole point of giving vendors some time is to remain a credible partner of the vendor. CERT for the longest time went way overboard with this, but cutting the vendor some slack hopefully keeps the communications channels open.

    Now that CERT is saying to the vendors, "look, 45 days is enough to do something, just feeping do it", I hope that corporate buyers will start demanding real action on structural issues. Take the LoveLetter "virus": if a significant number of buyers told Microsoft to stop hiding the information users need to make an informed decision about whether to open a given e-mail message, Microsoft might address the underlying issue and fix it once and for all: not just the e-mail case, but also shared file stores and web sites.

    Sigh. It has surprised me from the outset that there hasn't been a buyers' uprising. People overestimate Microsoft's Evil Empire nature and underestimate users' own failure to make their concerns known.

  • Exploits are needed because they are a way of describing the security hole in a manner that shows exactly what damage or theft of information can occur. They also often point to ways that a specific security hole can be avoided or closed up before the company in question releases a patch.

    The -problem- with exploits is that they are often too narrow. They may take advantage of a hole that is much bigger than the exploit itself indicates...leading people to think that if they stop the particular exploit, the hole itself is fixed.
  • by Aigeanta ( 64880 ) on Sunday October 08, 2000 @02:20PM (#722370) Homepage

    While all of you are discussing the ideological and legal aspects of this, I think I would like to address the practical side.

    Very few novice Redhat 6 users, myself included, actively monitor the security problems addressed at bugtraq or securityfocus, out of ignorance or lack of time. However, the Internet is crawling with 5kr1p+ k1dd135 who do, and they have preyed on our system. I do not appreciate the abstract, idealistic attitude of this community, the good-ol-boy mentality that if you aren't an expert administrator, you ought to be hacked by 14 year old malcontents. They used that goddamn wu_ftpd exploit on us and we had to reformat and waste another freaking day reinstalling and upgrading the only OS I've ever personally seen hacked.

    Now RedHat wasn't the only distro affected by this exploit; this is truly an open-source security problem. Consumers will not latch onto Linux if it's this hard to keep secure. There are several items your community needs to address:

    • DO NOT post exploits to the general public; insist that securityfocus, bugtraq, and others only allow legitimate developers to view them. Exploits are the equivalent of guns and ammo, and there is a great need for background checks!
    • We need to express leadership in the open-source community to make the distros have secure default configurations, and automatically alert users of security problems, and allow them to choose to install patches. This could be integrated with policies at security sites.
    • Realize that usability and security go hand in hand, and consumers, in the long run, are going to support the OS that gives them the fewest headaches.

    You might not care if all the "dumb" users go away, but you know what? Then your OS won't win, you will always be stuck in nerd obscurity, and MacOSX will be the #1 unix in the world.

  • CERT has had a history of releasing vague reports about a problem, fearing that embarrassing the vendor was a bad idea. IMHO, the more embarrassed, the better.

    I don't think exploits should be released right off, but upon discovering a security hole, you should immediately release a detailed enough description that an exploit could be written by someone competent in a few days.

    Releasing an exploit is something you do after you feel the bug has been out for long enough for someone to have coded one anyway. I would say, 3-6 months after the bug has been disclosed, but probably not at all if the vendor has a patch. This gives vendors enough time to code a patch, but doesn't let them ignore you.

    I don't think there is anything about actually releasing the exploit immediately that should be actionable under the law. It should just be considered rude and in poor taste.

    That's my personal philosophy, and IMHO, what CERT is doing doesn't go far enough. I don't consider releasing an exploit necessary to claiming full disclosure. I do consider immediately releasing all pertinent information to be required.

  • You have no need for a coded exploit - if you can't write it yourself, what chance do you have to understand it? And if you don't understand it, what possible LEGITIMATE use do you have for it?

    Testing whether or not your particular setup is vulnerable or not sounds like a pretty "LEGITIMATE" use of an exploit to me. I shouldn't have to understand every possible security problem in depth and write my own test exploit just to determine whether I'm vulnerable.
  • by arcade ( 16638 ) on Sunday October 08, 2000 @05:08PM (#722373) Homepage
    Full disclosure is the right way to go... WHEN handled sensibly. You have no need for a coded exploit - if you can't write it yourself, what chance do you have to understand it? And if you don't understand it, what possible LEGITIMATE use do you have for it?

    I as an admin have legitimate use for it. I'm able to run the exploit against my box, to check if I'm vulnerable. If there's a proper description of the vulnerability in addition, I'll be able to check if the flaw is there at all in my version of the software.

    Exploits are an easy way to check whether you're vulnerable and need a patch. It's a helluva lot easier than checking whether you've got the updated libs, whether the program is updated, and version-checking everything.

    Of course, it's not foolproof. You may be vulnerable even if the exploit doesn't work. But if you run Red Hat, and the exploit is for Red Hat, then .. :-)

    Furthermore, you say that full disclosure is the way to go. And right afterwards, you say that exploits shouldn't be released. Sorry mac, it's not full disclosure if you don't disclose everything. You seem to have misunderstood something.

    For example, while MS didn't improve LanMan until l0pht released l0phtcrack, neither was anybody cracking it!

    And how exactly do you know that? When l0pht released the information, security-minded people were able to patch their systems, because they forced a fix to be made. If they had not publicised the information, you wouldn't know about it. You wouldn't know that you were vulnerable, and if you had a smartass cracker around, he could run circles around you without you understanding what the fsck was going on.

    You seem like a troll, but are modded to 5.. I don't get it.

    The number of people actually capable of discovering new holes AND who are shady enough to exploit them is so tiny that the odds are high an average user will never be affected by them. Most of these people spend all their time coding up "exploits" for skript kiddies today anyway!

    And how can you be so certain about this? You really can't. What is unknown is unknown. You are doing nothing but theorizing right now.

    Btw, as far as I know, slashdot has been cracked once without anyone having any idea how it was cracked. Furthermore, rootshell.com was cracked about 1-2 years ago, and I don't think they've discovered how yet. So you say the superhackers don't exist, and even so we see these kinds of things.

    Keep in mind that your enemies are the skript kiddiez, NOT the corporations or end users.

    I seem to remember some corporations taking more than a year to patch some holes. I think they are my enemies, not the script kiddies. And I've been cracked by script kiddies. If the tools weren't widely published and available, I would never have known what hit me. (Maybe I wouldn't have been hit, but that I can't know.)


    --
  • Funny, I got an ILOVEYOU just a few days ago, but I guess they fixed it by now...

    L0phtcrack wasn't the first program to crack lanman packets. It was the first that combined all the known ideas into one package easy enough for a scriptkiddie to use.
  • Consumers will not latch onto Linux if it's this hard to keep secure.

    Consumers have latched onto a series of OSes in the past which are virtually impossible to keep secure. Lamentably, security ranks a long way down the list of priorities in purchasing decisions. Even when it ranks highly, most people lack the expertise to make a judgement about what comprises security -- WinNT says "world class security" on the box -- and that's the level of depth many people are prepared to explore.

    DO NOT post exploits to the general public; insist that securityfocus, bugtraq, and others only allow legitimate developers to view them. Exploits are the equivalent of guns and ammo, and there is a great need for background checks!

    I highly doubt this one will ever work -- recent piracy and weak-crypto issues have demonstrated how difficult it is to restrict the flow of information. There are legions of "my leet hax0r expl01t-4rch1v3 hou5e o p4in" sites, and it only takes a single leaking "legitimate developer" before every one of them has a 0-day exploit.

    It's probably more feasible to use free information flow as an advantage -- ready availability of both the information and the fix. "Information wants to be free," whether right or not, is a difficult force to fight.

    We need to express leadership in the open-source community to make the distros have secure default configurations, and automatically alert users of security problems, and allow them to choose to install patches. This could be integrated with policies at security sites.

    No argument with this one -- most vendors are pretty bad about this, because a shiny new installation sells best when it appears to be just bursting with new functionality, and "Install Everything" is still one of the more popular options in your typical installation. Linux distros have an elevated problem with that option because they ship with a ton of software, rather than a skeletal OS-and-GUI-shell which lets you choose whether you want to play Solitaire or not. Though I look forward to the announcement of a vulnerability in Win32 Solitaire.

    So, maybe: the installer offers a cronjob to check the updates site every night, or offers to subscribe you to the vendor's security-announce list. The updater (autorpm, update agent, etc) lets the user pick a notify-only, fully-automatic or run-on-request mode. To gain any acceptance, such an updater must provide anonymity (e.g. autorpm against ftp, debian's apt-get, etc), cryptographic security (e.g. autorpm's use of a gpg keyring with the vendor key) and optionality (see above).

  • "Very few novice Redhat 6 users, myself included, actively monitor the security problems addressed at bugtraq or securityfocus"

    You don't really have to (if you don't run any servers, or server services). You do have to follow your distributor's security advisories, but that is the case with Microsoft too.

    With Red Hat, just subscribe to their security mail list, and get the appropriate bugfixes from a local mirror. (just like anybody should do if they run Windows)

    "DO NOT post exploits to the general public; insist that securityfocus, bugtraq, and others only allow legitimate developers to view them."

    It is very hard to describe a security problem in words without, at the same time, giving the recipe for abusing the security hole. Personally I really don't think the posted exploits make any real difference; the stupid Skript Kiddies probably don't use them much, since they prefer "ready-made" exploits & root-kits (hence their name), and the smart hackers really don't need them either.

    The routine on BugTraq stems from decades of experience in dealing with security issues and major software vendors; the problem was that the vendors _ignored_ security issues, hence the public pressure. Microsoft has allegedly denied a security issue with the statement that "it was only a hypothetical security hole", an attitude that helps explain the release of exploits.
    But the point is that BugTraq actually is on the side of the end users; without BugTraq the skript kiddies would roam free on the unknowing public's computers, and the vendors wouldn't really care.

    Check BugTraq out for fun, and see how most bug reports and exploits are accompanied by a patch or a workaround. Read their FAQ too, to see more about "full disclosure".

    "Realize the useability and security go hand in hand"
    No they don't, and that is the problem (and the reason why the Melissa and ILOVEYOU virus spread like wildfire). Security means restrictions, and people don't like that.

    A lot of the problematic default services in e.g. RH Linux stem from historical reasons; people running Linux were likely to use it as a server on an internal LAN. But the Linux landscape has changed a lot; the majority of Linux boxes now run as desktop PCs, with either dial-up or xDSL. And I wholeheartedly agree that all the (Linux) vendors should start making their distros more secure by default, for this large segment.
  • Publishing exploits also expands the knowledge base of how typical exploits work.

    Fifteen years ago most programmers would not have believed that you could write a program that overflows a buffer in a second program, causing it to do something unintended other than dumping core. Now people understand that buffer overflows are a real issue, but only because of the huge library of published exploits (a minimal sketch of the bug class follows below).
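
    A minimal C sketch of the bug class being described (an illustrative toy, not one of the published exploits the poster refers to): a fixed-size stack buffer is filled from attacker-controlled input with no length check, so a long enough argument runs past the buffer and overwrites adjacent stack memory, including the saved return address.

        /* toy_overflow.c -- hypothetical example for illustration only */
        #include <stdio.h>
        #include <string.h>

        static void greet(const char *name)
        {
            char buf[16];
            strcpy(buf, name);            /* BUG: no bounds check on 'name' */
            printf("hello, %s\n", buf);
        }

        int main(int argc, char **argv)
        {
            if (argc > 1)
                greet(argv[1]);           /* a long argv[1] smashes the stack */
            return 0;
        }

    Feeding it a few hundred bytes of argument produces the crash that, with a carefully crafted payload, becomes the kind of exploit the thread is arguing about.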
  • No, his manager is doing the right thing. Mr. Security Administrator needs the right information to tell Mr. Important Executive exactly what's at risk and what the cost is, and whether immediately running around patching all possible servers is the only way to prudently reduce the risk (versus, say, doing something else, like watching very carefully).
    Without full disclosure, Mr. Security Administrator can't possibly do this. He's reduced to saying "fix everything immediately because CERT says so". This is not a real convincing argument in a world where you have to justify costs and benefits.
  • But they don't.. even then. If someone gets owned they immediately blame the perpetrator (and rightly so) and usually have very little vengeance left over for the software company. Most software consumers believe that "hackers" or "crackers" or whatever you want to call them make the bugs in software, not the software makers.
  • The recent LOCALE vulnerabilities are not easy to patch on some commercial Unixes. They are deep in the code.

    Also a decent software provider can't release a patch without THOROUGHLY testing it first.

    M$ has done that kind of stupid thing many times, for example...

  • I don't see that the exploits are needed. Sure, they should release them to those responsible for fixing the software, but there's no need to release them to the general public.

    I would have to agree with this. Furthermore, by publishing exploits to the general public, you are just giving script kiddies the tools to break into something. Why make it easier for them to break into machines? If they are gonna do it, at least make them code up the exploits themselves. And hey, who knows. That young kid who right now is whipping up some exploit might grow up and turn into a serious, responsible kernel hacker.
  • by konstant ( 63560 ) on Sunday October 08, 2000 @11:26AM (#722382)
    So this isn't a "complete embrace of full disclosure" huh? What exactly do you want? Possibly CERT should crack the app or site for you and hand you the root password as proof?

    Full disclosure is the right way to go... WHEN handled sensibly. You have no need for a coded exploit - if you can't write it yourself, what chance do you have to understand it? And if you don't understand it, what possible LEGITIMATE use do you have for it?

    I am always irritated by people who make flip remarks like "security through obscurity is proven not to work", when the basis for their remarks is that some vendors didn't patch known vulnerabilities in the days when STO was more prevalent. In reality, the aim of information security is NOT to eliminate all security holes. The aim is to protect legitimate users from service interruptions and abuses. It's not that difficult a distinction, guys. For example, while MS didn't improve LanMan until l0pht released l0phtcrack, neither was anybody cracking it! The theory of some full disclosure zealots is that if all vulnerabilities aren't released and coded up within 24 hours of discovery, some shadowy breed of "super hackers" out there will find them in time and exploit them. Guess what - these super hackers DON'T EXIST. The number of people actually capable of discovering new holes AND who are shady enough to exploit them is so tiny that the odds are high an average user will never be affected by them. Most of these people spend all their time coding up "exploits" for skript kiddies today anyway!

    CERT has it right. Disclose the vulnerability to the vendor. Give them A LOT of time to fix it, and a lot of goodwill. Software companies can be slow on their feet - they can't address every problem that crops up in the 12 hours you give them until you announce "they haven't responded". But if the problem is not patched in that liberal amount of time (45 days seems enough to me) THEN feel free to shout from the rooftops and embarrass the suckers.

    Keep in mind that your enemies are the skript kiddiez, NOT the corporations or end users. For some reason it is easy to lose sight of that fact in the world of infosec, where everybody believes they are unusually smart and the companies they correspond with unusually stubborn. I know - I work in that field and ego is a dangerous thing. Don't let it blind you to what should be your real goal - helping people improve their lives.


    -konstant
    Yes! We are all individuals! I'm not!
  • hey, if your manager sucks that's a problem by itself

    there's no way you can blame CERT for that

  • When I read "will fully disclose all vulnerabilities in software that come to it's notice 45 days after the fact whether or not companies have provided a fix", I took that to mean that they would never provide full disclosure before 45 days. Not true at all; they will provide disclosure within 45 days. Big difference. Also (pet peeve, but I wouldn't post if only for this), you spelled "its" wrong.
  • by tmu ( 107089 ) <.todd-slashdot. .at. .renesys.com.> on Sunday October 08, 2000 @11:38AM (#722385) Homepage
    Let me first say that, in general, I welcome this change to CERT policies. CERT announcements have become increasingly less relevant as the bugtraq list has grown and as the SecurityFocus team has added better tools for tracking and researching vulnerabilities. One interpretation of this change is that it represents a desperate cry for relevance by the CERT folx.

    Here's the rub, though: there are some vulnerabilities that cannot be fixed in 45 days. 45 days is plenty to fix a buffer overflow in a single software package (or even a common overflow in a group of packages). It is not nearly enough to fix a protocol weakness.

    An excellent example of this is the SYN Flooding attack perpetrated on PANIX in NYC years ago. Let's rewrite history and suppose that the attack was mailed to CERT first (and not used in public first). CERT then would mail the details of the attack to the security contacts of every operating system at the time (since the idea of SYN queue resource exhaustion was viable on every IP stack at the time). And then those vendors and maintainers would do what?

    Well, fix it, of course, right? The problem is that the fix isn't obvious (it still isn't obvious, years after the attack). You can reduce the SYN queue and time things out, but then you can get in trouble by timing out connections from viable end-hosts. You can use Dan Bernstein's SYN cookies (although it took someone like Dan to come up with these--they are entirely non-obvious to the average protocol stack maintainer; a toy sketch of the idea appears below).

    The problem is that the TCP protocol didn't envisage an entry on the SYN queue (SYN received, SYN+ACK sent, waiting for the final ACK to complete the connection) as a resource that needed to be managed carefully. As a result, there's no easy way, in the protocol, to avoid this resource exhaustion correctly in all cases.

    In situations like these, 45 days is woefully inadequate. It's not clear if a year is adequate. I like the idea of forcing vendors to respond promptly and get this stuff fixed. I worry about the trend of using the innocents as cannon fodder (as described by Marcus Ranum, whose homepage at http://www.clark.net/pub/mjr appears to have disappeared. anyone know where it is now?).

    Anyway, just wanted to point out that this is not as simple as the shrill FULL DISCLOSURE!!!!! folx are making it out to be.
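
    Since Bernstein's SYN cookies come up in the comment above, here is a toy C sketch of the underlying idea (a simplified illustration under stated assumptions, not the real kernel construction): instead of queueing per-connection state when a SYN arrives, the server derives its initial sequence number from a keyed hash of the connection 4-tuple plus a coarse timestamp, and validates the returning ACK by recomputing that value, so half-open handshakes consume no memory.

        /* toy_syncookie.c -- simplified sketch of the SYN cookie idea */
        #include <stdint.h>
        #include <stdio.h>

        /* cheap integer mixer standing in for a real keyed MAC */
        static uint32_t mix(uint32_t x)
        {
            x ^= x >> 16; x *= 0x7feb352dU;
            x ^= x >> 15; x *= 0x846ca68bU;
            x ^= x >> 16;
            return x;
        }

        static const uint32_t secret = 0xdeadbeefU;   /* per-boot random key in practice */

        /* cookie used as our initial sequence number in the SYN+ACK */
        static uint32_t syn_cookie(uint32_t saddr, uint16_t sport,
                                   uint32_t daddr, uint16_t dport, uint32_t minute)
        {
            uint32_t ports = ((uint32_t)sport << 16) | dport;
            return mix(saddr ^ mix(daddr ^ ports ^ secret ^ minute));
        }

        /* on the final ACK: accept if the acknowledged ISN matches the cookie for
         * the current or previous minute -- no SYN queue entry was ever allocated */
        static int cookie_valid(uint32_t isn, uint32_t saddr, uint16_t sport,
                                uint32_t daddr, uint16_t dport, uint32_t minute)
        {
            return isn == syn_cookie(saddr, sport, daddr, dport, minute) ||
                   isn == syn_cookie(saddr, sport, daddr, dport, minute - 1);
        }

        int main(void)
        {
            uint32_t isn = syn_cookie(0x0a000001, 40000, 0xc0a80001, 80, 7);
            printf("cookie=%08x valid=%d\n", (unsigned)isn,
                   cookie_valid(isn, 0x0a000001, 40000, 0xc0a80001, 80, 7));
            return 0;
        }

    The real implementations also squeeze an encoding of the negotiated MSS into the cookie and use a proper cryptographic hash, which is part of why, as the parent says, it took someone like Dan to come up with it.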

  • First off, the award for taking the longest to do something about a policy broken from the get-go goes to CERT :)

    But I'm glad to see they fixed it; maybe it'll make CERT something more than amusing now. E.g.: "You just got that fixed? I mean, CERT will be releasing an advisory any day now!"

    Second, for those who don't understand the role of exploits... I worked for a software house as a sysadmin/manager at one point. It took the IIS vulnerability, and the speed with which a working exploit came out on BUGTRAQ, to prove to our head developer that he needed to check for buffer overflows in code, as he previously felt that creating an exploit would be impossibly hard.

    Once again, thanks CERT!
    ----
    Remove the rocks from my head to send email
  • Your apparent "super hackers" do exist. It is solely because they are not script kiddies that they don't show up on your radar. Dude, when a sploit is released it has been floating around the underground for at least six months. In a moment of weakness, the people who own thousands of boxen a year have picked over their list of sploits for the oldest and most common ones and handed over a particularly useless one for some zero-day warez that some pup on irc says he's got. Pup gives sploit to pup and eventually it ends up in the hands of a "white hat" who goes and posts it on bugtraq. This happens a LOT. The problem is that admins don't beleive that there is a bug unless they have been cracked or can prove it to their peers by cracking themselves. Why? Well, getting the source, applying a patch, recompiling, reinstalling, testing, dealing with someone bitching that you broke something because you touched a config file (or an install script did)... etc, is a LOT of trouble and your admin is not gunna do that unless there's a damn good reason. So either make your patches easier (yes, I know binary patches are infeasible) or release the sploit and the patch as fast as possible so these "super hackers" and these "script kiddies" don't get on our boxes.
  • I think people are arguing that someone shouldn't write an exploit. The argument is, you may have the right to stand up on a soapbox, but I don't want you to.

    The government tried to stop encryption technology from getting out because they feared its use by black hats. DeCSS is an exploit, of course... seems like value judgements are being made about people, not principles.

  • Bah! Script kiddies don't scour bugtraq for sploits. Sploits are a commodity; they are traded around and are valuable because they are scarce. So no, securing access to SecurityFocus and bugtraq is not going to do anything.
  • I would like to commend everyone who responded to my post in a very professional and helpful way. Slashdot is at its best when people like you participate. Sorry if I seem like a troll; I would like to see the open-source community succeed, and I'm just pointing out what I perceive as deficiencies. You have all corrected my mistakes, such as forgetting about putting pressure on the software companies themselves. I'm glad that there is some sort of consensus about having secure installations, and I hope there is some sort of effort underway to do something in open-source that provides the functionality of the autorpm pay service from Redhat. Again, thanks for the interesting conversation.

  • For example, while MS didn't improve LanMan until l0pht released l0phtcrack, neither was anybody cracking it!

    Um.... no one was cracking it that you know of. And, let's see now, if l0pht did it then it's fairly likely that a) someone else would and b) relatively soon. Would it not have been much better for Microsoft to have taken care of this problem before someone wrote a crack? After all, you're only guessing that l0phtcrack was the first... a truly malicious cracker wouldn't advertise their methods, as that leads to people doing something about them.

    Keep in mind that your enemies are the skript kiddiez, NOT the corporations or end users. For some reason it is easy to lose sight of that fact in the world of infosec, where everybody believes they are unusually smart and the companies they correspond with unusually stubborn. I know - I work in that field and ego is a dangerous thing.
    Actually, your enemies are anyone and everyone who threatens the integrity of the data or systems you protect. Certainly, many of these threats seem to come from script kiddies, but, to be quite frank, the more serious threats come from the people who are actually capable of creating the scripts (they do have to come from somewhere, you know). Knowledgeable, effective crackers do exist, and are a real, if somewhat less prevalent, threat. Even so, don't rule out corporations. The figures for industrial espionage are on the rise, the presumption being that the internet, etc. are providing corporations with another unregulated playground which they can try to abuse in relative safety. Don't make the mistake of focusing tunnel vision on the script kiddies... there's more to the world of threats than that.

    As for ego, well, I do agree that it's a problem in our field (in most fields, in fact), but don't forget to include yourself in that bunch. Do I have ego issues? Yes! Hopefully by recognizing that tendency I can prevent it from being a problem, but it takes acknowledging that ego inflation seems to be a natural tendency to which I am not immune. I think you would do well to remember this, as one of the fatal flaws of any security officer lies in the assumption that they are in possession of the sole correct methodology - an impression I get strongly from your post.
    --

  • It's funny you should mention the TCP SYN attacks on Panix [panix.com], because I actually did e-mail a description of this problem to the CERT [cert.org] a full three years before it was used as a denial of service attack. I also wrote to the IETF [ietf.org] main mailing list a more general observation about denial of service attacks, and the need for all ISPs to do ingress filtering of packets based on IP source address in order to have a first approximation of the DoS attack source (who you then go and stomp); a minimal sketch of that source-address check appears below.

    The CERT didn't get it. They did nothing about it until Panix was attacked.

    The responses on the IETF list mostly moaned about the cost of adding all those filters to all those CPE routers, and how ingress filtering would stomp one mode for mobile IP...

    Three years later, people were a whole lot more interested in dealing with this.
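
    A minimal C sketch of the ingress-filtering check argued for above (a hypothetical illustration; real routers express this as access lists or reverse-path checks, not C): on a customer-facing interface, a packet is accepted only if its source address falls inside the prefix assigned to that customer, which is what makes spoofed flood traffic droppable or traceable at the edge.

        /* toy_ingress.c -- illustrative source-address check, not router code */
        #include <stdint.h>
        #include <stdio.h>

        struct prefix {
            uint32_t network;   /* network address, host byte order */
            uint32_t mask;      /* e.g. 0xffffff00 for a /24 */
        };

        /* permit only packets whose source lies inside the customer's prefix */
        static int ingress_permit(uint32_t src, const struct prefix *assigned)
        {
            return (src & assigned->mask) == assigned->network;
        }

        int main(void)
        {
            struct prefix customer = { 0xc0a80100, 0xffffff00 };   /* 192.168.1.0/24 */
            printf("%d %d\n",
                   ingress_permit(0xc0a80105, &customer),    /* 192.168.1.5: permit */
                   ingress_permit(0x0a000001, &customer));   /* 10.0.0.1: spoofed, drop */
            return 0;
        }

    The check only helps against spoofed floods if it is deployed at the edges by the ISPs, which is exactly the cost the IETF responses were moaning about.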

  • An excellent example of this is the SYN Flooding attack perpetrated on PANIX in NYC years ago. Let's rewrite history and suppose that the attack was mailed to CERT first (and not used in public first). [...snip...] Well, fix it, of course, right? The problem is that the fix isn't obvious (it still isn't obvious, years after the attack).

    There are lots of problems where the fix isn't obvious. They're design flaws in the TCP, IP, or whatever protocol. SYN attacks are design flaws in the protocol.

    I agree with you that SYN attacks, and other DoS attacks, don't seem to have an end. The point is, we cannot actually say that disclosing them has been a Bad Thing (TM). They SHOULD be pointed out. As another poster who replied to you said, he wrote about it three years before Panix was hit. Nobody was interested because the problem was only theoretical: ingress filtering would be too expensive, it would ruin mobile IP, and so forth.

    The solution to the SYN-flooding attacks is, of course, ingress filtering. The trouble is that nobody wanted to do that before the SYN-flooding tools existed. And seriously, do you think it would've been better if nobody had ever disclosed it? Do you really think it's better to have an extremely weak infrastructure, instead of putting the infrastructure through peer review again and again until you find all the bugs?

    Personally I'm glad the SYN-flood attack tools were made publicly available. I'm glad that smurf was made public.. and so forth. Without them, we wouldn't get ISPs to do ingress filtering; people wouldn't do anything to try to prevent it. Now people at least TRY.

    Oh, I could rant on forever, but I think i'll stop now. :)


    --
  • by arcade ( 16638 ) on Sunday October 08, 2000 @11:35PM (#722394) Homepage
    Very few novice Redhat 6 users, myself included, actively monitor the security problems addressed at bugtraq or securityfocus, out of ignorance or lack of time.

    Or, as it was for me, SuSE 5.1 or 5.2 (don't remember which one) that had the qpopper vulnerability. I was cracked, and afterwards I *love* the resources that secfocus, rootshell, packetstorm and so forth have provided for me.

    wu-ftpd exploit

    If an open source program has a security fix, people will run a diff and find the bug. If you've seen the exploits floating around, they are mostly written by kiddies or friends of kiddies - people who put "DO not distribute" at the top of the comments in the code. The code is of course circulated among 'the eLiTe uND3Rgr0und', and after a relatively short time it gets into kiddie hands, via IRC or whatever. They don't need bugtraq for this.

    DO NOT post exploits to the general public; insist that securityfocus, bugtraq, and others only allow legitimate developers to view them. Exploits are the equivalent of guns and ammo, and there is a great need for background checks!

    No way. I insist on being able to review the exploits, review the vulnerabilities and so forth. I want to patch my holes, but I want to know that they're there before I go ahead and patch. Also, the exploits put a fire under the developers' asses. It makes sure that they do produce a fix, and fast. I, as a security admin for my company, want those fixes ASAP. I don't want to live for months without them because there's a bunch of lazy admins in the world who should "be protected". No thank you..

    We need to express leadership in the open-source community to make the distros have secure default configurations,

    Agreed. Nothing but sshd and auth should be started by default. Everything other than that should have to be specified explicitly, IMHO.

    and automatically alert users of security problems,

    No way. NOOO way. I don't want the distro to automagically check for anything. That should be made an OPTION to ENABLE, not something that should be forced upon people. NO way..

    *shudder*

    Realize that usability and security go hand in hand, and consumers, in the long run, are going to support the OS that gives them the fewest headaches

    Yup, and therefore distros should be shipped without many daemons enabled by default. The full disclosure policy is not affected by this.


    --
  • I worry about the trend of using the innocents as cannon fodder (as described by Marcus Ranum, whose homepage at http://www.clark.net/pub/mjr appears to have disappeared. anyone know where it is now?).

    Oh, I forgot to comment on this. Are people still taking Marcus Ranum seriously? After his speech at Black Hat this year? Of course, one should embrace new ideas and so forth, but hiding the exploits and not letting the public see vulnerabilities is hardly a new idea. It was the way things worked in the past. And the past shows that IT DOESN'T WORK.

    I don't care how many "innocents" are used as cannon fodder. I want to be able to make sure that MY system is secure. The same applies to every other security-conscious admin out there. We scour bugtraq, pen-test, vuln-dev, incidents, and so forth. I want to be able to secure my system, damn it. I don't want the information HIDDEN from me.

    If people don't care about their security, *BAD FOR THEM*. I want to be able to secure MY systems, and I want people like me to be able to secure THEIR systems, without having to wade through a bunch of NDAs, be part of special committees, or whatever.

    Furthermore, how do you think programmers will ever learn to program securely if they can't follow security lists where exploits are shown and openly discussed? Heh, they should teach themselves magically, perhaps?

    Blargh. No, Ranum's ideas are outdated, and really, I lost all respect for the guy during this year's Black Hat Briefings in Las Vegas.


    --
  • no, sorry. you need to learn your trade. i expect the folks building my house to know if there was a recall of any of the various bits used to build my house. i expect them to know which materials are appropriate to use in the climate i am in.

    in other words i expect them to be skilled.

    and for sysadmins of linux systems i expect that even more because they have a freely available os they can install on as many test boxes as they like. and there are docs up the wazzoo and tons of other resources.

    in other words, *you* did something stupid, *you* need to take steps to fix it.
  • by rwm311 ( 24383 ) on Monday October 09, 2000 @01:45AM (#722397) Homepage
    I've been working for a Linux security company for the past few months, and was pretty much on top of security before that. I can honestly tell you that to me CERT looks like a joke.

    1) CERT is way behind everybody else

    They issued an advisory about wu-ftpd and rpc.statd in July or August, when exploits and proofs of concept had been on bugtraq since late May.

    2) CERT has turned into a laughing stock.

    The funniest thing I think I've seen in a long time is Jamie Rishaw's mock advisory about the Sony Aibo [securityfocus.com]. This is just a slap in the face of CERT.

    I'm not mocking the concept... an entity such as CERT serves a very big purpose. Being associated with the SEI, one would think a much more active one. However, since white hats are just as skilled as the black hats, it doesn't take somebody at the SEI to write an exploit. By the time they do, somebody has already posted it to bugtraq or it's already out in the wild.

    Just my $.02.

  • Of course, not telling exactly what the exploit is doesn't do a thing... all the script kiddies can easily go to one of the other sites mentioned and find out what the problem is and how to take advantage of it. Unless all of the websites decide collectively not to give out the exploits, it doesn't really do any good. Hopefully, the other sites will follow suit... keeping the faulty software away from script kiddies.


  • I mean SecurityFocus/BugTraq has been doing full disclosure forever. Why didn't CERT start sooner? And if the k1dd13s want exploits they can get them from SecurityFocus or Packetstorm or something.

    Sometimes you by Force overwhelmed are.
  • by yamutt ( 237300 ) on Sunday October 08, 2000 @10:22AM (#722400)
    They've set up a disclose-45-days-after-the-fact policy, but they also put a bunch of loopholes in there allowing for later (or earlier) disclosure based on "negotiations" with the affected vendor and also the severity and sensitivity of the hole. So essentially what it says is "we'll disclose holes 45 days after they are reported, unless anyone gives a good reason why not, where 'good reason' is solely up to our discretion." Not really very cut-and-dried, when you get down to it.
  • by crow ( 16139 ) on Sunday October 08, 2000 @10:19AM (#722401) Homepage Journal
    I don't see that the exploits are needed. Sure, they should release them to those responsible for fixing the software, but there's no need to release them to the general public (unless the general public is who is responsible for fixing the software, as in Linux).

    Besides, it shouldn't take anyone more than a day or two to write an exploit, anyway.
  • 45 days seems like a long, long time. In a single day, the average kiddie can go through a hell of a lot of computers. Do vendors really need that long to provide a fix, or are they just dragging their heels?

  • Haha, this can't be good for Mickeysoft!
    Open software would be no biggie even if they disclosed it 45 hours later! MS might not fix a bug after 45 weeks!
  • With 45 days of secrecy, that gives companies a month and a half to put out a patch. This is probably way too long. 15 days to put out a patch is a bit more reasonable.

    Of course, if you consider it as 15 days to put out a patch and 30 days for people to get it installed, then it isn't too bad.

    Personally, I think it should be a two-tiered policy. 15 days of secrecy once the software vendor has been notified. Then, if a patch has been released that fixes the problem, an announcement of "unspecified security problem" with a reference to the patch should be released, followed by a full explanation 30 days later, after people have had a chance to install the patch. If no patch is released, then the full details are released at the end of the initial 15 days.

    Oh well, 45 days is still better than never--much better.
  • Why do they insist on waiting for 45 days while other security organizations (bugtraq, packetstorm, securityfocus) release them immediately? Doesn't this just hurt people who rely only on CERT (like some people i know)?

  • there's no need to release them to the general public

    I think it's intended to persuade the company to fix it. 45 days is a reasonable amount of time to fix these bugs, as long as the company is expecting that it might have to. It would be irresponsible to leave a security hole open for longer than is necessary to fix it, but many companies are irresponsible.
  • Considering that the vendor companies have limited resources, it's the only way to make most of them take responsibility for their own product's quality. As a member of the InfoSec community, this warms my heart.

  • by --delphi-- ( 131620 ) on Sunday October 08, 2000 @10:24AM (#722408) Homepage
    I think in general when bugs are first found, there should be a small window of time given to the developers to fix the bug. There is no need to publish a bug only to let every script kiddy out there crack your box. 45 days is pushing it way too far. Sure it's good that CERT is going to release more information, but I think that a more realistic amount of time is three days. I think I read somewhere that someone from OpenBSD [openbsd.org] stated that most security bugs can be fixed within an hour if the bug is known. Three days would be plenty of time. I know that some open-source zealots might think that all bugs should be reported immediately, but in truth this should only be the case when it is a true community project such as the Linux kernel. Just because something is put under the GPL does not mean that it doesn't have a main set of centralized developers.

  • There shouldn't be a blanket rule for security disclosure. If something is broken in Word, fine, post that after 45 days; if it's not fixed by then, the company might get a little bad press. But if there is something broken in a medical database (you notice that medical computers are always used to curry favor in these examples?) and they can't get it fixed in 45 days, should the exploit be released anyway?

    I say email the manufacturer, read what they have to say, and if it seems like they are just sitting on their butts, make the information public. But if doing so would endanger public safety, maybe a different rule needs to be applied than "release after 45 days".
  • Time taken to disseminate an exploit worldwide on IRC: 2 min.

    Time taken to fix a security hole on most BSD and Linux distros: 10 min - 8 hrs.

    Time taken to fix a security hole on a Microsoft distro: 1 day - never.
  • It might not be perfect (45 days IS a long time!), but life's about making progress, not perfection.

    It's good that CERT is making strides in the right direction, as it will force tougher, faster action on the part of companies - something those companies have tried to avoid altogether through the new copyright acts, which made them exempt from having to make fixes.

    The biggest question that remains, though -- are CERT, SecurityFocus, etc., legal now? After all, publishing security alerts IS a form of review, unauthorised reviews are illegal, and I can hardly see names like Microsoft or Sun actively encouraging people to publish major security holes.

  • This makes you wonder how many undisclosed vulnerabilities are floating around in the programs you use every day. I mean, what's security if you don't know every single detail about the program you are using? Even with a copy of the source code in hand, it's not like you have the time to read it; you're never safe.

    Oh well, time to go bury my computer in the backyard.....
  • If they were to release the server logs right after the event took place, and explain what happened, they would be asking for trouble, so it is more likely that they will take a few days to write up the report, during which time a fix could be created. This is an interesting move though, and I wonder, in the event of a hacker seeing the report and using it to hack a different company's site, whether they would be liable for any damages..
  • Today, if I write a program and it has a bug or design flaw that causes a security hole, eventually someone will discover it. There's a good chance they'll be seeking some fame for themselves and create a lot of bad press for me. There's a reasonably good chance they'll write an exploit easily usable by script kiddies, which is going to make my security-minded customers pissed off. If I'm good at PR (like MS) I might be able to direct my customers' frustration at the guy who wrote the exploit or at hackers using it. Even in the best case, full disclosure ends up costing the vendor more time and money than keeping it a secret.

    The point is that with the practice of full disclosure, there's a real opportunity for upset customers and damage to reputation for the vendor who released buggy software..... I'd guess that software vendors would do even less testing and concentrate even less on security if they knew they were free from the risks associated with full disclosure.

    If anything, what's needed is something that punishes software vendors for buggy code.

  • by wik ( 10258 ) on Sunday October 08, 2000 @11:55AM (#722415) Homepage Journal
    I might buy the "within an hour" statement if the disclosure report fully described the problem and a fix.

    Three days is short for a large organization. Somebody in charge needs to be convinced of the problem (managers do have many responsibilities and despite what we might hope to believe, fixing vulnerabilities cannot always be worked into a tight schedule). Then a competent developer needs to be allocated to do the work.

    Most organizations will have to do some sort of quality assurance/testing on all changes to the software. It's irresponsible to not do this. That's another group, another manager, another several days. If you're close to an already-scheduled release date, the fix will probably be bundled in there. It costs A LOT of money to create new installation media, to mail the fixes to customers or to even go through (yet another) group to provide fixes on an external FTP server. In all, 45 days could be quite short!

    Opinion [flamebait] section - it seems that in an open source environment, the testing sometimes isn't quite as disciplined (offset by more frequent releases). An acceptable fix may already exist in the disclosure report because somebody other than the programmer who applies the main source tree fix has been able to think about the problem. This also reduces the time to provide an official fix for this sort of project.

  • by startled ( 144833 ) on Sunday October 08, 2000 @11:56AM (#722416)
    For example, while MS didn't improve LanMan until l0pht released l0phtcrack, neither was anybody cracking it! ... CERT has it right. Disclose the vulnerability to the vendor. Give them A LOT of time to fix it, and a lot of goodwill.

    It's funny that you mention Microsoft during your argument about giving vendors time to fix things. Microsoft won't even fix widely known vulnerabilities, much less things that are pointed out to them and kept private. You're talking about the company that maintained for DAYS after ILOVEYOU destroyed data everywhere that there was absolutely no problem with the way Outlook worked. They had been notified of this problem MONTHS before ILOVEYOU hit, and chose to do nothing. And then after it hit, they still chose to do nothing for a few days.

    Keep in mind that your enemies are the skript kiddiez, NOT the corporations or end users.

    Hmm, none of my friends have been sued by script kiddiez, or been threatened by their lawyers. I'm not saying I'm in love with script kiddiez, but the corporations are capable of doing a lot more damage.
  • by Karmageddon ( 186836 ) on Sunday October 08, 2000 @11:58AM (#722417)
    being against the disclosure of exploits is being against a form of open source, isn't it?

    I'm not trying to be a smartaleck; it seems like an interesting question to me.

  • All vulnerabilities reported to the CERT/CC will be disclosed to the public 45 days after the initial report, regardless of the existence or availability of patches or workarounds from affected vendors. Extenuating circumstances, such as active exploitation, threats of an especially serious (or trivial) nature, or situations that require changes to an established standard may result in earlier or later disclosure.
  • What possible good does it do to give script-kiddies the tools they need to bust systems that, otherwise, they are too stupid to be able to figure out themselves?
    It gives people who run their own systems a very concrete way to tell if they're vulnerable. When I'm not sure if my particular config is vulnerable, I'll grab an exploit and see for myself.

    --
    To go outside the mythos is to become insane...
  • No one's arguing that someone shouldn't write an exploit, or that there should be some way of stopping someone from writing or distributing it. What they're claiming is that it's not necessarily wise for anyone to continue to distribute it.

    In other words, you may have the right to stand up on a soap box and say whatever you want. But I don't have to give you the soapbox.

  • I don't see why you say his manager sux. Option 1: waste shitloads of money and blow an entire strategy which may make your company successful, to fix a bug that may or may not affect you, because you can't assess the situation properly. Or option 2: ignore that "potential problem" and be cracked by some juvenile who will probably just install an eggdrop bot on your box but at worst could down your server for two days. Either way there is risk; you have to weigh the risks to make your decision.
  • Heh.. firewalls. Yer right. You know what "firewall" means to most companies? A wasted machine. "Can we use this box to run sendmail?" "Can we run apache on this machine?" "Why can't I ftp to the machine in the server room with all those EFTPOS terminals connected to it?" This is why companies started selling sealed boxes with "firewall" written on them, so people wouldn't think they were real machines. Unfortunately none of these companies can make a secure product (basing them on WinNT doesn't help), and the list of techniques for getting around firewalls is as long as your arm.

  • bug free code would be nice, but quick response to bugs is better.
  • werd, you are already owned.
  • Bah. Writing bug-free software is expensive, and customers don't care about security. There's a way out of this riddle, but it doesn't involve market forces.
  • OK, I'll feed the troll....

    Very few novice Redhat 6 users, myself included, actively monitor the security problems addressed at bugtraq or securityfocus

    Then perhaps you should. Heck, if you don't have the time to wade through Bugtraq, then subscribe to your distro vendor's security notification list. They're typically only a day or two behind us. Or install our pager app, and configure it to mail you when something you're running has an advisory released for it.

    Or if you just refuse to watch for patches, then run OpenBSD. It's not totally perfect, but it will allow you to be a lot less vigilant than most other OSes.

    Or do like most of the world, and wait until you get nailed. When you recover, apply all the security patches available at the time as well, while security has your attention.

    DO NOT post exploits to the general public; insist that securityfocus, bugtraq, and others only allow legitimate developers to view them.

    Exploits serve many more people than just the attackers, and there is no such thing as a list of "legitimate developers".

    Realize that usability and security go hand in hand

    Actually, they're quite in opposition.

  • You don't seem to get the basic idea behind full disclosure security, and moreover, you fall into all the classic pitfalls of "security through obscurity". First off, you don't really know what crackers are capable of, do you? Do I? No, I don't. I have never met a true expert cracker, and have only met a couple of script kiddiez on irc. In short, I don't really know what crackers are capable of, but most importantly *I don't assume that I do*. When you start to make blind guesses about a security risk, you open yourself up to becoming overconfident, and you make yourself vulnerable.

    Second, you say "You have no need for a coded exploit - if you can't write it yourself, what chance do you have to understand it? And if you don't understand it, what possible LEGITIMATE use do you have for it?" But without exploit code, how does one test whether a security hole has *really* been fixed?
    Code is the lingua franca of the open source movement; code is often the best possible way to demonstrate a security hole to programmers, and more importantly, what mistakes other programmers should try *not to make*.

    Third, and the main reason that I agree with full disclosure, is the idea that security comes when all the good guys have every possible tool at their disposal to secure their networks. I don't want company A (cough Netscape) or B (cough Microsoft) to sit around debating whether they should release information about security problems, or trying to pretend problems don't exist when they do. I want the businesses that I work with to trust me, and to let me be the judge of how vulnerable I am.
  • So, you'd rather have them all using private tools, rather than the public ones that everyone can get their hands on to examine, and write IDS signatures for?
  • I like the 45-days-and-then-disclose idea because it's kind of like the MAPS blackhole list. I'd like it even better if there weren't any 'negotiations' involved, but whatever. It's Carnegie Mellon, so it must be cool. =)
    --
    Peace,
    Lord Omlette
    ICQ# 77863057
  • by imp ( 7585 ) on Monday October 09, 2000 @07:37AM (#722430) Homepage
    there's no need to release them to the general public
    I disagree. The public at large must have some way to test whether their systems are vulnerable to attack. The easiest way to do this is by running the exploit to see if you are safe or not. This is especially true for systems that aren't too popular today, as there might not even be a vendor for the product any more (when was the last time a MIPS hardware company issued a bug fix?).

    Developers of similar code also have a keen interest in the exploits, as do writers of secure code. When the exploits are analyzed, they often reveal areas of exploration which heretofore might have been thought completely safe, or whose danger was unknown. Executable stacks and printf format strings come to mind here (a hardware problem and software sloppiness); a small illustration of the format-string case appears below.

    Besides, it is just against the freedoms that this country was founded upon to restrict speech.

    Warner Losh
    FreeBSD Security Officer
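
    A small C illustration of the printf format-string class mentioned above (a hypothetical sketch, not code from any advisory): when user input is passed as the format argument, an attacker can read stack words with "%x" or write memory with "%n"; the fix is simply to pass the input as data.

        /* toy_format.c -- illustration only */
        #include <stdio.h>

        static void log_bad(const char *user_input)
        {
            printf(user_input);           /* BUG: input interpreted as a format string */
            putchar('\n');
        }

        static void log_good(const char *user_input)
        {
            printf("%s\n", user_input);   /* input treated as plain data */
        }

        int main(void)
        {
            log_bad("%x %x %x");          /* leaks words from the caller's stack */
            log_good("%x %x %x");         /* prints the literal text */
            return 0;
        }

    The vulnerable and fixed versions differ only in whether the input is used as the format or as the data, which is part of why this class of sloppiness went unnoticed for so long.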

  • DO NOT post exploits to the general public; insist that securityfocus, bugtraq, and others only allow legitimate developers to view them. Exploits are the equivalent of guns and ammo, and there is a great need for background checks!

    No way. I insist on being able to review the exploits, review the vulnerabilities and so forth. I want to patch my holes, but I want to know that they're there before I go ahead and patch. Also, the exploits put a fire under the developers' asses. It makes sure that they do produce a fix, and fast. I, as a security admin for my company, want those fixes ASAP. I don't want to live for months without them because there's a bunch of lazy admins in the world who should "be protected". No thank you.

    No. Make an exploit. Send it to the developer. Publish it on bugtraq only if the developers don't respond. Exploits are ammunition; handle them with care.

    Having developers/distributions distribute fixed versions before the exploit gets into the wild is a win for everyone.

  • No. Make an exploit. Send it to the developer. Publish it on bugtraq only if the developers don't respond. Exploits are ammunition; handle them with care.

    Of course, notifying the developer first and giving him a week or two to make a fix is a nice thing to do, as long as the exploit isn't already found in the wild.

    But it should always be posted to bugtraq when a fix is issued, both for education and so that admins may test their systems.

    Information about the holes should be made public, including the information on how to exploit them.

    Having developers/distributions distribute fixed versions before the exploit gets into the wild is a win for everyone.

    That depends. Personally I really like the idea of pushing closed source developers a bit by publishing the exploit before contacting them. It creates an incentive to open-source it. If it had been open source, you would've published a patch along with the exploit.

    I'm sure those who use closed source solutions will disagree. ;)


    --
  • I agree with some of the commentary here. This isn't a terribly crucial exchange. There ought to be full disclosure to those whose responsibilities lie in correcting bugs in software, but the Joe Schmoe public needn't have access to the info. Until they allow full disclosure (at least in part, to some relevant group), there's not much to discuss here.

    1. The Meaning of Life [mikegallay.com]
  • by Anonymous Coward on Sunday October 08, 2000 @10:31AM (#722434)
    That's right, folks. It should be illegal to post cracking tools and exploits. Why? Well, because it's damaging to eCommerce because crackers will use it. Besides, most of this stuff could be considered illegal because of reverse engineering, which is indeed banned under the DMCA which, before you start bitching, is a morally just and much needed law to protect large corporations. So just shut up and eat at McDonald's and wear Nike shoes, and drink Coca-Cola, and listen to Britney Spears, Nsync, or whatever other trendy music the RIAA wants you to listen to. Remember, you don't have the right to choose what you do with your so-called property or even your body, so just live with it.
  • by TarPitt ( 217247 ) on Sunday October 08, 2000 @10:31AM (#722435)
    Knowing the details of a vulnerability enables real risk assessment. How many of your systems are at risk? What's the value of what's at risk? It's impossible to figure this out without some notion of what the vulnerability is.
    Security Admin: "We need to immediately upgrade all our servers to fix a serious security bug."
    Executive With Money: "What's our exposure if we don't?"
    Security Admin: "I don't know. CERT just says to fix the bug."
    Executive With Money: "And we are supposed to pull people off our Vitally Important Marketing Strategy Project to immediately fix a security problem, when we don't know what the problem is and what it costs us?"
  • by G Neric ( 176742 ) on Sunday October 08, 2000 @10:32AM (#722436)
    It's not that BugTraq has a "policy" of releasing exploits. BugTraq releases what gets submitted, through moderation. The people who submit each report or exploit decide how they want to handle it, and this is how it should be. Freedom of speech means if I have something to say, I can say it.

    A lot of the "early" reports are motivated by the desire for credit for finding the bugs. It might seem petty and small minded, but so what? If that's your motivation for putting in useful work and publishing, who are we to criticize: go to it. If the companies/developers that have the security hole in their products enhance the glory for the discoverer, they might get more cooperation.

    But early reports with exploits really light a fire under the fixers and create more awareness among the victims and potential victims, so in the long run it's a good thing, IMHO. But my opinion doesn't matter: as I said, it's all about free speech.

  • You're forgetting the speed that a monolithic organisation moves at. They have to find someone who was working on the affected area, take them off the project that they're working on now, do a lot of fiddling with version control systems (which often break through corporate ineptness even though they shouldn't), and get them to fix the bug.

    With GPL and BSD code, you will get a lot of security experts looking into it immediately. A large company can't afford that.
  • This change is needed. There have been too many cases of vendors and users burying their heads in the sand about vulnerabilities. In practical terms, the threat usually exploits software bugs, not weaknesses of existing security mechanisms. The lack of vendor liability for errors and omissions in their software has meant that security-related bugs have been fixed only grudgingly. The UCITA has been a step backward; perhaps this will be a step forward.
  • This is fairly dumb, but it struck me as funny.

    1) Unauthorised reviews are illegal.
    2) In order to bring suit, MS writes a statement that SecurityFocus is publishing unwelcome bug reports, an illegal review of their product.
    3) SecurityFocus never gave MS the right to issue a statement about the SecurityFocus product/service.
    4) Therefore MS is breaking the law it's trying to prosecute on, just by bringing the suit.

    Sorry. I'll be quiet now...

    Drake42

  • by Anonymous Coward
    Given that these folks are in the business of selling software, I don't have a lot of sympathy for them when they do a lousy job and lose a little (or a lot) of customer goodwill and credibility.

    The frustrating part is that the customer ultimately loses the most when an exploit is used against software they're running. In a way this is good - the more they suffer, the less tolerant they will be of insecure software - hopefully putting the vendor out of business or causing them to change their ways. Of course, in the meantime, they pay the price.

  • by nullset ( 39850 ) on Sunday October 08, 2000 @10:47AM (#722441)
    CERT is no longer the "Computer Emergency Response Team."

    According to their FAQ [cert.org]:

    CERT" does not stand for anything. Rather, it is a registered service mark of Carnegie Mellon University.

    Its history, however, is that the present CERT® Coordination Center grew from a small computer emergency response team formed at the SEI by the Defense
    Advanced Research Projects Agency (DARPA) in 1988. The small team grew quickly and expanded its activities. As our work evolved, so did our name.

    When you refer to us in writing, it's OK to refer to us as the CERT® Coordination Center or the CERT/CC. Although you should not expand "CERT" into an acronym, it's appropriate to note in your text that we were originally the computer emergency response team.
  • Why the crazy sig? I'm a security engineer. Crabs are the favorite food of octopi, and if you put one into a tank with an octopus, it's soon dinner. The method the octopus uses to catch the crab is very much like how a hacker usually breaks into a system--he looks for design flaws and uses social engineering, avoiding the security mechanisms themselves. So I am in the business of ...
  • This is on a case by case basis already. CERT will delay (or speed up) the release of exploit details based on talks with the organization maintaining the software.
  • Who needs CERT then? Just tell company directly.
    Get the credit yourself. =)
  • Evil thought: This might be good for small companies. A scary CERT advisory might actually get more press attention than a small company could afford otherwise (none).
  • The sysadmins who need to be able to identify the security level of each component will be exposed during the 45-day blackout period. IMO, it is way too long. Sometimes, just knowing the nature of a security hole is enough to find an appropriate workaround, and that doesn't have to wait six weeks.

    I don't understand why they have chosen that magic number of 45 days. It depends so much on the security hole.

    I (with others) maintain an open-source package (delivered without any warranties). However, if I found a security problem in our software, as a responsible maintainer I would announce it immediately on the mailing list/web site and fix it as soon as I could. Unfortunately, you cannot expect this behaviour from major vendors, or else they wouldn't have put pressure on CERT to remain protected for this period.

    So my point is: "protect people, not vendors".

  • "Dumb" users have an advantage of "Security by no access".
    Basicly a server is granting limited access to the world. Even if that server runs only to deliver data to 3 people around the world it must grant enough access to everyone that it may verify the identity.
    Any time you grant limited access there is a danger of a defect granting unlimited access.

    "Dumb" users generally don't run server software and don't grant any access to start with.
    They only run CLIENT software. Unless the client dose something amazingly stupid (like run programs as part of e-mail or wordprocessor files or use wordprocessor files as e-mail) the user is generally safe from harm.
    (People or companys who ship client software that allows such things to happen shouldn't be trusted to write ANY software.)

    If you run a server (like NcFTP) you really should know what your doing and read the BugTraps etc.
    If your just a "Dumb" user then just use clients known to not do stupid things (like run scripts in e-mail, word processors or web browsers... or use word processors as net clients)

    Desktop Linux systems really should be devoid of server software, as it's an unneeded security risk (a quick way to check what's listening is sketched after this comment).

    If you aren't an admin you shouldn't act as one...
    If you are going to run server software you need to take full responsibility for it...
    There is no such thing as a "user-friendly" server.

    On this note... MacOS and DOS are the two most secure Internet operating systems.
    This is because they come shipped Internet-UNready..
    (DOS doesn't need Windows for Internet access.. just an Internet network driver)
    The only software running is the software being used.
    Linux distros generally come with a bunch of servers installed by default, and that is bad news for a workstation...

    From my side...
    This is the game of keeping crackers in the dark.
    We tried this in the 1970s... The result was techs who didn't know enough about security to protect themselves against crackers.
    This gave crackers the image of "all-powerful hackers" in the 1970s.. but in the 1980s the reality came through: it was just a matter of not doing some really stupid things.
    The techs didn't know these were stupid things because they weren't talking to each other, hoping to keep crackers in the dark. In the end only the techs were in the dark.

    That is the problem here. In trying to keep the script kiddies in the dark you WILL keep the techs in the dark. That doesn't mean you'll keep the crackers in the dark.

    If you publish NOTHING, you expect the defect will be fixed before a cracker discovers it. The chances of this are incredibly small.
    If you publish the fact that the defect exists, "I" can remove the offending software. The crackers can publish an exploit for script kiddies, and a bugfix will take an unusually long time, as most of the good guys don't have an exploit to work with.

    Or you can publish an exploit...

    "Dumb" users as a rule don't have anything to worry about.
    They don't run the kind of software that makes cracking posable...
    "Dumb" in quotes BTW becouse.. they aren't dumb just not techs. Thats gotta be reasonable.
    I think however the casual user should know as much about computers as they do about cars.
    Not enough to build one from ground up or how to fix an engen.. But enough to know to put gass in the tank and change the oil.
    Yes the avrage user should know if a pacage he is using has a defect. But for now the News media dose that job pritty nicely... Malisa, and "ILoveU"...
    They need not worry about defects in NcFTP.. as a rule the avrage user shouldn't be running NcFTP to start with... When they are.. thats the problem to fix...
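
    As a concrete companion to the comment above, here is a minimal POSIX C sketch, assuming a stock C compiler and loopback networking, that probes a few well-known ports on localhost and reports which ones have something listening; the port list is only an example, not a full audit:

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        /* Probe a handful of well-known ports on 127.0.0.1; a successful
         * connect() means some server is listening there. */
        int main(void)
        {
            const int ports[] = { 21, 23, 25, 80, 110, 111, 143, 515 };
            const size_t nports = sizeof(ports) / sizeof(ports[0]);

            for (size_t i = 0; i < nports; i++) {
                int fd = socket(AF_INET, SOCK_STREAM, 0);
                if (fd < 0)
                    return 1;

                struct sockaddr_in addr;
                memset(&addr, 0, sizeof(addr));
                addr.sin_family = AF_INET;
                addr.sin_port = htons((unsigned short)ports[i]);
                addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

                if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                    printf("port %d: something is listening\n", ports[i]);

                close(fd);
            }
            return 0;
        }

    Tools such as netstat give the same answer more directly; the point is that a workstation owner can find out, in a few lines, whether servers they never asked for are running.
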
  • Why does the Slashdot community think this is obscurity? Just because someone doesn't disclose their security bugs doesn't mean jack. Everyone here can still look at the source. If you want to see security holes, go look at them for yourself.

    These people are just trying to keep a secure system. You cannot do that very well by just telling all the script kiddies how to exploit your own product, ESPECIALLY when the other vendors do not have a patch to fix the hole.

    I personally commend them for waiting 45 days. There is no need for immediate anything. All this will do is add more script kiddies, more downtime, and LESS security. You all seem to want the greatest security possible. Well, I can tell you that if you leave your front door open, it's not gonna do you jack shit.
  • > unless the general public is who is responsible for fixing the software

    Who do you think your system administrators, network managers, stock brokers, insurance agents, etcetera, are? It is the General Public who need to know about these things, whether they can fix them or not!
  • but you forgot the first amendment makes 'unauthorised' reviews legal.

    In the sense that you can't go to jail for it, yes. In the sense that you can't be sued into poverty, NO. That's the problem with the civil courts today: any big company can pretty much wipe you out at will unless your case can capture enough public attention (a big gamble vs. just shutting up).

  • by pb ( 1020 ) on Sunday October 08, 2000 @10:58AM (#722451)
    First, if there's a hole in a product that has been found, the company has been notified, and nothing has been done about it for 45 days, I'm sure someone has written an exploit and is using it by then.

    Second, by publishing the details, and saying "hey, you knew about this for 45 days; don't you think your customers should know?", you're encouraging software companies to get their act together.

    If left to themselves, no, they won't fix bugs out of the goodness of their hearts. The only people this sort of thing affects are the consumers; big businesses have firewalls and probably use better, more expensive products internally, wherever possible.

    I think it's pretty sad that it has to be this way, but that's the way it is. Taking a laissez-faire attitude to big software companies doesn't seem to work, because there is too much potential for abuse.

    I also don't like the whole "unauthorized negative reviews aren't allowed" business. Who cares about freedom of speech, eh? We can have unauthorized biographies of Bill Gates, but not unauthorized reviews of Front Page. Whatever you say, guys. For example, I did a lot of benchmarking between DOS and Linux's DOSEmu; my findings at the time were that DOSEmu was about 3% slower in raw CPU than actual native DOS (testing using the DOS 32-bit BYTEMarks), but that defragging a native DOS partition was *much* faster under DOSEmu, due to Linux's cache subsystem. Those are the facts; why should they be censored just because someone else isn't happy about it?
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • pfft
    45 days is a looong time.
    Just think of all the things hax0rs can do to your server in 45 days.
    At least if you knew about the exploit you could prepare yourself, even in a closed source environment where it takes a while to get a patch.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."
