Topics: Ubuntu, Data Storage, Linux

Ubuntu Wants To Enable SSD TRIM By Default

jones_supa writes "During the first day of the latest virtual Ubuntu Developer Summit, Canonical developers finally plotted out the enabling of TRIM/DISCARD support by default for solid-state drives on Ubuntu 14.04. Ubuntu developers aren't looking to enable discard at the file-system level since it can slow down delete operations, so instead they're wanting to have their own cron job that routinely runs fstrim for TRIMing the system. In the past there has been talk about the TRIM implementation being unoptimized in the kernel. Around when Linux 3.0 was released, OpenSUSE noted that the kernel performs TRIM to a single range, instead of vectorized list of TRIM ranges, which is what the specification calls for. In some scenarios this results in lowered performance."
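A minimal sketch of what such a periodic job could look like, for the curious. The filename, schedule, and mount points below are illustrative assumptions, not necessarily what Canonical will ship; the point is simply that unused blocks are handed back to the drive in one batch rather than on every delete:

    #!/bin/sh
    # Hypothetical /etc/cron.weekly/fstrim
    # Batch-TRIM each mounted filesystem instead of relying on the
    # per-delete "discard" mount option.
    for mnt in / /home; do
        # -v reports how much unused space was passed to the drive
        fstrim -v "$mnt" || true
    done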
    • by Anonymous Coward

      acronyms, too.

      Yes. Thank gods for the 'right click Google ... ' feature!

  • by Dega704 ( 1454673 ) on Tuesday November 19, 2013 @07:07PM (#45468191)
    Well, it's about time.
  • by Cantankerous Cur ( 3435207 ) on Tuesday November 19, 2013 @07:17PM (#45468249)
    I'm still waiting for Firefox or Chrome to make themselves SSD friendly. I know we all have RAM disks, but I swear, after the OS, web browsers seem to generate the next highest number of 'writes'.
    • by Anonymous Coward

      Who gives a fuck?
      I run "unfriendly" FF, Chromium and Opera on a Samsung 830 here.
      No discard, no tuning, it also contains a swap partition.
      3.2TB written and 98% remaining life after 15 months.

      • I can only assume you're using the SSD life tool or some equivalent software. http://ssd-life.com/ [ssd-life.com]

        In the 13 months I've used mine, I've written 3.8 TB. It estimates the total lifetime for my SSD at a little under 9 years. But, honestly, why reduce this number if you don't have to?

        To make a parallel here, properly inflated tires for your car save 1-2% gas mileage. Literally pennies in gasoline per tank. But again, why waste?
    • I swear, after the OS, web browsers seem to generate the next highest number of 'writes'.

      I'd bet a lot of these writes are for caching received HTTP response bodies to disk. Otherwise, desktop browsers in low-memory environments would have to act like Firefox for Android and Chrome for Android. When I open multiple pages in tabs in these mobile browsers, they tend to discard entire pages as soon as I switch to another tab and reload them when I switch back. This interferes with my common use case of opening multiple pages in tabs and then reading them while offline and riding transit. Firefox f

      • Re:It's cache (Score:4, Informative)

        by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Tuesday November 19, 2013 @09:03PM (#45468987) Homepage
        The reason mobile browsers discard pages rather than write them to disk is that they use flash memory. Unlike an SSD, which has expensive chips and lots of them, able to spread writes around and stripe across them RAID-style for speed, the flash in your phones and tablets is a lot more like a microSD card: rather low bandwidth and fewer useful write cycles.
        • I'm aware of that. But what should I do if I want the browser to write the page to flash memory, such as if I'm about to go offline for an hour?
          • by JanneM ( 7445 )

            Firefox Mobile (at least the tablet Beta) has a "reader" mode for that. You see a small "book" icon in the address bar; it makes a "book" style reflow of the site without a lot of the navigation cruft for easier reading, but if you long-press it, it will also save the document for later offline reading.

          • I use instapaper, pocket or some other offline reader for that.

        • Why do you say that? According to this guy [thinkdigit.com], passmark scores 44MB/s read / 157 MB/s write on the iPhone 5s, which is very impressive. I am skeptical of the strange imbalance though, but according to the actual passmark website [iphonebenchmark.net], the 5s earns 19,288 DiskMarks. I don't know what a "DiskMark" is, but for comparison the iPhone 3G scored 586 diskmarks, so the "disk" in the 5s is 33x faster. For sure it's not just a soldered-on MicroSD.
          • You can now buy SDXC cards that have 90MB/s transfer speeds, so it's not impossible that it's just high speed flash. SSDs on the other hand are capable of 400MB/s transfer speeds, 800MB/s if it's one of the new PCI-e devices found in the new MacBooks.
            • Or if it's an OCZ Revo drive, which is also a PCI-e device that's been around for years.

            • A year ago I spent a bunch of money and time trying to move all my data onto SDXC so I could easily move between computers. I bought a Lexar Professional 128 GB 400x SDXC and also the equivalent 128 GB from SanDisk. It was a complete failure. The access time was unacceptable (even in internal readers, not USB readers). But the worst problem was that I had intermittent compatibility errors and transfer errors causing corruption. SD cards are fine for what they are made for - intermediate-sized files l
          • by tlhIngan ( 30335 )

            For sure it's not just a soldered-on MicroSD.

            Apple traditionally uses raw NAND flash for the phones - this gives them the advantage that the controller is all theirs in the SoC, and any performance issues are likewise in their control, rather than using eMMC, where you're dependent on someone else's controller and flash array.

            You can now buy SDXC cards that have 90MB/s transfer speeds, so it's not impossible that it's just high speed flash. SSDs on the other hand are capable of 400MB/s transfer speeds, 800MB/s if it's o

    • by JanneM ( 7445 )

      I put tmp and the Firefox cache on RAM. It really makes an enormous amount of difference in overall system responsiveness. More RAM and an SSD really are a lot more important than CPU speed for general-use machines.

    • Comment removed based on user account deletion
    • I moved all the cache directories of Firefox/Chrome/IE, and even the temp directories of the OS and all applications, to point to a cheap USB thumb drive.

      I have not looked around to find out if there are more writes happening elsewhere other than cache/temp... but I guess a super majority is taken care of.

      The above might be the reason the SSD did not 'disappear' when a power outage happened.

      Now, as far as I know the only cache I cannot control is when the virtual machines are booted up - swap spaces remain in
  • to Computer Science graduate? You know, down from kernel hacker?

    What? I still count as a nerd and this IS news for nerds...

    • by Feyr ( 449684 ) on Tuesday November 19, 2013 @07:27PM (#45468321) Journal

      TRIM makes your new flash toys go weeeeeeeee, instead of them going only wee

    • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Tuesday November 19, 2013 @07:42PM (#45468441) Homepage Journal
      Solid-state drives (SSDs) [wikipedia.org] are an alternative to hard disk drives using flash memory instead of spinning platters. This greatly improves read speeds but doesn't do quite as much for write speeds. One reason is that each sector on a solid-state drive can only be erased a finite number of times before it starts failing. For this reason, the microcontroller in an SSD performs wear leveling [wikipedia.org] to spread writes across more physical sectors. TRIM [wikipedia.org] is a feature that an operating system can use to notify a drive that a range of sectors has become unused, which helps wear leveling run more efficiently. A cron job [wikipedia.org] is a program that runs periodically in the background, and Canonical (the publisher of Ubuntu, a distribution of the GNU/Linux operating system) wants to add a cron job that scans attached drives for unused sectors and sends TRIM commands for these sectors. It's possible for an operating system kernel to send a TRIM command for multiple ranges of sectors, but the current version of Linux doesn't know this and instead sends one range at a time. This slows down deleting files because the kernel has to notify the drive of each sector range as the file is deleted. To work around this missing feature of Linux, the cron job will TRIM when a drive isn't busy doing something else.
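      To make the cron-job idea concrete: the command such a job would wrap already ships with util-linux and can be run by hand. A sketch, assuming a root shell and a mount point of / (the exact output wording varies by version):

        # ask the filesystem for its unused ranges and send TRIM for them
        fstrim -v /
        # prints something like: /: 13231325184 bytes were trimmed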
      • by Lennie ( 16154 )

        This isn't about the Linux kernel not being smart enough, it's about crap SSDs that have horrible performance when TRIM is used during normal operations. So Linux can't use it during normal operations.

        • Even on non-crap SSDs it's better to do this in batches rather than in tiny fragments every time a sector gets freed.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      A filesystem just notes which blocks are erased, it doesn't actually erase them. A flash based disk would normally keep maintaining the contents of erased blocks, since it doesn't know which blocks are still in use and which are not. Due to the way flash memory based disks work, keeping the contents of erased blocks causes significant overhead. Flash memory is erased and written in big blocks, so to write just a small sector, an SSD has to read a big block, modify it and write it back. This read-modify-writ

      • A filesystem just notes which blocks are erased, it doesn't actually erase them. [...] With TRIM, the OS can tell the disk which blocks are no longer needed, so that the disk can treat them like empty blocks

        Why couldn't an operating system just write a big block of 0xFF bytes to an unused sector, which the SSD's controller would compress into an efficient representation of a low-information-content sector, instead of needing a dedicated command?

        • by dgatwood ( 11270 )

          Because that would not cause the underlying flash page to be erased, which means it would not improve performance later, when the flash controller runs out of pre-erased pages and has to start erasing pages on the fly during the write operation.

        • Because the drive itself has no concept of a filesystem, and wouldn't know what some specific pattern means. A sector full of only 0xFF might mean "I don't need this anymore" as well as "this file has a sector's worth of 0xFF stored there". So, no way for the drive to know where there is actual unused space.
          Using trim, the FS/OS/whatever's on the line can tell the drive "ok, this part I don't need anymore, go play with it" in a non-ambiguous way.
          • A sector full of only 0xFF might mean "I don't need this anymore" as well as "this file has a sector's worth of 0xFF stored there".

            Either way, when you read that sector back, the drive would decompress it to a string of 0xFF. This is true whether it's an "empty sector" or a row of white pixels in a BMP file.

            Using trim, the FS/OS/whatever's on the line can tell the drive "ok, this part I don't need anymore, go play with it" in a non-ambiguous way.

            What happens when the kernel attempts to read back a sector that hasn't been written since it was last TRIMmed?

        • Why couldn't an operating system just write a big block of 0xFF bytes to an unused sector, which the SSD's controller would compress into an efficient representation of a low-information-content sector, instead of needing a dedicated command?

          Why go to the trouble of implementing a command implicitly when you can implement it explicitly and avoid unintended side effects? Not to mention operating systems would still need to change the way they handle the disk to support the 0xFF method, and it would take up

          • Why go to the trouble of implementing a command implicitly when you can implement it explicitly and avoid unintended side effects?

            Because the explicit command causes unintended side effects in drives manufactured prior to the command's introduction [slashdot.org].

            Not to mention operating systems would still need to change the way they handle the disk to support the 0xFF method

            Any file system supporting "secure" deletion should be filling deleted files' sectors in the background anyway.

            • Re: (Score:3, Informative)

              by Anonymous Coward

              Why go to the trouble of implementing a command implicitly when you can implement it explicitly and avoid unintended side effects?

              Because the explicit command causes unintended side effects in drives manufactured prior to the command's introduction [slashdot.org].

              What in the fuck, you seriously consider some dude's admitted speculation as proof that this is a real risk?

              Here's some vastly more likely semi-informed speculation: Like most modern standards, SATA has many optional features, and standardized discovery methods to inform software what each device can and cannot do. If a drive says it doesn't support TRIM (or, to be more precise, lacks some new capability tag which says it can do TRIM), the OS simply never issues a TRIM command.

              It's like you (and that Immer

            • by cmurf ( 2833651 )

              Any file system supporting "secure" deletion should be filling deleted files' sectors in the background anyway.

              You don't seem to understand the basics of how SSDs work or you wouldn't have said this. Such secure file deletion doesn't actually work on SSDs. The LBAs overwritten with zeros or random data are written to different, already erased physical pages, while the original pages containing the data are simply flagged for erase. It isn't possible to directly overwrite SSD pages. They have to be erased first.

            • Because the explicit command causes unintended side effects in drives manufactured prior to the command's introduction.

              Such as? The post you linked to explicitly said they were simply guessing.

              On the other hand, compression algorithms do have plenty of weird side effects, from increased latency to randomly varying write speeds to the impossibility of estimating actual free space - because for every bit a lossless compression algorithm shaves off one file, it must add to some other file (because if a bigger

              • by tepples ( 727027 )

                for every bit a lossless compression algorithm shaves off one file, it must add to some other file

                Pigeonhole principle. I'm aware of that.

                Thus some files are actually made bigger than their size implies.

                By about one byte, a marker that the sector is uncompressed. In a real file system, this overhead of one byte per sector is made up for by real files that contain real redundancy, such as the last sector of a file (or the only sector of a small file). On average, the last sector of a file will contain a run of half a sector's worth of $00 bytes. This and other cases where sectors can be compressed allow more logical sectors to fit into a single erase page.

        • every byte from 00 thru ff is a valid data byte. writing what you think is a 'code' is still just data.

          trim tells the drive that there is NO data on this sector. ...if we had 257 binary codes that fit inside an 8-bit byte, we would not need trim ;)

        • Why couldn't an operating system just write a big block of 0xFF bytes to an unused sector

          My desktop background is a white .bmp, you insensitive clod!

        • by Lennie ( 16154 )

          The SSD can't treat 0xFF as empty because 0xFF could be part of a file.

          • by tepples ( 727027 )
            Perhaps you missed my implication: Instead of having a concept of empty sectors to begin with, just treat sectors with long runs of a constant value as highly compressible. You need to do that anyway for the last sector of a file, which contains on average half a sector of zeroes.
            • It's not the same because you are forgetting that even sectors with long runs have error detection/correction bits at the end, and a TRIM will make them 0x00 (or technically, SSD trimmed pages are written with all bits on, so 0xFF). In order to be able to (likely) quickly reuse that sector again, it must first be erased setting all bits on, then written by turning off selected bits. Writes can only change bits from 1 to 0, which is quick, and does not degrade the life of the sector. Erasing it however, is

        • Because that would require the flash to do deduplication and know that the blocks were full of FF and so could be copy-on-write (and, in your scheme, block-level compression). You're thinking of an SSD as if it were a big RAM chip, full of blocks of flash with a simple addressing scheme. It isn't. It's a load of flash cells, which wear out over time, and a very complicated controller that maps blocks to cells. The point of TRIM is not to erase the block, it is to remove the block from the remapping tabl
          • Because that would require the flash to do deduplication and know that the blocks were full of FF and so could be copy-on-write (and, in your scheme, block-level compression)

            Block-level run-length encoding is exactly what I had in mind. This way more logical sectors can be packed into one 64-128K erase page. A sector that has been compressed into "0x00, then 4095 more of the last byte" is as good as TRIMmed. It needs to be done anyway for file tails.

        • Compressible data should never hit disk. For your scheme to work, the OS's encryption layer would have to detect runs of your magic byte and send them through unencrypted. Much easier to just use an out-of-band method like TRIM.
    • by dgatwood ( 11270 ) on Tuesday November 19, 2013 @07:57PM (#45468559) Homepage Journal

      Quick terminology note: Flash storage is divided into large blocks, commonly called pages (to avoid confusion with disk blocks). Each page contains many disk blocks.

      Flash storage has an interesting property in that you can change individual bits in only a single direction (either from 0 to 1 or 1 to 0, depending on the flash type). To change it in the other direction, you must wipe an entire flash page, which means rewriting the contents of a large number of blocks. To avoid a high risk of a power failure causing the loss of data that wasn't even changing at the time, the flash controller does not do the erase and rewrite in place. Instead, it rewrites the entire page in a different physical location (with an updated copy of the changed block or blocks), and then atomically changes the block or page mapping so that the blocks are now associated with the new physical page. It then erases the original page so that it can be reused during a subsequent write operation.

      This need to erase and rewrite has a side effect, however. As the flash drive gets more and more full, it eventually runs low on pages that can be erased ahead of time, because eventually every block on the disk has had something written to it at some point in the past, even if that block is no longer actively being used by any actual file. The disk does keep some spare pages around, but that only goes so far towards fixing this problem. This means erasing pages during the write operation itself, which is a much slower operation than writing to a pre-erased page. Many of those pages, however, may contain only data that is no longer relevant—blocks from files that were deleted a long time ago. Therefore, if the flash controller could somehow know that it is safe to pre-erase those pages ahead of time, they could be ready to go when you need to write data to them.

      Unfortunately, it isn't practical for a flash controller to understand every possible file system, which makes that somewhat difficult. To solve this problem, they added a new ATA command, called TRIM. The operating system sends a TRIM command to tell the flash controller that the blocks within a certain range are no longer in use by the filesystem, which means that the flash pages that contain those blocks can be pre-erased for fast reuse.
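      Incidentally, the kernel exposes what it learned from the drive about discard behaviour through sysfs, so you can see whether TRIM is even on the table for a given device. A small sketch (the device name sda is just an example):

        # granularity of the drive's internal allocation units for discard
        cat /sys/block/sda/queue/discard_granularity
        # maximum bytes the kernel will discard in one request;
        # 0 here means the device reports no discard support at all
        cat /sys/block/sda/queue/discard_max_bytes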

  • by readacc ( 3401189 ) on Tuesday November 19, 2013 @07:29PM (#45468349)

    Windows 7 incorporated TRIM support for SSDs back in 2009. I know the Linux kernel can do it with the right mount options and has been able to for some time, but after a while you just assume distros are setting things up automatically as expected (there are very few situations where TRIM is a bad idea, particularly on a desktop-focused distro like Ubuntu).

    There's a reason I still feel like I'm using a poor man's system when using Linux on the desktop. They just don't think hard enough about automating stuff. Heck, Ubuntu (and no other distro, I believe) doesn't enable Wake-on-LAN when you shut down, whereas Windows 7 and onwards does. This is something you have to script in yourself. Why the fuck aren't distros doing things you can reliably expect in commercial operating systems!?!

    • Re: (Score:2, Informative)

      by Anonymous Coward

      I really don't know about Windows TRIM support, but it'd better do it only if the HDD supports it. For this it requires HDD-specific drivers, or at least a complete list of drives that support TRIM. This isn't necessarily available to all Linux distros.

      About the wake-on-lan thing, I can only say that on Lenovo systems, it's possible to take over the system by wake-on-lan in the default configuration (because you can boot from dhcp/tftp by default). So I'm pretty glad they didn't enable this by default. Soun

      • I really don't know about windows TRIM support, but It'd better do it only if the HDD supports it.

        What happens when the kernel sends a TRIM command to a drive that does not recognize TRIM commands?

        • I don't have the spec in front of me, but my bet is one of two things:
          1) The command is recognized in its entirety by the drive as being an unrecognized command, and either ignored or reported to the OS as an error.
          2) Undefined behavior (this *probably* does not include your hard drive animating and going on a homicidal rampage. Probably.)

          Now, how much do you want to bet that all the old HDDs out there properly recognize the TRIM command as invalid and fail gracefully?

      • If the drive doesn't support it, it just discards the command. There's no reason not to do it. Period.
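        For what it's worth, you can also check up front whether a drive advertises TRIM at all before deciding how to configure anything. A sketch that only reads the drive's identify data; device names are examples, and lsblk needs a reasonably recent util-linux for the --discard option:

          # look for "Data Set Management TRIM supported" in the output
          sudo hdparm -I /dev/sda | grep -i trim
          # non-zero DISC-GRAN/DISC-MAX columns mean discard is supported
          lsblk --discard /dev/sda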
    • by c0lo ( 1497653 )

      Windows 7 incorporated TRIM support for SSDs back in 2009.

      And, to date, it stays the same: only for SATA drives [wikipedia.org].

    • by Lennie ( 16154 )

      I encounter Windows systems that don't have Wake-on-LAN enabled on an almost daily basis. So I wouldn't be so sure.

    • The "everything turned on by default" concept is part of why Windows is bloated and insecure.

      This is changing, but part of what I like about Linux is that it makes fewer assumptions about what you are doing and assumes an (at least somewhat) skilled operator. Part of what I dislike about Ubuntu is that it makes too many assumptions about what I want to do. It's also why multiple Linux distros that target different audiences are a good thing.

      The points here are that :
      1. All optimizations or assumptions make

  • by adisakp ( 705706 ) on Tuesday November 19, 2013 @07:49PM (#45468495) Journal

    "As long as that SSD doesn't stall trying to pull blocks off the top of that queue, it really doesn't matter how deep it is. So if you have 10GB of free space on your partition, you only need to call wiper.sh / fstrim once every 10GB worth of file deletions."

    This isn't necessarily true. Earlier TRIM will improve the performance of the SSD because the drive knows about more free space -- more free space allows the drive to 1) pre-emptively erase flash, 2) coalesce fragmented blocks, 3) more efficiently combine write blocks, and 4) perform wear levelling operations with less overhead.

    Early trimming can have a similar effect to the manufacturer increasing slack space, which increases performance on nearly all SSDs.

  • What the hell reason would it not be enabled by default? I dropped an SSD in my webserver at home a year ago. I just assumed, since OS X and Windows have both supported it for YEARS, that forward-thinking Linux did. Wow.

    Now I have to go check tonight when I get home with this article as a reference
    http://askubuntu.com/questions/18903/how-to-enable-trim [askubuntu.com]

    I am shocked and appalled. We all laughed 10 years ago when M$ said installing Linux may damage your hard drive, but in this case it's true! What a sad state of affairs

    • by Microlith ( 54737 ) on Tuesday November 19, 2013 @08:12PM (#45468633)

      Linux fully supports TRIM and failure to enable it will not damage the device in any way. What will happen is the device will slow down and spend more time freeing blocks as needed if the drive is increasingly full.

      Of course, if your SSD is your boot drive and you have /home elsewhere, chances are you aren't going to suffer and current drives are significantly faster than older ones (and at their worst, still significantly faster than rotating media.)

      • Linux fully supports TRIM and failure to enable it will not damage the device in any way.

        Linux does not fully support TRIM. It is the very reason why many distros do not automatically enable "discard" in fstab. As noted in the summary: "the kernel performs TRIM to a single range, instead of vectorized list of TRIM ranges, which is what the specification calls for. In some scenarios this results in lowered performance".
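        For reference, the per-delete behaviour that distros have shied away from is just a mount option; a sketch of what enabling it would look like in /etc/fstab (UUID and options are placeholders), which is exactly what the proposed fstrim cron job avoids:

          # "discard" makes ext4 send TRIM as part of each delete/truncate
          UUID=<your-root-uuid>  /  ext4  defaults,discard  0  1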

    • by Anonymous Coward

      > osx and windows both support it for YEARS

      So you've proven yourself an irrational Linux-hater. Why are you here?

      With my MacBook, here is what I had to do to enable TRIM:

      http://www.mactrast.com/2011/07/how-to-enable-trim-support-for-all-ssds-in-os-x-lion/

      With my work laptop that is a Dell, I had to download a utility from Intel to enable TRIM:

      https://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=18455

      Care to apologize for your lies?

      • There is nothing special you do in Windows to enable TRIM support. It is supported directly in the OS and the drivers, automatically. The only time you have to do anything special in Windows for TRIM support is when you're actively using Intel SSDs in a RAID configuration using the Intel Rapid Storage driver--and even that is merely a driver update, and boom, RAID0 TRIM support passed from OS to driver to device.

        That's it. Under all other circumstances TRIM is automatically enabled and there are no e
        • And TRIM is enabled by default on OS X if you use the stock drives. If you use an after-market replacement then you do need to explicitly enable it (which sucks). With FreeBSD, it's enabled for UFS by default since 9.0 and ZFS by default since 9.2. It is also enabled in software raid configurations by default in 10.0. I'm very surprised Linux doesn't enable it by default.
    • by cmurf ( 2833651 )

      What the hell reason would it not be enabled by default?

      a.) Because the spec was poorly written, making TRIM a non-queued command, file systems can't just spit out a series of TRIM commands every time a file is deleted, because the queue has to be cleared first. This slows down everything, reads and writes. With multiple file systems per drive, a given file system doesn't necessarily know the drive is idle so some other process would need to do the delayed TRIM which is what Canonical is suggesting.

      b.) Some manufacturers have implemented very a

      • With multiple file systems per drive, a given file system doesn't necessarily know the drive is idle so some other process would need to do the delayed TRIM which is what Canonical is suggesting.

        Why would a filesystem need to know? On FreeBSD, the filesystem just spits a BIO_DELETE command into the GEOM layer, and it is up to GEOM to schedule when to dispatch it - it's free to reorder it, as long as it doesn't move it after a BIO_WRITE with an overlapping range. If the filesystem needs to know about the status of other filesystems then that's a serious layering problem. The FS should not be making the decision about whether to send the BIO_DELETE, because it's the responsibility of something low

  • I already set up a cron job to fstrim my drives, so this is a welcome addition that will save me a step during new installations.
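    One way such a job can look, for anyone wanting the same: a root crontab entry (the schedule, mount point, and log path are just examples, not necessarily what the parent poster uses):

      # sudo crontab -e -- trim the root filesystem early every Sunday
      0 3 * * 0  /sbin/fstrim -v / >> /var/log/fstrim.log 2>&1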

    • In Windows, when I perform a delete operation, the TRIM command is automatically sent along with the DELETE operation command. No scheduled tasks needed.
      • Adding to this:

        http://blogs.msdn.com/b/e7/archive/2009/05/05/support-and-q-a-for-solid-state-drives-and.aspx

        "Windows 7 requests the Trim operation for more than just file delete operations. The Trim operation is fully integrated with partition- and volume-level commands like Format and Delete, with file system commands relating to truncate and compression, and with the System Restore (aka Volume Snapshot) feature."
        • by Flammon ( 4726 )

          Hence the performance hit in Windows. Sounds similar to the discard mount option [wikipedia.org], something that I didn't want because of the performance hit while I was working. I'd much rather keep my system snappy when I'm using it and trim the drives when I'm not.

          • How is there a performance hit? I get over 500MB/second reads and writes on my Samsung 830 SSDs--each. I've pushed over 1GB/sec when I had them in a RAID0.

            There is a much higher performance hit by not "trimming" your drive. And if this isn't enabled by default, it means a vast majority of Linux users out there with SSDs are experiencing significant performance degradation that they don't even know about.
          • http://opensuse.14.x6.nabble.com/SSD-detection-when-creating-first-time-fstab-td3313048.html

            If you read the source for the information on the 'performance hit' issue, it looks like Windows 7 is not performing the TRIM command in the same manner that caused a performance hit with the 'mount -o discard' option when using ext4.

            "Also, it was assumed 9 months ago that Windows 7 did it that way. But
            since then one of the kernel devs got a sata protocol analyser and
            monitored how Windows 7 is doing it. Not l
            • by Flammon ( 4726 )

              Yes, when you compare it to the mount discard option, which is why I avoided it. A daily trim job will however give you much better performance, which is what Windows 8 is doing now. Not sure about Windows 7.

              Trimming on the fly, whatever the implementation, will always slow you down more during use than batch trimming when the system is idle.
