How to defrag large files


  • Hi Guys,

    first up, my apologies if this is the incorrect forum, but it seemed the closest.

    I need to defrag some fairly large files.  These are SQL 2005 MDF files from our SharePoint content DBs.  They're about 25-30 gig in size and currently sit on a partition (D:) that doesn't have enough space to defrag them in place.  So I copied them off to another partition (E:), ran defrag on the D: partition (which then reported sufficient contiguous space), then copied one of the DB files back.  Then I discovered that the copy had fragmented the file into 12,000+ fragments!

    Help! If a file copy is fragmenting the file, how am I supposed to defrag it?
    Will increasing the partition size really help?

    Am I better off splitting the SQL DB across multiple smaller files which could potentially be defraged within the current free space?

    Any ideas?

    • Edited by Craig Humphrey Tuesday, March 02, 2010 3:45 AM defrage -> defrag
    Tuesday, March 02, 2010 3:39 AM

All replies

  • Sorry, I am not sure which OS you are running SQL 2005 on.

    The OS versions have evolved in terms of the built-in defrag solution. I believe that from Vista onwards, the Microsoft defragmenter is satisfied once a file's fragments are 64MB or larger. To oversimplify: a 128MB file that is contiguous performs about the same as a 128MB file stored in two 64MB fragments, so the belief is that making it fully contiguous will not help much.

    So one argument is that maybe your 12,000 fragments are OK, if they are all 64MB or larger.
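    That said, a quick back-of-envelope check (using the 25 gig figure quoted in the original post as an assumption; the exact size may differ) suggests the fragments here are nowhere near 64MB on average:

```python
# Back-of-envelope: could 12,000+ fragments all be 64MB or larger?
# Figures taken from the original post (25 gig is the low end quoted).
file_size_mb = 25 * 1024      # 25 gig MDF, expressed in MB
fragments = 12_000

avg_fragment_mb = file_size_mb / fragments
print(f"average fragment size: {avg_fragment_mb:.1f} MB")
```

    If the average comes out at only a couple of MB, most fragments must be well below the 64MB threshold, so defragmenting probably is worthwhile in this case.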

    Third-party products may also help.

    You might also want to try Microsoft's contig.exe - I believe it's part of the Sysinternals tools Microsoft acquired. Contig.exe can defrag a single file, which is what you want.

    If this helps, great! If not, I am happy to continue the conversation in this forum or privately

    Dilip and VHD tools at
    Tuesday, March 02, 2010 5:31 PM
  • Hi Dilip,

    Sorry, it's on Server 2003 R2/SP2 x86.

    Funnily enough, I moved some smaller DB files off (~3 gig each) and moved them back (single fragment each, but two very different locations on the disk), then moved the big DB file back and hoorah, it only split into 83 fragments...

    I had already tried contig, but it only works when there's sufficient contiguous free space to begin with; it doesn't consolidate the free space itself.

    As for the 64MB fragments, I guess I'd need a better tool to check this. Plus, it being a SQL DB, there's no guarantee that:
    A) a 64MB fragment falls on a data boundary (i.e. it's not splitting up an index or table), and
    B) the fragments are still sequential, so that the drive head isn't "butterflying" back and forth.

    And yes, I'm ignoring that under the file system it's all sitting on a SAN and I have no idea how many spindles, etc. it's spread over (probably just one), or the underlying block size.

    It just seemed very odd that what should have been a contiguous write to a contiguous piece of free space turned into a badly fragmented mess, which will no doubt impact the growth of not just that DB file, but also the other DB files that co-exist on the same filesystem.

    And yeah, I've been re-tuning my growth settings for SQL DB files...
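    The growth settings matter here because every autogrow event asks the filesystem for a fresh allocation, and each one can land in a different spot. A rough worst-case illustration (the sizes and increments below are example figures, not recommendations):

```python
import math

def max_growth_fragments(final_size_mb: int, initial_size_mb: int,
                         growth_mb: int) -> int:
    """Worst-case number of autogrow events (each one a potential new
    fragment) to grow a file from initial_size_mb to final_size_mb
    in growth_mb increments."""
    return math.ceil((final_size_mb - initial_size_mb) / growth_mb)

# Growing a 25 gig MDF from a 1 gig initial size:
print(max_growth_fragments(25 * 1024, 1024, 1))     # 1 MB autogrow increments
print(max_growth_fragments(25 * 1024, 1024, 1024))  # 1 GB autogrow increments
```

    A tiny autogrow increment can mean tens of thousands of growth events, while a large increment (or presizing the file outright) keeps the count down to a handful.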

    Wednesday, March 03, 2010 8:16 PM