Win2k8 Server Disk Defragmenter

    Question

  • The versions of Disk Defragmenter in Win2k3, Win2k and WinXP provide a report indicating which files are fragmented and the extent of that fragmentation. The versions in Vista and Win2k8 do not appear to have this anymore. Is this something that was taken out, or is there some setting I am missing? Quite honestly, I would really like to know what I am getting myself in for if I need to run this on a database server.

    TIA,

    Greg Wilkerson
    Monday, November 03, 2008 9:34 PM

All replies

  • Hi Greg,

     

    Advanced users can use the command line tool Defrag.exe to generate detailed reports and perform other advanced tasks.
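
    For example, running the following from an elevated command prompt prints an analysis report without actually defragmenting the volume (the exact switches are listed by defrag /? and may differ slightly between Vista and Win2k8):

        defrag C: -a -v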

     

    For more information about Defrag.exe, please refer to the following articles:

     

    Running Disk Defragmenter from Command Prompt

    http://support.gateway.com/s/software/MICROSOF/vista/7515418/7515418su266.shtml

     

    Get Detailed Statistics By Running Windows Vista Disk Defragmenter From The Command Prompt

    http://www.watchingthenet.com/get-detailed-statistics-by-running-windows-vista-disk-defragmenter-from-the-command-prompt.html

     

    Hope it helps.

     

    Tim Quan - MSFT

     

    Tuesday, November 04, 2008 4:01 AM
    Moderator
  •  Thanks Tim,

    But that really doesn't help at all. Ultimately, I need to know how many fragments are present in my database files. The comment about "file fragments larger than 64MB" concerns me, too. Database file fragments are almost always greater than 64MB in size, many in the GB range. Defragmenting a database file system is a time-consuming, performance-degrading process and must be a planned and coordinated event. So, knowing the fragmentation status of the database files is important.

    I guess the -w and -a options are mutually exclusive. I had hoped that using the -w option would address fragments > 64MB. No joy!
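
    For anyone following along, these are the two switches in question as my Vista box describes them (C: is just a placeholder for the data volume, and I have not verified that Win2k8 uses identical switches):

        defrag C: -a -v    analysis only: prints fragmentation statistics, but notes that fragments larger than 64MB are excluded
        defrag C: -w       full defragmentation: consolidates fragments regardless of size; apparently cannot be combined with -a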

    Will the older defrag utility run on w2k8/vista?  I'll try it on my Vista machine.

    Thanks again,

    Greg
    Friday, November 07, 2008 1:02 PM
  • Sorry Tim,

    You can't get off that easy.  If you can't answer the post, leave it open for someone who can.

    Thanks,

    Greg
    Monday, November 10, 2008 2:10 PM
  • Hi, Greg

    Why do you need to know how many fragments are present in your database files? What are you trying to accomplish?
    I'm asking because maybe there is an alternative way, if we know why you need to know the number of fragments...

    many thanks
    --Malu
    Malu Menezes
    Messages in this forum are provided "AS IS" with no warranties
    Storage Team at Microsoft
    Wednesday, November 12, 2008 8:20 PM
  •  Hi Malu,

    What I am trying to accomplish is to actively manage the database file fragmentation that results from either planned or unplanned data file growths. File defragmentation for database files is not something that I care to set up to run automatically. The defrag process is very disk intensive, and database response plummets when it runs. I typically have to schedule a maintenance window when defrags are necessary. With the database file fragments typically being GBs in size, the amount of time required to complete the defrag can be extensive. A defrag is not necessary if the file only has a few, I don't know, say 5 fragments. But if it has 10 or 20 or 50, I do need to defrag it. That is where the reporting of the old defrag utility was valuable. Part of being a DBA is actively managing potential performance issues. This is one of those.

    Also, can you address the comment from the defrag utility about fragments > 64MB? In virtually all cases, the database file fragments are going to be larger than 64MB.

    Please refer me to the appropriate technical articles if my thoughts are misguided or if SQL Server 2005 and 2008 are somehow less sensitive to database file fragmentation.

    Thanks,

    Greg

     

    Thursday, November 13, 2008 3:14 AM
  • For the Microsoft guys who are listening, are there any updates on this? Is there a reason I should not care? Is there a reason why I should just trust the defragger to run on its own and be assured that when it does decide to run, it will not bring my database response to a crawl? Is there a reason why the reporting capability was removed?

    I'm just looking for some answers.

    Thanks,

    Greg Wilkerson
    Sunday, November 23, 2008 9:11 PM
  • I am listening, and while I am an ex-Microsoftee, what I am saying here does not represent any official Microsoft position.

    The line of thought is that defragging runs/fragments/pieces of files that are > 64MB in length provides very limited benefit. Assume that you have a database file stored as a 64MB run, then SomeOtherFile1, then another 64MB run, and so on. If the database is being read/written in, say, 4KB chunks, you will get quite a few reads/writes done sequentially within the first 64MB run, then need one disk head repositioning, then do quite a few more reads/writes without any disk head repositioning. Avoiding that single repositioning by defragmenting the file can itself cause quite a lot of I/O and possibly some application downtime if that I/O is deemed too disruptive.
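
    To put rough numbers on it (back-of-envelope only; the 10 ms seek time and 100 MB/s sequential throughput below are just assumed round figures):

        64 MB / 4 KB per I/O   =  16,384 sequential I/Os per run
        64 MB / 100 MB/s       ~  0.64 s of transfer time per run
        1 extra seek           ~  10 ms, i.e. well under 2% added per run

    So on a spinning disk, one repositioning per 64MB run is mostly noise, which is the reasoning behind the 64MB cutoff.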

    I don't think SQL 2005/2008 are any less sensitive.

    Does this help?
    www.msftmvp.com
    Thursday, December 11, 2008 5:52 AM
    Moderator
  • Hi Dilip,


    Interesting line of thought. I can see that point, to a point. I do have concerns about database files that may have a LOT of fragments, especially large databases used for reporting, where processes execute hundreds of thousands of reads. To appease those concerns, I will have to put some numbers together. But as the current tool is implemented, I still have no way of knowing, and that is disturbing. Why hide the fragment data? What possible advantage is provided to the customer by removing this? Now I, and my customers, have less information available on which to base intelligent decisions.

    Thanks,

    Greg Wilkerson
    Sunday, December 14, 2008 2:44 PM
  • Download the Contig.exe command-line utility from Sysinternals.

    It will allow you to analyze and defrag individual files, including online SQL Server database and log files.
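
    For example (switch names taken from the versions of Contig I have used; check the usage text for your copy, and the paths below are just placeholders):

        contig -a D:\SQLData\MyDatabase.mdf     analyze only: reports how many fragments the file is in
        contig D:\SQLData\MyDatabase.mdf        defragment just that one file
        contig -a -s "D:\SQLData\*.ndf"         analyze every matching file in the directory tree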

     

     

    Friday, November 26, 2010 5:48 PM
  • Greg,

    You need to defragment INSIDE the SQL Server databases, not use an external tool.


    MCP, MCTS, MCSA,MCSE
    Sunday, March 06, 2011 2:59 PM
  • Greg,

    You need to defragment INSIDE the SQL Server databases, not use an external tool.


    MCP, MCTS, MCSA,MCSE

    Twilliamsen, I believe you're talking about database fragmentation of indexes etc. Greg is talking about file system fragmentation. I'm facing the same problem, taking over some SQL servers where the DBs were set to default growth size AND autoshrink.. :----(

    Greg, I'm using Contig.exe from Sysinternals now and so far so good. I agree that MS made a horrible decision to get rid of the details in the Defragmenter. With the new Windows versions, it seems like the trend is to hide EVERYTHING.. menus, options, etc. Admins like to see what's going on; we like reports, graphs, etc. I hate when they decide for me what is important or not, whether I'm right or wrong.

    Thursday, April 12, 2012 8:52 PM
  • Paul,

    You are correct.  I have a handle on index/table fragmentation.  The physical file fragmentation was my concern.

    Greg

    Thursday, April 12, 2012 9:00 PM
  • Greg,

    Have you found a tool or a solution?

    I'm considering alternatives such as a restore or a copy of the files to a contiguous block on the disk... This may seem pointless now with contig.exe, as I can defrag the individual files. But the thought of "transplanting" the file to an "open space" seems attractive :)

    Friday, April 13, 2012 12:58 AM
  • Paul,

    I actually like your "contig.exe" solution. Otherwise, my only recourse would be to do what you say: restore the database and hope the physical fragmentation is reduced.

    With the advent and acceptance of SSDs, this is all pointless. I'm starting to do more and more SSD implementations, and when using those drives, fragmentation doesn't seem to matter.

    This is off subject, but I've yet to try to defrag an SSD. I'll bet that's lightning fast! I'm going to try that today!

    Greg

    Friday, April 13, 2012 11:37 AM
  • You should probably hold off on those SSD defrags. SSDs store data across multiple NAND chips and locations; basically, by design the drive will be fragmented. However, unlike your standard mechanical drive, there is no read/write head to move and "search" for the data, so defragmentation is not much of a concern on an SSD. Also, on top of it not making much of a difference, a NAND cell is only good for a certain number of write cycles. Defragging an SSD will in effect reduce the lifespan of the drive.

    Tuesday, March 11, 2014 5:36 PM