Question about NTFS and fragmentation of MFT records

    Question

  • Context

    In a small file stored on an NTFS partition, the locations of the clusters in use by the file are stored in the same MFT record as the rest of the information about the file (name, size, last modified, etc).  However, that list of locations can grow to be very large if the file becomes very fragmented.  For example: instead of storing 1 "data run" that starts at LCN #300,000 and runs for 1,000 clusters (which NTFS can store very efficiently), you could (theoretically) have 1,000 data runs, each 1 cluster long, positioned all over the disk.

    Now, as the number of data runs climbs, eventually they won't all fit in that record with the rest of the data.  When that happens, NTFS allocates a second record in the MFT, and has the "base" record point to it (via an ATTRIBUTE_LIST).  As the file/fragmentation gets even larger, 1 record might not be enough, so NTFS will allocate even more records.  When using the default size of MFT records (1k), I'm seeing a max of ~200 data runs in a record.

    The problem

    I'm seeing files that have multiple records allocated to hold data runs (which I would expect).  But instead of the ~200 data runs per record I'm expecting, each of these MFT records only holds a single data run.  In an extreme example, I've got a file with 637 MFT records allocated, all with exactly 1 data run on them.  So instead of taking up 4 records in the MFT, it's using 637.  Which means that when I walk the file, I'll not only be reading each of the pages of data from the file, NTFS is going to have to do an additional 637 reads to find out where the data is.  Ouch.

    My questions

    1. What causes this to happen to some files and not others?  And even to some parts of a file and not others (I've got a file that has 6 records with 1 data run apiece, and another 7 records that are completely full).
    2. (More importantly) What API can I use to "defrag" these 637 records back to the 4 it should take?

    Things that don't work

    • Using FSCTL_MOVE_FILE to defrag the file will move the clusters that hold the file data next to each other.  But it will NOT cause the MFT records to coalesce.  Intentionally fragging then defragging the file data doesn't work either.
    • "fsutil repair initiate" on an affected file does not cause the records to coalesce.  Presumably the associated DeviceIoControl won't help either.
    • Presumably copying the file, deleting the original, and renaming the copy would work.  But this is not a practical solution.  I need to be able to tell NTFS to clean up the file's records without copying gigabytes of data around.
    • FSCTL_INITIATE_FILE_METADATA_OPTIMIZATION sounds like it might do what I need (from the name).  But unfortunately it is only supported on W10 and is totally undocumented.  I need a solution that works for W7 & up.  Documentation is also good.

    Tidbits

    • I'm seeing this behavior on 2 W7 machines and a W8.
    • The more use the computer has seen, the more affected files there are.
    • Oddly, c:\Windows\inf\setupapi.dev.log shows the problem on all three machines.
    • One of the machines has an SSD, the others do not.
    • The files are neither compressed nor sparse.

    Friday, May 18, 2018 01:39

All replies

  • Note that the MFT is pre-reserved at a percentage of your total disk space. If you completely run out of free space Windows will take some of that back for files. On the other hand, if the MFT gets full Windows will grab another chunk of disk to grow it. The MFT does not shrink as files are deleted, so if Windows had to grow the MFT (you had tons of small files) it may be taking up more of the disk than it did previously. Again, if Windows starts running out of free space it will steal some from MFT reserved space. You can see that space (marked reserved) when you use the defrag tool.

    How NTFS reserves space for its Master File Table (MFT)

    https://support.microsoft.com/en-sg/help/174619/how-ntfs-reserves-space-for-its-master-file-table-mft

    How do you defragment the MFT on an NTFS disk?

    https://superuser.com/questions/316003/how-do-you-defragment-the-mft-on-an-ntfs-disk

    try to use the Microsoft Sysinternals tool: Contig v1.8

    https://docs.microsoft.com/en-us/sysinternals/downloads/contig

    contig.exe c:\$mft

    Please Note: Since the website is not hosted by Microsoft, the link may change without notice. Microsoft does not guarantee the accuracy of this information.

    Regards


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Tuesday, May 22, 2018 07:35
    Moderator
  • Thank you for the response. I was beginning to worry no one was going to answer at all.

    Unfortunately, this appears to be a generic "MFT/defrag" response rather than something that addresses the specific question I asked.

    I already know how to use FSCTL_MOVE_FILE to defragment files on NTFS.  There are a number of good resources available for that.  But optimizing the data in foo.txt isn't the same thing as optimizing the MFT record for that file.  While you might like to think that "NTFS just takes care of all that for you," experimentation suggests that it doesn't.  Don't get me wrong: It works.  It just doesn't keep the MFT as optimally stored as it should.

    Specifically: Once a file has allocated additional MFT records to hold information on all the fragments in a file, they apparently never get released.  You might expect that when the defragmentation is complete and the need for those extra records is gone that NTFS would remove them, but apparently it doesn't.

    So I'm not looking for information about how to defragment a file.  I've got that part.  I'm looking for how to compact the MFT record for the file once the defragmentation for that file is complete.  That's what I was hoping the (currently undocumented) FSCTL_INITIATE_FILE_METADATA_OPTIMIZATION would do.

    Information on that control code, or other ways to induce NTFS to compact/optimize the MFT record for a file would be appreciated.

    Tuesday, May 22, 2018 22:16
    Windows defrag will create more problems for you. The solution I used and recommend is Diskeeper 18. It prevents files from getting fragmented in the first place, so there is no need to defrag, and it also provides DRAM caching.

    www.diskeeper.co.in

    Friday, August 10, 2018 05:16