“The requested operation could not be completed due to a file system limitation.”

    Question

  • “The requested operation could not be completed due to a file system limitation.” This is the only message that you will get. Researching the question indicates the file system (good old NTFS) has a "limitation" such that when the targeted volume is "too fragmented" your read or write will fail.

    Can you enumerate the file system limitations? Can you point to the limitation that was violated, so I can avoid doing what the file system doesn't like? Could the file system muddle along without terminating my program when a limitation is reached? How much fragmentation is too much fragmentation? Will defragging solve the problem? How can I be sure that fragmentation IS the problem?

    OS in question: Windows Server 2008 R2, with reads and writes over the network via a share. Simple, sequential writes mostly.
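
    One way to answer "How can I be sure that fragmentation IS the problem?" is to count the on-disk extents of the file that fails. Below is a minimal Win32/C++ sketch (the access rights and buffer size are assumptions, nothing prescribed by Microsoft) that queries the extent count with FSCTL_GET_RETRIEVAL_POINTERS; a count in the hundreds of thousands or millions is the "heavily fragmented file" case the replies below point to.

        // extent_count.cpp - sketch: count the on-disk extents (fragments) of one file.
        // Win32/C++; the command-line path is supplied by the caller.
        #include <windows.h>
        #include <winioctl.h>
        #include <cstdio>

        int wmain(int argc, wchar_t** argv)
        {
            if (argc < 2) { wprintf(L"usage: extent_count <file>\n"); return 1; }

            HANDLE h = CreateFileW(argv[1], FILE_READ_ATTRIBUTES,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                   OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE) { wprintf(L"open failed: %lu\n", GetLastError()); return 1; }

            STARTING_VCN_INPUT_BUFFER in = {};   // start from virtual cluster 0
            union { RETRIEVAL_POINTERS_BUFFER rp; BYTE raw[64 * 1024]; } buf;  // aligned output buffer
            DWORD extents = 0, bytes = 0;

            for (;;) {
                BOOL ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                                          &in, sizeof(in), &buf, sizeof(buf), &bytes, NULL);
                if (!ok && GetLastError() != ERROR_MORE_DATA)
                    break;                       // finished, or the file is resident/empty
                extents += buf.rp.ExtentCount;
                if (ok)
                    break;                       // every extent fit in this call
                // More extents remain: resume after the last one returned.
                in.StartingVcn = buf.rp.Extents[buf.rp.ExtentCount - 1].NextVcn;
            }
            CloseHandle(h);
            wprintf(L"%ls: %lu extents\n", argv[1], extents);
            return 0;
        }

    On Server 2008 R2, defrag <volume> /A /V reports similar per-volume fragmentation figures without writing any code.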

    Monday, September 14, 2009 9:26 PM

All replies

  • Hi William,

    Please use Disk Defragmenter to defrag the NTFS volume on your side, which should fix this issue.

    Please refer to:

    A heavily fragmented file in an NTFS volume may not grow beyond a certain size

    http://support.microsoft.com/kb/967351

    For the limitations of NTFS, please check the following online guide.

    How NTFS Works

    http://technet.microsoft.com/en-us/library/cc781134(WS.10).aspx

    Hope it helps.


    This posting is provided "AS IS" with no warranties, and confers no rights.
    • Proposed as answer by David Shen Tuesday, September 15, 2009 2:10 PM
    • Marked as answer by David Shen Wednesday, September 16, 2009 1:49 AM
    Tuesday, September 15, 2009 7:42 AM
  • In my research, I found one poster who knowingly commented that defragging would not help at all. My approach was to try it and see. Some two weeks before Mr. Shen added his recommendation, I tried it. I also refrained from trying to write multiple file streams to the same device simultaneously.

    Not knowing what the eventual size of the totally sequential file will be, I must let it grow in small increments, which causes fragmentation when other threads are also writing at the same time. I guess the flash of insight that led Microsoft to implement a file system as a complex networked database has its drawbacks. Once the logical-to-physical mapping tables overwhelm the physical space on the volume, the file system has a fit and shuts down the application with an almost useless error message. Instead of "file system limitation" it could read "too many clusters" or "overly fragmented." But, no.

    I was once employed to put patches into a mainframe operating system under a similar set of circumstances. The file system was much simpler, but files could still become fragmented when they start small and grow incrementally larger over time. I put a patch into the operating system that made the size of each incremental increase 10% of the current size of the file. That made it less often necessary to defragment the highly expensive rotating magnetic storage devices (Winchester disks).
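
    A minimal Win32/C++ sketch of that same growth strategy on NTFS, assuming the writing application controls when the file is extended (the 10% factor comes from the post above; the 64 MB floor is an arbitrary assumption):

        // grow_by_tenth.cpp - sketch: extend a file in steps of ~10% of its current size
        // before appending, so NTFS allocates fewer, larger runs (fewer mapping pairs).
        #include <windows.h>

        // Ensure the file is at least `needed` bytes long, growing in 10% steps.
        // Note: the caller must track its own write offset; after this call the
        // end-of-file lies beyond the data actually written so far.
        bool EnsureCapacity(HANDLE file, LONGLONG needed)
        {
            LARGE_INTEGER size;
            if (!GetFileSizeEx(file, &size)) return false;
            if (size.QuadPart >= needed) return true;

            const LONGLONG minStep = 64LL * 1024 * 1024;   // 64 MB floor (assumption)
            LONGLONG newSize = size.QuadPart;
            while (newSize < needed) {
                LONGLONG step = newSize / 10;              // grow by 10% of current size
                newSize += (step > minStep) ? step : minStep;
            }

            LARGE_INTEGER pos;
            pos.QuadPart = newSize;
            // Moving end-of-file out lets NTFS pick one larger run instead of many tiny ones.
            return SetFilePointerEx(file, pos, NULL, FILE_BEGIN) && SetEndOfFile(file);
        }

    One final SetEndOfFile at the real length, once writing completes, trims the unused tail.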
    Thursday, September 17, 2009 4:09 PM
  • I found a more accurate answer to this problem:

    I was copying a 1-terabyte file to a brand-new, freshly formatted 5-terabyte drive. The message could not relate to fragmentation. In fact, the volume was flagged for NTFS compression. I unflagged the volume and the copy went well. I remembered that Exchange databases and some other special files, like SQL Server databases, could not handle a compressed volume, and at the time I did not understand why.

    After some thinking, I suspect that compression of huge files probably cannot be handled because of the mathematical operations involved. It is probably the same for encrypted volumes, as they involve similar operations.

    I was using Server 2008 R2: I am surprised that the message is not more intuitive in such a recent system...
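
    The compressed case is likely the same underlying limit as the fragmented case: NTFS compression stores data in small compression units (16 clusters, 64 KB at the default cluster size), so a terabyte-scale compressed file accumulates an enormous number of on-disk runs and can run out of room for its mapping metadata. A minimal Win32/C++ sketch that reports a file's compression flag and clears it with FSCTL_SET_COMPRESSION (the command-line handling is a placeholder):

        // uncompress_file.cpp - sketch: report and clear the NTFS compression flag on one file.
        #include <windows.h>
        #include <winioctl.h>
        #include <cstdio>

        int wmain(int argc, wchar_t** argv)
        {
            if (argc < 2) { wprintf(L"usage: uncompress_file <file>\n"); return 1; }

            DWORD attrs = GetFileAttributesW(argv[1]);
            if (attrs == INVALID_FILE_ATTRIBUTES) { wprintf(L"stat failed: %lu\n", GetLastError()); return 1; }
            wprintf(L"compressed: %ls\n", (attrs & FILE_ATTRIBUTE_COMPRESSED) ? L"yes" : L"no");
            if (!(attrs & FILE_ATTRIBUTE_COMPRESSED)) return 0;

            HANDLE h = CreateFileW(argv[1], GENERIC_READ | GENERIC_WRITE,
                                   FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE) { wprintf(L"open failed: %lu\n", GetLastError()); return 1; }

            USHORT format = COMPRESSION_FORMAT_NONE;   // decompress this file in place
            DWORD bytes = 0;
            if (DeviceIoControl(h, FSCTL_SET_COMPRESSION, &format, sizeof(format),
                                NULL, 0, &bytes, NULL))
                wprintf(L"compression cleared\n");
            else
                wprintf(L"FSCTL_SET_COMPRESSION failed: %lu\n", GetLastError());
            CloseHandle(h);
            return 0;
        }

    Clearing the volume's compression checkbox, as described above, does the same thing at volume scope; compact /U /S can batch-decompress files that are already compressed.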

    Thursday, April 01, 2010 9:17 PM
  • I have the same problem on my SSD - I don't think that I should defrag my SSD. Here are the details:

     

    On my OCZ 240 GB PCI-Express SSD, I create huge files, and after writing about two files of 45 GB each, the software throws the exception "The requested operation could not be completed due to a file system limitation", while there is a lot of space left.

    I tried to investigate the problem, and some people said it may be a fragmentation problem. I tried to solve it by removing all the files from this partition, so that appending to the huge files would not cause fragmentation, but the problem is still there.

    In this partition I have 40 files: two big ones (the files I am trying to create) and the rest small (metadata). There are 6 fragmented files, and the total number of file fragments is 3,768,243 - this information is from defrag /v - and the NTFS cluster size is 4 KB.

    I have Windows Server 2008 R2 - is there any advice?
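
    If an upper bound on the final size is known before writing, one mitigation is to reserve the file's whole allocation up front, so sequential appends write into space allocated in a single request instead of growing the file a little at a time. A minimal Win32/C++ sketch, assuming Vista/Server 2008 or later; the path is a placeholder and the 45 GB figure is taken from the post above:

        // preallocate.cpp - sketch: reserve a file's full allocation before writing,
        // so sequential appends do not keep adding new extents.
        #include <windows.h>
        #include <cstdio>

        int wmain()
        {
            const LONGLONG kTargetBytes = 45LL * 1024 * 1024 * 1024;   // ~45 GB, as in the post

            HANDLE h = CreateFileW(L"D:\\huge.dat",                    // placeholder path
                                   GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                   CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            if (h == INVALID_HANDLE_VALUE) { wprintf(L"open failed: %lu\n", GetLastError()); return 1; }

            // Reserve the clusters now; end-of-file stays at 0, so nothing is zero-filled.
            // The reservation lasts for the lifetime of the handle; space beyond the final
            // end-of-file is released when the handle is closed.
            FILE_ALLOCATION_INFO alloc = {};
            alloc.AllocationSize.QuadPart = kTargetBytes;
            if (!SetFileInformationByHandle(h, FileAllocationInfo, &alloc, sizeof(alloc)))
                wprintf(L"preallocation failed: %lu\n", GetLastError());

            // ... sequential WriteFile calls go here, against the already-reserved space ...

            CloseHandle(h);
            return 0;
        }

    A larger NTFS cluster size (for example, formatting the data volume with /A:64K) also reduces the number of mapping pairs needed for a file of the same size and is the other workaround commonly suggested for this error.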


    Haytam El-fadeel
    Wednesday, August 31, 2011 10:22 AM