Undeletable files on Cluster Shared Volume

    Question

  • Hi, I somehow managed to create two files called "-d" and "-e" in the root of a Cluster Shared Volume. I'm not 100% sure, but I think I specified "-d" as a parameter to an application, which took it as the output filename.

    In any case, I now have these two files, and I cannot delete them, move them, take ownership of them, or modify their permissions, regardless of whether I'm a local Administrator or SYSTEM. This holds even after rebooting the Hyper-V hosts and moving CSV ownership.

    Explorer shows attributes AE (Archive + Encrypted):

         [screenshot omitted]

    As a Domain Admin in an elevated command prompt, I cannot do anything with the file:

         [screenshot omitted]

    Same as the System Account:

         [screenshot omitted]

    PowerShell even fails to enumerate the files:

         [screenshot omitted]

    I moved CSV ownership between nodes and rebooted all cluster nodes, but the problem persists.

    The cluster runs Windows Server 2012 R2; the volume is on a Dell Compellent SAN, attached via Fibre Channel.

    Is there anything I can do (besides migrating all VMs to a new volume)?
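    For context: a leading dash is a legal filename character on NTFS and most other filesystems, and such files can normally be created and deleted by exact path without any trouble. A minimal Python sketch (illustrative only; it does not reproduce the CSV-specific failure described here):

```python
# A filename beginning with a dash is legal; referring to it by an
# explicit path keeps it from ever being parsed as a switch.
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    path = os.path.join(root, "-d")   # exact name, no wildcards
    open(path, "w").close()           # create the file
    assert os.path.exists(path)
    os.remove(path)                   # delete by exact path
    assert not os.path.exists(path)
```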

    Thursday, March 23, 2017 12:00 PM

Answers

  • I still haven't had time to work on this, but there are obviously two options:

    A) Shut down the VMs using the disk, remove them from the cluster, remove the disk from CSV, delete the files, re-add the disk to CSV, add the VMs back to the cluster, and power up the VMs

    B) Create a new SAN volume, live-migrate all VMs using the disk to the new volume, and remove the problematic volume

    • Marked as answer by svhelden Thursday, March 30, 2017 7:44 AM
    Thursday, March 30, 2017 7:44 AM

All replies

  • Those filenames are unusual, but they're not invalid.

    Your problem appears to be with the encryption. Are you logged in with the same account that set the encryption?


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 23, 2017 12:43 PM


  • Your problem appears to be with the encryption. Are you logged in with the same account that set the encryption?

    I never set encryption ;) .. But I am logged in with the account that created these files.

    (Normally with encrypted files, I could still take ownership and see which certificate would be required for decrypting.)

    Thursday, March 23, 2017 12:50 PM
  • I'm not saying that you set encryption, but the encryption flag is still set.

    You know that you have permissions and you know that the filenames are workable. So what's left?

    I haven't worked with EFS in such a long time that I can't speak to the extent that it will affect this situation. IIRC EFS gives an access denied when someone other than the creator attempts to delete an encrypted file, so that seemed like the low-hanging fruit.

    If it's not encryption, then the next most likely culprit is that the file is open in a process. Usually, the system allows you to rename a file even if it's in use. Try that from the node that owns the CSV. The Sysinternals tools would be a more scientific way to look at it.


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 23, 2017 1:33 PM

  • If it's not encryption, then the next most likely culprit is that the file is open in a process. Usually, the system allows you to rename a file even if it's in use. Try that from the node that owns the CSV. The Sysinternals tools would be a more scientific way to look at it.

    Checked that already. The files are not open in any process.

    I noticed that I have the same files in a "normal" directory. From there I could delete them without issues (which is, from what I read, also the expected behavior - I cannot read EFS-encrypted files but I can delete them.)

    So it seems to be specific to CSVFS. Maybe these filenames aren't allowed there, or something ...

    Thursday, March 23, 2017 1:42 PM
  • I can create and delete a file named "-d" on the root of a 2012 R2 CSV. I am deleting them directly by exact filename, not sure if that's different from your process.

    Have you tried moving CSV ownership to another node and deleting it there?


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 23, 2017 1:45 PM
  • I can create and delete a file named "-d" on the root of a 2012 R2 CSV. I am deleting them directly by exact filename, not sure if that's different from your process.

    Have you tried moving CSV ownership to another node and deleting it there?

    Well, the file was created by some application, maybe there are some hidden characters in the name or something.

    I have tried moving CSV ownership to the other node and deleting from there, but it did not help.

    Thursday, March 23, 2017 1:48 PM
  • Hmm, I don't think NTFS would allow that. Spaces are valid, but not if they are leading or trailing characters. I don't think that it can hide anything from you.

    Are there other files in the root or just folders? What I'm getting at is, can you finagle it so that they would be targeted by a "del *.*" in a way that would not impact anything else?
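    One caveat with the "del *.*" approach: cmd.exe's "*.*" historically matches every name, even one without a dot, whereas stricter glob implementations require a literal dot. The difference is easy to demonstrate in Python (a sketch of strict glob semantics, not of cmd.exe itself):

```python
# Strict glob semantics: "*.*" only matches names containing a dot.
# cmd.exe's del is more permissive for legacy reasons, which is one
# way different tools can disagree about a file named "-d".
from fnmatch import fnmatchcase

assert fnmatchcase("-d", "*")           # bare * matches anything
assert not fnmatchcase("-d", "*.*")     # no dot, so no match
assert fnmatchcase("file.txt", "*.*")   # dotted names match
```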


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 23, 2017 1:56 PM
  •  

    Are there other files in the root or just folders? What I'm getting at is, can you finagle it so that they would be targeted by a "del *.*" in a way that would not impact anything else?

      

    Of course I tried that before, without success.

       

    With PowerShell, the trick works ("get-childitem" succeeds while "get-childitem *-*" fails), but I still can't delete the file.

    Thursday, March 23, 2017 2:00 PM
  • Well, the PowerShell path provider doesn't have the decades of development behind it that the command prompt does and you are working with an outlying case. I have inadvertently been able to fool the PS path provider more than a few times. I would not use PowerShell for this.

    You run into similar issues with icacls and takeown; they might be stumbling on the name as well. So even if del would otherwise work, you may not be able to get the file into a condition where del can touch it.

    If you think it might have something to do with the CSV filter, an option would be to remove the volume from CSVs and try to manipulate the files on a standard cluster disk with a drive letter. You'd need a downtime window for the VMs.


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 23, 2017 2:16 PM
  • I've had issues like this that I actually fixed with robocopy. Assuming you don't have anything else on the CSVFS, you can try this: robocopy a directory with nothing in it (or with a single test file) to the root of the target containing the problem files, using the /MIR switch. For whatever reason, robocopy was able to remove the files in those situations when I couldn't get any other command to touch them.
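    The mechanism behind the /MIR trick is simple: mirroring makes the destination identical to the source, so mirroring an empty directory deletes everything in the destination. A Python sketch of that idea (ordinary filesystem semantics assumed, which the CSV case evidently does not honour):

```python
# Sketch of robocopy /MIR with an empty source: everything in the
# target that the source lacks gets deleted.
import os
import shutil
import tempfile

def mirror_empty(target: str) -> None:
    """Make target match an empty source directory."""
    for name in os.listdir(target):
        full = os.path.join(target, name)
        if os.path.isdir(full):
            shutil.rmtree(full)
        else:
            os.remove(full)

with tempfile.TemporaryDirectory() as target:
    open(os.path.join(target, "-d"), "w").close()
    open(os.path.join(target, "-e"), "w").close()
    mirror_empty(target)
    assert os.listdir(target) == []
```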
    Thursday, March 23, 2017 3:09 PM
  • I've had issues like this that I actually fixed with robocopy. Assuming you don't have anything else on the CSVFS, you can try this: robocopy a directory with nothing in it (or with a single test file) to the root of the target containing the problem files, using the /MIR switch. For whatever reason, robocopy was able to remove the files in those situations when I couldn't get any other command to touch them.

    I had the same idea (I've used ROBOCOPY to delete folders with over-long filenames). But in this case it does not help:

    [screenshot omitted]

    Thursday, March 23, 2017 3:15 PM
  • I did some testing with icacls and takeown using the syntax from your screenshots and it all worked for me.

    Next, I removed a disk from Cluster Shared Volumes. I copied a file with the E flag to it and re-added it to CSVs. I am getting access denied errors when I try to do anything with it. After removing it from CSVs again, I can manipulate the file just fine.

    I haven't worked with EFS in over a decade and my test systems aren't configured for anything beyond defaults. However, I don't believe that EFS is supported with CSV. BitLocker is the only native encryption system that works with CSVs, as far as I know. I think you're looking at a conflict between EFS and CSV. I'm not even able to get an encrypted file onto a CSV directly, so even that appears to be a cool trick that you did.


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 23, 2017 3:31 PM
  • Thanks!

    Yes, looks tricky. Will try to remember how exactly I did that.

    Probably I'll find a slot for downtime during a weekend so that I can fix this. 

    Thursday, March 23, 2017 3:35 PM
  • Ha! I know now how I created it. I ran

    mpclaim -v -d 5

    But MPCLAIM does not take a disk number for the "-v" parameter; instead, it expects the name of an output file. The command sends the output of "mpclaim -v" to a file called "-d 5", which, for whatever reason, is encrypted and 0 bytes in size. Using only "-d" creates the "correct" file.
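    For illustration, here is a hypothetical sketch of a parser with that flaw: everything after "-v" is joined into a single output filename with no validation. This is pure speculation about the behaviour, not mpclaim's actual code:

```python
# Hypothetical: a parser that blindly joins everything after "-v"
# into the output filename would produce a file named "-d 5".
def output_filename(args):
    if "-v" not in args:
        return None
    rest = args[args.index("-v") + 1:]
    return " ".join(rest) if rest else None

assert output_filename(["-v", "-d", "5"]) == "-d 5"
assert output_filename(["-v", "-d"]) == "-d"
assert output_filename(["-v"]) is None
```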




    • Edited by svhelden Thursday, March 23, 2017 3:45 PM
    Thursday, March 23, 2017 3:44 PM
  • Wow. I'll bet that's not the expected behavior. Looks like someone took a shortcut on input sanitization.

    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 23, 2017 3:52 PM
  • So it seems, but I can't find anything strange in the Procmon log:

    https://www.amazon.de/clouddrive/share/HEPAvWCPNZkZbXAT1DwXpPhcKYrF11Et4oGk61JfMyB?ref_=cd_ph_share_link_copy

    It seems to simply create a file called "-d". I have no idea why it ends up encrypted.

    Thursday, March 23, 2017 4:03 PM
  • Well, when inputs aren't cleaned up, code branches in unintended ways. It looks like it's just passing along whatever you supply for the -v parameter to its internal routine for file creation without any further thought. The "5" probably has something to do with it. Fifth letter of the alphabet and all. ETA: that could just be coincidence. It's not like I've disassembled the EXE to see what happens.

    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.


    Thursday, March 23, 2017 4:08 PM
  • It looks like it's just passing along whatever you supply for the -v parameter to its internal routine for file creation without any further thought. The "5" probably has something to do with it.  

    I noticed that "mpclaim -v <filename>" always produces an EFS-encrypted file. And this is even documented:

           -v        Display detailed information about current configuration.
     filename        Configuration output file name.
     encrypt_option  If omitted, file will be encrypted by default.
           -n        File will not be encrypted.

    For whatever reason, writing this to a CSV leads to my issue ...

    • Edited by svhelden Thursday, March 23, 2017 4:34 PM
    Thursday, March 23, 2017 4:32 PM
  • Interesting. Its log file in \windows\system32 is encrypted as well. I guess it just really likes encrypted files, and whatever it's doing to create them bypasses the CSV filter's normal checks.

    So the lesson is, never use mpclaim -v when the current path is on a CSV or you'll get an untouchable file.


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.


    Thursday, March 23, 2017 4:34 PM
  • So it seems. I guess I'll then just create a new volume and migrate the VMs' storage over there. 

    Still, weird .. and thanks for your efforts!

    Thursday, March 23, 2017 4:38 PM
  • Well, it doesn't appear to be hurting anything, so if you can get some downtime it would be faster to pop it out of and back into CSVs during a maintenance window. If you can't, then yes, a storage migration appears to be your only choice.

    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 23, 2017 4:42 PM
  • Hi,
    Are there any updates on the issue?
    You could mark the reply as answer if it is helpful.
    Best Regards,
    Leo

    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Thursday, March 30, 2017 2:42 AM
    Moderator
  • I still haven't had time to work on this, but there are obviously two options:

    A) Shut down the VMs using the disk, remove them from the cluster, remove the disk from CSV, delete the files, re-add the disk to CSV, add the VMs back to the cluster, and power up the VMs

    B) Create a new SAN volume, live-migrate all VMs using the disk to the new volume, and remove the problematic volume

    • Marked as answer by svhelden Thursday, March 30, 2017 7:44 AM
    Thursday, March 30, 2017 7:44 AM
  • You have four choices, and you've picked the most difficult two.

    1. Do nothing. These objects are consuming an otherwise unused slot in the MFT and that's about it. They are a cosmetic nuisance.
    2. Your option A without all of that about removing and re-adding VMs. As long as the VMs are off, they'll survive the disk's removal and re-addition to CSVs just fine. Worst case scenario is that the CSV doesn't remember the path name that you originally assigned it and you have to rename it again. If you move quickly, your VMs will be off for less than a minute.

    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 30, 2017 12:55 PM
  •  

    1. Your option A without all of that about removing and re-adding VMs.  
    I wasn't sure whether the cluster allows removal of a CSV that is in use (i.e., holds VM config files) ...
    Thursday, March 30, 2017 12:57 PM
  • "In Use" meaning open handles -- it might not like that.

    "In Use" meaning has some files on it -- it doesn't care.

    I just tested it on my lab system to check my work, and all was well. The only delay was that it didn't get a drive letter right away after being removed from CSVs so I had to set that first. I stuffed it back into CSVs and was able to start the VMs on it right away.


    Eric Siron
    Altaro Hyper-V Blog
    I am an independent contributor, not an Altaro employee. I accept all responsibility for the content of my posts. You accept all responsibility for any actions that you take based on the content of my posts.

    Thursday, March 30, 2017 1:05 PM