DPM 2016, MBS and Storage Spaces

All replies

  • Hi Ryan,

    Did you ever manage to get an answer to your question?

    I would like to run a similar setup, with the DPM Storage Disks sitting on Storage Spaces.

    Cheers,

    Pawel

    Friday, December 30, 2016 9:40 AM
  • We used this article to set up dedup for DPM 2016. As part of it, you create the disks, pass them into the VM, and turn them into a storage space in simple mode. It seems to work well enough.
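
    If it helps, the guest-side steps look roughly like this in PowerShell (pool, disk and volume names here are just placeholders for illustration):

    # Inside the DPM VM: pool the disks passed in from the host and build a simple (non-resilient) space.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "DPMPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "DPMPool" -FriendlyName "DPMDisk" -ResiliencySettingName Simple -UseMaximumSize
    # Initialize, partition and format the new space; DPM 2016 Modern Backup Storage expects ReFS.
    Get-VirtualDisk -FriendlyName "DPMDisk" | Get-Disk | Initialize-Disk -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem ReFS -NewFileSystemLabel "DPMStorage"
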
    Friday, January 20, 2017 8:23 PM
  • Hello Ryan,

    As mentioned earlier in this thread, please follow my article to set up dedup for DPM 2016. It works extremely well.

    Hope that helps!

    Thanks,

    -Charbel

    Sunday, January 22, 2017 3:51 PM
  • I have nothing but tremendous performance issues in UR1 with MBS, using a configuration similar to the one described in your article, without dedup on the host. The DPM guest uses virtually all of the CPU and memory assigned to it; whatever I give it, it eventually consumes (most of the CPU use is in msdpm.exe). Hardware is not the limiting factor, as the host has dual six-core Xeon processors and 80 GB of RAM. The same hardware was used for DPM 2012 R2, except it had only 1 processor and 16 GB of RAM at that time. The system was not upgraded from 2012 R2; it was installed fresh. In a typical day I have around 4,000 failed jobs reported. Most of these occur because jobs do not finish and another job for the same data source is started, resulting in two jobs running at the same time for a single data source. This was never an issue in 2012 R2 with an identical backup load.

    Until I got a private hotfix, Hyper-V 2016 systems using RCT were very problematic and were constantly undergoing consistency checks that consumed high network bandwidth. I still cannot get BMR backups to work consistently. They report network-related errors caused by DPM breaking the connection between the DPM server and the system being backed up. There is no network problem, as I can schedule Windows Server Backup jobs on these same systems that write to a shared folder on the Windows Server 2016 system hosting the DPM VM, without ever failing.

    I am anxiously awaiting UR2 for a better experience.

    Thursday, February 2, 2017 10:05 PM
  • Yes, I am facing performance issues as well with UR1. The DPM team is aware of this issue, and UR2 contains a performance fix.

    The fix should be out in a couple of weeks.

    Thanks!

    Friday, February 3, 2017 4:47 AM
  • To me, it seems like more than one issue. I hope they are all fixed so that this system is at least usable and can be trusted for backup purposes.
    Friday, February 3, 2017 2:28 PM
  • Charbel, 

    • In your post you set the "UsageType" to "Backup", but in the last screenshot it's set to "Hyper-V". Which one should be used?
    • I've configured the VHDXs exactly the same way as you did on the Hyper-V host, but I'm not seeing any savings and only a single file is in policy. The script below runs for a minute and then stops:
    Start-DedupJob -Type Optimization -Memory 50 -Volume E:
    • This is what it looks like. 
    • Only one file is InPolicy. Is that because the other files change every day? (A quick status check is sketched below.)
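
    For reference, the savings and in-policy file counts can be checked from PowerShell on the Hyper-V host, along these lines (E: being the dedup volume from the command above):

    # On the Hyper-V host: show space savings and how many of the DPM VHDX files are currently in policy.
    Get-DedupStatus -Volume "E:" | Format-List Volume, SavedSpace, InPolicyFilesCount, OptimizedFilesCount
    # Show the volume's dedup settings, including the usage type and minimum file age.
    Get-DedupVolume -Volume "E:" | Format-List Volume, UsageType, MinimumFileAgeDays, SavingsRate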

    Thanks


    Wednesday, March 29, 2017 4:47 PM
  • Hello Nordland,

    Please set the "UsageType" to "Backup".
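
    In PowerShell, that corresponds to enabling dedup on the volume holding the DPM VHDX files with the Backup usage type (E: is just an example drive letter):

    # On the Hyper-V host: enable deduplication with the Backup usage type on the DPM storage volume.
    Enable-DedupVolume -Volume "E:" -UsageType Backup
    # Confirm the setting took effect.
    Get-DedupVolume -Volume "E:" | Format-List Volume, Enabled, UsageType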

    Do you already have data backed up to those VHDXs?

    Hope that helps!

    -Charbel

    Wednesday, March 29, 2017 5:17 PM
  • Yes, there is already data on the VHDXs, but not a lot; I could redo them if needed.

    Also, I never made any changes inside DPM to the "datasourcetype", as I didn't think it was important.


    Thanks

    (Side note: there is no way to create a login on your site to comment.)

    Wednesday, March 29, 2017 5:22 PM
  • Hello Nordland,

    If there is not enough data, then you won't see a lot of savings.

    Yes, you can create a login on my site and send comments.

    Many visitors are able to send comments.

    Please do so.

    Thanks,

    -Charbel

    Wednesday, March 29, 2017 5:26 PM
  • I currently have 400 GB of data in the 12 VHDX files, so they are not empty.

    In your policy, do you have the "Deduplicate files older than:" option set? If so, what is it set to? I'm a little confused about why more files are not "InPolicy".

    (could you post the link to sign up, I've looked but can't find it)

    Thanks again!


    Wednesday, March 29, 2017 5:30 PM
  • I changed the "Deduplicate files older than:" setting from 1 day to 0 days. Now all of those files are in policy and are being deduplicated.
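
    For anyone hitting the same thing, that GUI option maps to the volume's MinimumFileAgeDays setting, so the equivalent PowerShell on the Hyper-V host would be roughly:

    # On the Hyper-V host: deduplicate files regardless of age ("Deduplicate files older than" = 0 days).
    Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 0
    # Re-run optimization and check on the job.
    Start-DedupJob -Volume "E:" -Type Optimization -Memory 50
    Get-DedupJob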

    Thanks!

    Wednesday, March 29, 2017 5:57 PM
  • Charbel:

    Are you using UR2?  I still cannot get System State + BMR backups to run consistently with DPM 2016 UR2.  Are you doing BMR backups with any success?  My error is usually one of these:

    Windows Backup encountered an error when accessing the remote shared folder. Please retry the operation after making sure that the remote shared folder is available and accessible. Detailed error: An unexpected network error occurred

    or:

    The semaphore timeout period has expired

    These data sources run a variety of operating systems.

    I no longer believe it has to do with DPM.  It may be something related to ReFS.  On my DPM server that is failing, I am also scheduling BMR backups using Windows Server Backup from the failing sources, placing their output on a shared folder on this same DPM server, but on a different physical pass-through disk.  These scheduled backups worked 100% of the time until I changed the format from NTFS to ReFS.  Now I'm having frequent errors such as these even without DPM involved.
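
    For context, those scheduled test backups are plain Windows Server Backup BMR/System State jobs, i.e. something along these lines (the share path is a placeholder):

    # On a protected server: one-off BMR + System State backup to a network share (placeholder path).
    wbadmin start backup -backupTarget:\\DPMHOST\BMRShare -allCritical -systemState -quiet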

    I've also tried creating a separate shared folder on the same ReFS drive used by DPM to store BMRs, and those backups fail as well.  This DPM drive is set up as described in your article, using about thirty 1 TB VHDX files on the Hyper-V host, but without dedup enabled.

    Wednesday, March 29, 2017 7:26 PM