MDS has two deployment configurations:
Most of the computational work for MDS is done in the database layer, so in most cases increasing capacity means moving to a more powerful SQL Server computer.
Master Data Services was designed to work with data that changes relatively slowly. Data that is involved in a high volume of transactions (which in some cases is stored in fact tables) should not be stored in MDS.
From a performance perspective, MDS will likely not process more than 50K distinct changes per day per model, where a distinct change means a separate call (either to entity-based staging or to the WCF API). If significant changes are required, it is recommended that you perform them in batches, either through entity-based staging or through the WCF API. Both interfaces are tuned for batch operations and will perform much better than making a separate call for each change.
If you need to perform more than 50K calls per day per model, then it is recommended that you perform a proof of concept on the designated hardware.
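To illustrate the batched pattern, the following is a minimal T-SQL sketch of an entity-based staging load. It assumes a hypothetical entity whose staging table name was set to Customer when the entity was created, so that MDS generated the stg.Customer_Leaf table and the stg.udp_Customer_Leaf stored procedure; the source table, batch tag, and version name are likewise placeholders to adapt to your own model.

    -- Stage a batch of members with one set-based insert rather than per-member calls.
    INSERT INTO stg.Customer_Leaf (ImportType, ImportStatus_ID, BatchTag, Code, Name)
    SELECT 0,                            -- ImportType 0: create new members, update existing ones
           0,                            -- ImportStatus_ID 0: ready to be processed
           N'CustomerBatch01',           -- tag used to process and track this batch
           SourceCode,
           SourceName
    FROM   SourceDb.dbo.CustomerExtract; -- hypothetical source table

    -- Process the entire batch with a single staging call.
    EXEC stg.udp_Customer_Leaf
         @VersionName = N'VERSION_1',
         @LogFlag     = 1,               -- log transactions for this batch
         @BatchTag    = N'CustomerBatch01';

Loading and processing members in one batch like this is far cheaper than issuing a separate staging or API call per member.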
The following factors will have a direct impact on MDS performance:
Note: There is more information on row-level security in the “Impact of Row-Level Security on Performance” section later in this document.
Medium-range hardware should easily handle this capacity. However, when using a virtual hard disk (VHD), it is recommended that you perform a proof of concept, monitor the host server over time, and verify that there is sufficient spare capacity.
For the large capacity model, see the recommendations in the "Recommended Hardware" section below. At this level, it is recommended that you perform a proof of concept.
The following are tips for securing MDS using hierarchy-member security:
HP ProLiant DL360 G7
2 Intel Xeon E5606 CPUs (quad-core, 2.13 GHz, 80 W)
24 GB RAM (6 x 4 GB PC3-10600E)
256 MB Cache module for P410i
Enable Hyper-Threading and Turbo Boost
Embedded P410i (SAS array controller) supports RAID 0 and 1 (part of the hardware configuration)
8 x 10K RPM 2.5” HDD
*Storing multiple models may require adding more RAM to the server.
The real challenge is knowing how many IOPS (I/O operations per second) your storage will deliver, and what the physical storage configuration behind the virtual disk is.
Also make sure that you dedicate sufficient memory and CPU, as described above.
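As a rough way to see what I/O the MDS database is actually receiving, you can sample SQL Server's virtual file stats DMV, as in the sketch below. The database name MDS is an assumption; substitute your own. Sampling the counters twice and dividing the deltas by the elapsed seconds gives an approximate IOPS figure.

    -- Snapshot cumulative I/O counters for the MDS database files.
    -- Sample twice and divide the deltas by elapsed seconds to estimate IOPS.
    SELECT mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM   sys.dm_io_virtual_file_stats(DB_ID(N'MDS'), NULL) AS vfs
    JOIN   sys.master_files AS mf
           ON  mf.database_id = vfs.database_id
           AND mf.file_id     = vfs.file_id;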
The tests were performed with up to seven concurrent users, where each user was creating, updating, or reading members at full speed.
All times listed below are in seconds.
The main performance impact of business rules occurs during first-time validation or after one or more business rules change. In these cases, the validation process must validate every member.
There is no simple way to compute how much each rule adds to the overall computation time, because the computation is done in batches.
Note that subsequent validation runs will only re-validate members that have changed.
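For reference, a full validation pass can also be started from the database tier with the mdm.udpValidateModel stored procedure. The sketch below follows the documented calling pattern; the model name, user name, and the choice of the latest version are placeholders to adapt to your installation.

    -- Resolve the IDs that the validation procedure expects
    -- (model and user names below are placeholders).
    DECLARE @User_ID    INT = (SELECT ID FROM mdm.tblUser
                               WHERE UserName = N'DOMAIN\mds_admin');
    DECLARE @Model_ID   INT = (SELECT TOP 1 Model_ID FROM mdm.viw_SYSTEM_SCHEMA_VERSION
                               WHERE Model_Name = N'Customer');
    DECLARE @Version_ID INT = (SELECT MAX(ID) FROM mdm.viw_SYSTEM_SCHEMA_VERSION
                               WHERE Model_ID = @Model_ID);

    -- Validate all members in the version
    -- (the trailing flag value of 1 follows the documented example).
    EXECUTE mdm.udpValidateModel @User_ID, @Model_ID, @Version_ID, 1;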
On the 7M-member Customer (Long) model, we had 12 different rules (default values, mandatory fields, uniqueness, concatenation, and more), and it took about 2 hours to validate the entire entity.
The Cumulative Update 1 (CU1) release of Microsoft SQL Server 2012 will include a set of performance improvements for these scenarios.
With those improvements, the performance results that were measured for a model with the same schema as the Customer (Long) model included the following: