Disk space used

  • Question

  • For the Utilization by resource Simulation result, does the utilization refer to disk throughput utilization or disk space utilization?  If it is for disk throughput utilization, how can I find the disk space utilization?

    It would be really good to know how big the Exchange databases will grow and how many log files will be produced over a given period.

    Thanks.

    Ben


    Thursday, October 20, 2005 5:51 AM


All replies

  •  Ben Jackson wrote:
    For the Utilization by resource Simulation result, does the utilization refer to disk throughput utilization or disk space utilization? If it is for disk throughput utilization, how can I find the disk space utilization?

    It would be really good to know how big the Exchange databases will grow and how many log files will be produced over a given period.


    The utilization reported by SCCP refers to disk I/O. You get a count of bytes read/written per second and the percentage of time that the disk device is "busy". SCCP Beta 2 does not support disk space utilization, but we are looking into this for a future release.
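    As an aside, here is a minimal sketch of what those two I/O metrics look like when sampled on a live machine, using the third-party Python psutil package (my own illustration; SCCP derives its figures from the simulation model, not from code like this):

        # Minimal sketch: sample the two disk I/O metrics described above,
        # bytes read/written per second and the share of time the device
        # is "busy". Uses the third-party psutil package; not part of SCCP.
        import time
        import psutil

        INTERVAL = 5  # seconds between samples

        before = psutil.disk_io_counters()
        start = time.time()
        time.sleep(INTERVAL)
        after = psutil.disk_io_counters()
        elapsed = time.time() - start

        read_bps = (after.read_bytes - before.read_bytes) / elapsed
        write_bps = (after.write_bytes - before.write_bytes) / elapsed
        print(f"read: {read_bps:,.0f} B/s, write: {write_bps:,.0f} B/s")

        # busy_time (milliseconds) is only exposed on some platforms,
        # e.g. Linux; "busy" percentage is busy time over wall-clock time.
        if hasattr(after, "busy_time"):
            busy_pct = (after.busy_time - before.busy_time) / (elapsed * 1000) * 100
            print(f"busy: {busy_pct:.1f}%")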




    Thursday, October 20, 2005 3:52 PM
  •  Ben Jackson wrote:
    It would be really good to know how big the Exchange databases will grow and how many log files will be produced over a given period.


    The tool actually calculates the disk space requirement for log and data based on the number of mailboxes and the other usage parameters. It then compares the required disk space with the available storage space in the servers or SAN (depending on where the data and log files are mapped to). You will get a model validation warning (in the area at the bottom of the model editor screen) if the required storage space is greater than what is available.
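    To make the comparison concrete, here is a minimal sketch of that validation step in Python, with hypothetical parameter names and a deliberately simple per-mailbox growth model (the actual SCCP calculation is internal and more detailed):

        # Sketch of the storage validation described above. All names and
        # figures are hypothetical; SCCP's internal model is more detailed.
        def required_storage_gb(mailboxes, db_mb_per_mailbox_per_day,
                                logs_per_mailbox_per_day, log_file_mb,
                                retention_days):
            """Estimate data + log space needed over the retention window."""
            db_mb = mailboxes * db_mb_per_mailbox_per_day * retention_days
            log_mb = (mailboxes * logs_per_mailbox_per_day
                      * log_file_mb * retention_days)
            return (db_mb + log_mb) / 1024

        def validate(available_gb, required_gb):
            # Mirrors the model validation warning shown at the bottom of
            # the model editor when required space exceeds availability.
            if required_gb > available_gb:
                print(f"WARNING: {required_gb:.0f} GB required, but only "
                      f"{available_gb:.0f} GB is available")
            else:
                print(f"OK: {required_gb:.0f} GB required, "
                      f"{available_gb:.0f} GB available")

        # Example: 1000 mailboxes, 30-day window, 5 MB Exchange log files.
        validate(available_gb=288,
                 required_gb=required_storage_gb(1000, 1.0, 2, 5.0, 30))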

    Thursday, October 20, 2005 10:28 PM
  • Thanks.  But does it display what the actual database (and log) size will be if you don't get a model violation?

    Thursday, October 20, 2005 11:22 PM
  •  Ben Jackson wrote:
    Thanks.  But does it display what the actual database (and log) size will be if you don't get a model violation?



    No, currently it doesn't. But as Steven mentioned before, we are considering this for a future release.
    Thursday, October 20, 2005 11:31 PM
  • Pavel,

    This was an important post for me as I have been simulating some MOM 2005 configurations and I was a bit worried when my database volume was 80% used on a 288GB partition!  That's a little bit higher than the 30GB recommended.

    I would also like to see the disk usage added to the tool.

    Cheers
    Dave

    Thursday, February 9, 2006 3:22 PM
  • One other question on disk usage.  Does the tool, when estimating disk space usage, take into consideration the 30 GB recommendation, specifically that 40% of that 30 GB should remain free?

    Cheers
    Dave

    Thursday, February 9, 2006 8:55 PM
  •  Dave wrote:
    One other question on disk usage. Does the tool, when estimating disk space usage, take into consideration the 30 GB recommendation, specifically that 40% of that 30 GB should remain free?


    Could you please clarify your question?

    Cheers
    Dave

    Thursday, February 9, 2006 10:14 PM
  • Sure.

    The database for MOM 2005 has a recommended limit of 30 GB, and it is also recommended that 40% of that 30 GB remain free.  In other words, you should use at most 18 GB of the 30 GB for data.  Does the capacity planner take this into consideration when it is doing the simulation and/or validity check?

    Cheers
    Dave

    Thursday, February 9, 2006 11:11 PM
  • The capacity planner takes this 40% free space into account initially when recommending the necessary disk space, along with the user-specified number of days to retain data. The calculation is as follows: total necessary disk space = (40% free-space factor) * (number of days to retain data) * (disk space consumed per day by the specified load).
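    As a worked illustration of that formula (made-up numbers; reading the "40% free-space factor" as the multiplier 1 / (1 - 0.40) ≈ 1.67 that keeps 40% of the resulting space free is my interpretation):

        # Worked example of the sizing formula above, hypothetical inputs.
        FREE_SPACE_FRACTION = 0.40   # 40% of the disk should remain free
        retention_days = 30          # days to retain data
        daily_usage_gb = 0.6         # disk space consumed per day by the load

        free_space_factor = 1 / (1 - FREE_SPACE_FRACTION)   # ~1.67
        required_gb = free_space_factor * retention_days * daily_usage_gb
        print(f"required disk space: {required_gb:.0f} GB")  # 30 GB

    With these inputs, 30 days of data occupy 18 GB, and reserving 40% free space brings the recommended size to 30 GB, matching the MOM figures discussed earlier.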

    The required disk space calculation that occurs during pre-simulation validation is the same as the one used during sizing. For example, if the initial sizing is done with the default data retention factor of 30 days, you can then change this parameter in the usage profile to, say, 100 days, and you may get a validation error about insufficient disk space when you simulate.
    Friday, February 10, 2006 4:18 PM
  •  Steven Rosaria - MSFT wrote:
    The required disk space calculation that occurs during pre-simulation validation is the same as the one used during sizing. For example, if the initial sizing is done with the default data retention factor of 30 days, you can then change this parameter in the usage profile to, say, 100 days, and you may get a validation error about insufficient disk space when you simulate.

    Steven,

    I'm not seeing this behavior unfortunately.  I've got a rather large MOM deployment that I am architecting; close to 3000 servers will be monitored through MOM.  Using the MOM 2005 Sizing tool, I can only keep my database under 30 GB by keeping the retention factor set to 6 days.  In the Capacity Planner, I have it set to the maximum, 60 days, and the simulation completes with no errors.  I have also changed the number of events to a higher value.

    Any suggestions on how I can resolve the differences?

    Cheers
    Dave

    Friday, February 10, 2006 4:35 PM
  • For a given deployment, SCCP calculates the number of disks necessary for disk capacity, i.e. the amount of disk space that will be needed. At the same time, SCCP also computes the number of disks necessary to handle the throughput; in other words, based on the specified load, SCCP determines how many disk spindles are needed to handle the projected disk utilization. The final number of disks that SCCP recommends is the maximum of these two numbers (see the sketch after the two steps below). You can try this out as follows:
    1. To find out the number of disks needed for capacity, select a disk with very small storage capacity in the Specify hardware preference page of the wizard. The recommendation you get for the number of disks will be based on how much storage space your deployment needs.
    2. To find out the number of disks needed for throughput, select a disk with very large storage capacity in the Specify hardware preference page of the wizard. In this case, the recommendation will not be bound by the storage size of individual disks, but rather by the actual disk throughput that the deployment is expected to have.
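    Here is a minimal sketch of that rule (hypothetical names and per-disk figures; SCCP's actual model also accounts for RAID layout, I/O patterns, and so on):

        # Sketch of the "max of capacity and throughput" disk-count rule
        # described above. Per-disk figures are hypothetical.
        from math import ceil

        def disks_needed(required_gb, required_iops, disk_gb, disk_iops):
            for_capacity = ceil(required_gb / disk_gb)        # criterion 1
            for_throughput = ceil(required_iops / disk_iops)  # criterion 2
            return max(for_capacity, for_throughput)

        # Small 36 GB disks: the capacity criterion dominates (9 > 7).
        print(disks_needed(required_gb=290, required_iops=1200,
                           disk_gb=36, disk_iops=180))    # -> 9

        # Large 146 GB disks: the throughput criterion dominates (7 > 2).
        print(disks_needed(required_gb=290, required_iops=1200,
                           disk_gb=146, disk_iops=180))   # -> 7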

    For your case I tried a simple experiment, as I don't have the full details of your deployment. If you go through the MOM wizard and specify that you are monitoring 3000 Windows servers and are using SCSI 320, 15000 RPM, 36 GB disks with all other parameters at their default values, SCCP suggests 2 volumes, each with 8 disks, for a total of 290 GB. As explained above, this will be the number for capacity.
    Next, go back to the Specify hardware preference page, but now select a SCSI 320, 15000 RPM, 146 GB disk configuration. SCCP will recommend 2 volumes, each consisting of 8 disks, for a total of 1168 GB. This is the number of disk spindles needed for disk throughput. In this particular case, you get the same number of disks based on both capacity and throughput. You can also assess the capacity requirement directly by attaching an artificially small disk of 1 GB in the Model Wizard and reading the required space from the resulting validation warning message.

    SCCP does not take into account the disk space actually being used by the MOM database when running a simulation; instead, it makes a recommendation for the required disk space based solely on the capacity and throughput criteria mentioned earlier. This means that the total disk size recommended for a deployment may seem excessive, but it ensures that the throughput requirement for the predicted workload is satisfied.

    Another item to take into consideration is that the MOM sizer is based on the average workload of a wide variety of deployments. In other words, the total number of events, alerts, etc. in the MOM sizer was averaged across various deployments and distilled into a single number. SCCP allows the user to specify the individual number of data samples collected for each of its management packs, but at this point it is not yet known what values to enter in the SCCP MOM profile such that the workload will exactly match the assumptions of the MOM sizer.
    Saturday, February 11, 2006 12:38 AM
  • Steven,

    Thanks very much for this.  I think I understand the process the tool uses now.

    Cheers
    Dave

    Monday, February 13, 2006 3:40 PM