Thursday, March 7, 2013 15:24
We are currently running SQL 2012 Enterprise for our data warehouse. Our database is 500GB of data. All of our data is partitioned and compressed using page compression. Our largest fact table is 1.3 billion rows of data.
I have just been informed that we need to make changes to our licensing structure. Due to the per-core cost of SQL Server 2012 Enterprise edition, we are looking at whether we could drop back to the BI edition.
Our main issue is IO and disk space. We would obviously lose compression and partitioning, which will have a knock-on effect on large reads and system performance. As we currently understand it, it would be a lot cheaper to buy a flash storage unit for the database and run BI edition than it would be to pay the Enterprise costs.
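For anyone weighing the same trade-off, SQL Server can estimate how much space page compression is actually saving before you commit to removing it. A minimal sketch using the built-in `sp_estimate_data_compression_savings` procedure (the `dbo.FactSales` table name and partition number are hypothetical placeholders):

```sql
-- Estimate the size of a hypothetical fact table if page compression
-- were removed, to gauge the disk-space impact of dropping Enterprise.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'FactSales',
    @index_id         = 1,        -- 1 = the clustered index
    @partition_number = 1,        -- estimate one partition at a time
    @data_compression = 'NONE';   -- target state: no compression
```

The result set reports current versus estimated sizes, which you can multiply out across partitions to size the uncompressed database.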
Does anyone have similar issues with large tables? How do you find the performance and behaviour of such large tables?
Friday, March 8, 2013 07:14 (Moderator)
Hi Michael Schreuder,
I suggest building a clustered index on the large table to improve performance. For more information about how to deal with large tables in SQL Server, please see: http://sqlcat.com/sqlcat/b/top10lists/archive/2008/02/06/top-10-best-practices-for-building-a-large-scale-relational-data-warehouse.aspx
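The suggestion above might look like the following in T-SQL (the table, column, and partition-scheme names are hypothetical; partitioning and page compression both require Enterprise edition on SQL Server 2012, so this only applies if you stay on Enterprise):

```sql
-- Build a clustered index on the fact table's date key, aligned to the
-- existing partition scheme so partition elimination can be used,
-- and keep page compression for the large sequential reads.
CREATE CLUSTERED INDEX CIX_FactSales_DateKey
    ON dbo.FactSales (DateKey)
    WITH (DATA_COMPRESSION = PAGE, SORT_IN_TEMPDB = ON)
    ON ps_DateKey (DateKey);   -- hypothetical partition scheme
```

Aligning the clustered index with the partition scheme is what lets queries that filter on `DateKey` touch only the relevant partitions instead of scanning all 1.3 billion rows.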
TechNet Community Support
- Marked as answer by Eileen Zhao, Microsoft Contingent Staff, Moderator, Thursday, March 14, 2013 09:16