My belief is that the answers are the same for Windows Server 2008 and Windows Server 2003 as well.
The theoretical limits for NTFS are documented in the resource kits.
There is no hard formula to decide what a practical limit is, but I can offer some pointers as to which considerations matter:
- NTFS is well documented to use B-trees. One can work out the B-tree balancing algorithms and come up with a number where things become unacceptably slow. Unfortunately, your CPU speed, system load, bus I/O capability, cache hits/misses, hard disk speed, etc., not to mention what one person considers unacceptably slow, are all ill-defined. (A rough depth calculation follows this list.)
- Some gurus believe that handle-based renames and deletes are better than path-based renames/deletes, because NTFS already has the relevant data structures located for handle-based APIs. (A sketch of a handle-based rename follows this list.)
- You can assume that somewhere along the line somebody is using hashes. The higher the number of files in a directory, the higher the chance of a hash collision. (A quick birthday-paradox estimate follows this list.)
- Believe it or not, even the names of the files matter. An application that generates "ABCDEFG.00001", "ABCDEFG.00002", and so on would be pretty bad if your system was also generating short (8.3) file names, not to mention hash collisions.
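
To put a rough shape on the B-tree point, here is a small back-of-the-envelope sketch in plain C, nothing NTFS-specific. A balanced tree with fan-out f needs roughly log-base-f(n) levels per lookup; the fan-out values below are pure assumptions, since the real number depends on file-name lengths and index record size, and every extra level is potentially another disk I/O on a cache miss.

/*
 * Rough sketch only: estimates how many tree levels a directory lookup
 * touches, given an ASSUMED fan-out (entries per index block). The real
 * NTFS fan-out depends on file-name length and index record size, so the
 * numbers are illustrative, not authoritative.
 */
#include <math.h>
#include <stdio.h>

static int tree_depth(double files, double fanout)
{
    /* A balanced tree with fan-out f holds roughly f^d entries at depth d. */
    return (int)ceil(log(files) / log(fanout));
}

int main(void)
{
    const double files[]   = { 100e3, 1e6, 3.2e6, 5e6 };
    const double fanouts[] = { 20.0, 50.0, 100.0 };  /* assumed entries per block */

    for (size_t i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
        for (size_t j = 0; j < sizeof(fanouts) / sizeof(fanouts[0]); j++) {
            printf("%10.0f files, fan-out %3.0f -> ~%d levels per lookup\n",
                   files[i], fanouts[j], tree_depth(files[i], fanouts[j]));
        }
    }
    return 0;
}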
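
On the handle-versus-path point, this is a minimal sketch of what a handle-based rename looks like, as opposed to the usual path-based MoveFileEx(old, new). It assumes Windows Vista / Server 2008 or later, since SetFileInformationByHandle is not available on Server 2003, and the paths in main() are made up purely for illustration.

/*
 * Sketch of a handle-based rename. Assumptions: Vista / Server 2008 or
 * later, Unicode paths, and the caller can open the file with DELETE
 * access. Error handling is reduced to the bare minimum.
 */
#define _WIN32_WINNT 0x0600   /* SetFileInformationByHandle needs Vista+ */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

static BOOL RenameByHandle(LPCWSTR oldPath, LPCWSTR newPath)
{
    HANDLE h = CreateFileW(oldPath,
                           DELETE,                      /* needed for rename */
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, OPEN_EXISTING,
                           FILE_FLAG_BACKUP_SEMANTICS,  /* also lets you open directories */
                           NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    /* FILE_RENAME_INFO carries a variable-length file name after the struct. */
    size_t nameBytes = wcslen(newPath) * sizeof(WCHAR);
    size_t bufSize   = sizeof(FILE_RENAME_INFO) + nameBytes;
    FILE_RENAME_INFO *info = (FILE_RENAME_INFO *)calloc(1, bufSize);
    if (!info) { CloseHandle(h); return FALSE; }

    info->ReplaceIfExists = FALSE;
    info->RootDirectory   = NULL;
    info->FileNameLength  = (DWORD)nameBytes;
    memcpy(info->FileName, newPath, nameBytes);

    BOOL ok = SetFileInformationByHandle(h, FileRenameInfo, info, (DWORD)bufSize);

    free(info);
    CloseHandle(h);
    return ok;
}

int main(void)
{
    /* Hypothetical paths, purely for illustration. */
    if (!RenameByHandle(L"C:\\temp\\old.txt", L"C:\\temp\\new.txt"))
        fprintf(stderr, "rename failed, error %lu\n", GetLastError());
    return 0;
}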
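
And to illustrate why very large directories invite hash collisions, here is a standard birthday-paradox estimate. The hash widths are assumptions chosen for illustration; the only point is that collisions become likely long before the file count approaches the size of the hash space.

/*
 * Back-of-the-envelope birthday-paradox estimate: probability of at least
 * one collision among n names, assuming a HYPOTHETICAL uniform hash of a
 * given width somewhere in the stack.
 */
#include <math.h>
#include <stdio.h>

static double collision_probability(double n, double buckets)
{
    /* P(collision) ~= 1 - exp(-n*(n-1) / (2*buckets)) */
    return 1.0 - exp(-n * (n - 1.0) / (2.0 * buckets));
}

int main(void)
{
    const double counts[] = { 100e3, 1e6, 3.2e6, 5e6 };

    for (size_t i = 0; i < sizeof(counts) / sizeof(counts[0]); i++) {
        printf("%9.0f names: P(collision) = %.4f with a 32-bit hash, %.2e with a 64-bit hash\n",
               counts[i],
               collision_probability(counts[i], pow(2.0, 32)),
               collision_probability(counts[i], pow(2.0, 64)));
    }
    return 0;
}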
You are already past my personal comfort zone with 3.2 million files, not to mention 5 million.
Dilip
www.msftmvp.com