NTFS Chkdsk Best Practices and Performance

Claus Joergensen, one of the founding fathers of Storage Server, had a great post today about a new white paper, available here, discussing best practices and guidance for NTFS volumes. The paper also has details on self-healing NTFS and Chkdsk execution times on Windows Server 2008 R2.

When planning file server deployments, we are often asked questions such as “How large can I make my volumes?” or “How long will it take to repair a volume?” This white paper helps answer these questions.

Table of Contents:

  • Self-Healing and Chkdsk
  • How to run Chkdsk
  • Chkdsk Exit Codes
  • Improving General Availability of the Server
  • Chkdsk performance on 2008 R2
  • Block Caching Improvements in 2008 R2
  • Effect of Volume Size on execution time of Chkdsk
  • Effect of Number of files and Different OS versions on execution time of Chkdsk
  • Effect of Physical Memory at different Number of files on execution time
  • Effect of short file names on Chkdsk execution time
  • Effect of Enabling/Disabling short file names
  • Conclusion
  • Call to Action
  • Resources   

If you are planning a file server or are looking to upgrade an existing Windows file deployment to Windows Server 2008 R2, you should consult the white paper. It outlines how Windows Server 2008 R2 can easily scale to support 15 TB file systems with 10 million files! Even with hundreds of millions of files, Chkdsk execution times are really fast. My favorite statistic is that a volume with 300 million files can complete Chkdsk in about 6 hours, which is so much faster than the old days.
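As a quick illustration of the "How to run Chkdsk" and "Chkdsk Exit Codes" topics the paper covers, here is a minimal sketch. The actual Windows command appears only as a comment (the drive letter D: is an assumption), and the shell function simply maps Chkdsk's documented exit codes to human-readable messages:

```shell
# Hypothetical sketch, not taken from the paper. On the server you would run
#     chkdsk D: /f
# and then inspect %errorlevel%. The function below maps Chkdsk's
# documented exit codes to messages:
#   0 = no errors found
#   1 = errors found and fixed
#   2 = disk cleanup performed, or skipped because /f was not specified
#   3 = errors found but not fixed (or the disk could not be checked)
describe_chkdsk_exit() {
  case "$1" in
    0) echo "No errors were found." ;;
    1) echo "Errors were found and fixed." ;;
    2) echo "Disk cleanup was performed, or /f was not specified." ;;
    3) echo "Errors were found but not fixed; rerun with /f." ;;
    *) echo "Unknown exit code: $1" ;;
  esac
}

describe_chkdsk_exit 0   # prints "No errors were found."
```

Note that running `chkdsk` with `/f` on a mounted volume requires taking it offline (or scheduling the check at next boot), which is exactly why the execution-time numbers in the paper matter for availability planning.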

SOURCE: http://blogs.technet.com/b/storageserver/archive/2011/02/23/guidance-for-sizing-ntfs-volumes.aspx
