Nick White: Measuring the performance of an operating system is a tricky thing. At the same time, it's the right and necessary thing to do, because performance is one of many criteria important to customers. Part of the trick is timing test execution against the product cycle so that the results are as meaningful as possible for customers; this helps them make a better decision by drawing on the full array of available information. As one example, about a year ago we commissioned a firm called Principled Technologies to conduct a study comparing Windows XP SP2 to Windows Vista RTM. That study found that the performance of the two operating systems was within the same range for many tasks that home and business users frequently perform under real-world conditions.
My point is that we waited to conduct these benchmarking tests until Windows Vista had reached the RTM milestone in the product cycle, as this allowed us to provide our customers with the most meaningful data available at the time -- the data most likely to directly affect their decision to upgrade to Windows Vista. We run a whole range of performance tests at every stage of the OS development process, but, as a general rule, we avoid sharing benchmark results for software that hasn't reached RTM (i.e., final code). This is why we have not to date published any benchmark findings (nor commissioned anyone to do so) on the performance improvements brought by Windows Vista SP1. Publishing benchmarks of Windows Vista SP1's performance now wouldn't be a worthwhile exercise for our customers, as the code is still in development and, as far as benchmarking is concerned, remains a moving target.