James O'Neill: Many years ago - before on-line meant "the internet" - I annoyed a journalist in an on-line discussion. I criticized the methodology his magazine used to test file servers: each machine copied a large file, making it really a test of the server's cache effectiveness. As more machines - and hence more files - were added, performance rose to a peak; then the total of files being copied exceeded the cache size and it plummeted. This they explained as "Ethernet collisions".
I mention this because there's always a temptation to try to rip up a benchmark someone else has done (I certainly didn't use very diplomatic language back then). Single-task tests can give you an idea of how well a machine will carry out a similar task. What do file server tasks look like? Realistic tests for file servers are hard. For virtualization it is close to impossible. Take a real-world question like "I have 100 machines; when I multiply their CPU speed by their average CPU loading, they average out at 200MHz. How many servers do I need?" Obviously it's more than (100 x 200MHz = 20GHz) / (cores * clock speed) ... but how much more? You need to answer questions like "What's the overhead of running VMs?" Would 5 servers running 20 VMs each have a bigger or smaller percentage overhead than 2 servers running 50 each? And assuming you could work this out and come up with an "available CPU" figure, it still doesn't answer questions about peaks of load, i.e. "at any point in the last month, would the instantaneous sum of CPU load totalled over a set of machines exceed the available CPU on the virtualization server?" And of course we haven't even mentioned disk and network I/O.
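To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch of the naive sizing calculation in Python. The overhead and headroom figures are illustrative assumptions, not measured numbers - which is exactly the problem, since the right values are what a realistic benchmark would have to tell you.

```python
import math

# Rough VM-consolidation sizing sketch. The virt_overhead and peak_headroom
# values are assumptions for illustration only, not measurements.

def servers_needed(vm_count, avg_vm_mhz, cores_per_server, core_clock_mhz,
                   virt_overhead=0.15, peak_headroom=0.30):
    """Estimate how many virtualization hosts are needed.

    vm_count         -- number of machines to consolidate (e.g. 100)
    avg_vm_mhz       -- average CPU demand per machine in MHz (e.g. 200)
    cores_per_server -- physical cores per host
    core_clock_mhz   -- clock speed of each core in MHz
    virt_overhead    -- assumed fractional CPU cost of running VMs (a guess)
    peak_headroom    -- assumed capacity held back for load peaks (a guess)
    """
    # Total average demand: e.g. 100 machines * 200MHz = 20,000MHz (20GHz).
    total_demand_mhz = vm_count * avg_vm_mhz

    # Usable capacity per host, after subtracting the assumed virtualization
    # overhead and the headroom reserved for instantaneous peaks.
    raw_capacity_mhz = cores_per_server * core_clock_mhz
    usable_capacity_mhz = raw_capacity_mhz * (1 - virt_overhead) * (1 - peak_headroom)

    # Round up: you can't buy a fraction of a server.
    return math.ceil(total_demand_mhz / usable_capacity_mhz)

if __name__ == "__main__":
    # 100 VMs averaging 200MHz, on hypothetical hosts with 8 cores at 2,500MHz.
    print(servers_needed(100, 200, cores_per_server=8, core_clock_mhz=2500))
```

Even with numbers plugged in, this only answers the average case: a coincident peak across enough machines at one instant can still blow through the budget, and it says nothing at all about disk or network I/O.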