Recent Performance Gains as a Result of Our New SSD Platform

Two weeks ago we announced our new ultra-fast SSD architecture, and we’ve been thrilled with all of the great comments, feedback, and praise. We are always happy to hear how excited our customers are when they try a high-performance cloud server for the first time, or see their existing servers run faster. We re-ran the DaCapo benchmarks that we published a few months ago, and here’s how the results compared directly to our previous numbers:

[Chart: DaCapo benchmark results, new SSD architecture vs. previous results]

On average we saw a 6.15% performance increase across the board. We then factored those numbers back into the normalized compute value comparison to put them in a rationalized cost perspective, as seen below. You can see that the new SSD architecture adds even more bang for your buck than even our last tests showed.
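As a rough sketch of the arithmetic behind a comparison like this: normalize the benchmark score by what the instance costs, then apply the measured gain. The function name, prices, and scores below are hypothetical placeholders, not our published formula or figures.

```python
# Sketch: "normalized compute value" treated as benchmark score per dollar.
# All prices and scores are hypothetical placeholders for illustration.

def normalized_compute_value(score: float, monthly_price: float) -> float:
    """Benchmark score delivered per dollar per month (higher is better)."""
    return score / monthly_price

old_score = 100.0                # hypothetical baseline composite score
new_score = old_score * 1.0615   # apply the 6.15% average gain measured above
price = 40.0                     # hypothetical monthly price in USD

print(round(normalized_compute_value(new_score, price), 3))  # 2.654
```

A higher score at the same price moves an instance up the value ranking, which is why a modest performance gain shifts the whole cost comparison.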

We’re very supportive of what Serverbear.com is doing by crowd-sourcing independent, standardized test results. This style of benchmarking seems to be gaining a lot of momentum, so after running their test suite on one of our 8GB-HC instances, we compiled some of the data we found there. It shows what seems like an interesting and counterintuitive relationship between IOPS and cost across cost-competitive VM sizes.
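To illustrate the kind of comparison we mean, here is a minimal sketch that ranks VM sizes by IOPS delivered per dollar. All instance names and numbers below are hypothetical placeholders, not Serverbear data:

```python
# Sketch: ranking VMs by IOPS per dollar of monthly cost.
# Names and figures are hypothetical, for illustration only.

vms = [
    {"name": "provider-a-4gb", "iops": 12_000, "monthly_usd": 40.0},
    {"name": "provider-b-4gb", "iops": 9_500,  "monthly_usd": 35.0},
    {"name": "provider-c-8gb", "iops": 20_000, "monthly_usd": 80.0},
]

# Sort best value first; a bigger, pricier VM doesn't always win this ratio,
# which is the counterintuitive trend described above.
for vm in sorted(vms, key=lambda v: v["iops"] / v["monthly_usd"], reverse=True):
    print(f'{vm["name"]}: {vm["iops"] / vm["monthly_usd"]:.1f} IOPS per dollar')
```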

[Chart: IOPS vs. cost across cost-competitive VM sizes, with raw data]

The source data comes from the Serverbear.com comparison view after running the benchmark on one of our VMs; you can check the comparison tab to see how we stack up.

[Screenshot: Serverbear.com comparison view]

Almost Unbelievable Performance Results!

Last week we were very excited to see this article come out on JavaWorld:

Ultimate cloud speed tests: Amazon vs. Google vs. Windows Azure

It was great to see them dig deeper into how the big cloud providers in the US perform on standardized Java benchmarking tests. So of course we wanted to know how we stacked up using the exact same process and methods. We ran the same tests on Cloud A’s new 4GB and 8GB High Compute instances, and here’s what we found:

[Chart: benchmark results compared to the major cloud providers]

Here’s the raw data from the benchmarks used to build the graphs. You can see the exact scores for each test on each provider and verify that we’re using the exact same calculations to determine our cost numbers.

[Screenshot: raw benchmark scores and cost calculations by provider]


From the same dataset, we also graphed the total cost per run of the test. Not only does the test complete in less time on our instances; each compute instance also costs less to run. The end result is a staggering depiction of cost savings.
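The cost-per-run arithmetic is straightforward: multiply the hourly price by the benchmark's runtime in hours. The hourly rates and runtimes below are hypothetical placeholders, not the figures from our charts:

```python
# Sketch: total cost of one benchmark run = hourly price * runtime in hours.
# Rates and runtimes are hypothetical placeholders.

def cost_per_run(hourly_usd: float, runtime_seconds: float) -> float:
    """Dollars spent while the benchmark was running."""
    return hourly_usd * (runtime_seconds / 3600.0)

# A faster instance can be cheaper per run even at a similar hourly rate:
print(round(cost_per_run(0.12, 5400), 4))  # 90-minute run at $0.12/hr -> 0.18
print(round(cost_per_run(0.10, 3600), 4))  # 60-minute run at $0.10/hr -> 0.1
```

This is why shorter runtimes compound with lower instance prices: both factors shrink the product.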

chart_3-5

So, to be blunt, we even shocked ourselves! Our biggest fear at this point is that the world won’t believe us. If that’s the case, let us know! We’ll give you a free account to run the tests for yourself, or introduce you to some of our early adopters who didn’t believe it either until they tried it themselves.