We’ve grown rapidly at Cloud-A since our launch in 2013, averaging about 25% month-over-month, and there is no sign of slowing down. Our goal was to bring a true public cloud to Canada with no trade-offs in functionality, performance, or security compared to the market-leading global public clouds, and we have, at least to some degree, achieved that goal.
Looking back at the last year, analyzing our customer base and talking with partners and clients, I’ve learned a lot about why and how organizations consume our infrastructure-as-a-service. Canadian residency, utility billing, and the ability to deploy, scale, and manage infrastructure on demand are the usual answers, and those strengths position us well against virtualization platforms and “legacy” clouds. Still, I find it remarkable how few clients use Cloud-A to its full extent and maximize their value. The beauty of Cloud-A, and of OpenStack in general, is the elasticity and agility it can provide organizations that leverage all of its functionality.
Advanced Functionality via Automation
Flexibility is baked into OpenStack’s architecture: there is an API for every aspect of the platform, giving users the ability to design and manage their own infrastructure programmatically. Fully understanding and appreciating this aspect of cloud technology is where the ultimate value lies. That is the point when users realize it is not just about moving existing applications to a virtualized “cloud” server; storage, compute, and software-defined networking can each be harnessed independently to maximize efficiency and business value. This requires a different perspective on IT, one that starts from the business case and architects the solution around it, automating workflows and taking advantage of the utility model. Scalability effectively stops being a concern, because hardware is no longer the limiting factor.
Utilizing Cloud-A load balancers to split workloads between multiple instances, or federating your Cloud-A infrastructure with another cloud provider, can provide high availability and virtually eliminate downtime.
Horizontal Scalability (scale out)
Horizontal scalability is the clustering of multiple instances to expand the size of a deployment. With proper use of the Cloud-A APIs and DevOps tools, this scaling can be automated so that your infrastructure grows as required.
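As a minimal sketch of what that automation can look like, the snippet below decides how many instances a given load requires and boots any that are missing. It assumes the openstacksdk library; the flavor, image, and network names are hypothetical placeholders, not Cloud-A specifics:

```python
import math


def servers_needed(requests_per_sec, capacity_per_server):
    """Return how many instances the current load requires (at least one)."""
    return max(1, math.ceil(requests_per_sec / capacity_per_server))


def scale_out(conn, target, name_prefix="web"):
    """Boot instances until `target` servers with the given prefix exist.

    `conn` is an openstack.connection.Connection (e.g. from
    openstack.connect(cloud="cloud-a")); the flavor, image, and network
    names below are hypothetical examples.
    """
    running = [s for s in conn.compute.servers() if s.name.startswith(name_prefix)]
    for i in range(len(running), target):
        conn.compute.create_server(
            name=f"{name_prefix}-{i}",
            flavor_id=conn.compute.find_flavor("1GB").id,
            image_id=conn.compute.find_image("ubuntu-14.04").id,
            networks=[{"uuid": conn.network.find_network("private").id}],
        )
```

A monitoring loop would call `scale_out(conn, servers_needed(current_rps, per_server_capacity))` on a schedule, so capacity tracks demand without manual intervention.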
Vertical Scalability (scale up)
Cloud-A environments can be vertically scaled manually quite simply by using snapshot functionality, but at scale, automating this process can be hugely beneficial, allowing the compute resources of your existing infrastructure to expand or contract automatically as required.
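That snapshot-then-resize flow can be scripted. The sketch below assumes the openstacksdk library; the flavor ladder and the `scale_up` helper are hypothetical illustrations, not a Cloud-A API. A snapshot is taken first so the pre-resize state can be restored if anything goes wrong:

```python
# Ordered flavor ladder, smallest to largest (names are hypothetical).
FLAVOR_LADDER = ["1GB", "2GB", "4GB", "8GB", "16GB"]


def next_flavor_up(current):
    """Return the next larger flavor name, or None if already at the top."""
    i = FLAVOR_LADDER.index(current)
    return FLAVOR_LADDER[i + 1] if i + 1 < len(FLAVOR_LADDER) else None


def scale_up(conn, server_name, current_flavor):
    """Snapshot a server, then resize it to the next flavor up the ladder.

    `conn` is an openstack.connection.Connection; the resize must be
    confirmed once the server reaches VERIFY_RESIZE status.
    """
    target = next_flavor_up(current_flavor)
    if target is None:
        return  # already at the largest size
    server = conn.compute.find_server(server_name)
    conn.compute.create_server_image(server, name=f"{server_name}-pre-resize")
    conn.compute.resize_server(server, conn.compute.find_flavor(target))
    conn.compute.wait_for_server(server, status="VERIFY_RESIZE")
    conn.compute.confirm_server_resize(server)
```

Triggered by a memory or CPU alarm, the same function handles growth automatically; the inverse (a downsize ladder) works the same way.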
How do we Automate?
The rapidly growing movement toward automating infrastructure operations in the name of efficient, focused development is known as DevOps. “DevOps culture” is being adopted by enterprises all over the world, and a massive wave of DevOps tools and services is supporting this marriage of development and operations.
There are many tools available to help with the process of automating the deployment and ongoing management of infrastructure on Cloud-A. The following tools are relevant not only to those who are building applications, but for anyone running infrastructure in an OpenStack public cloud at any scale.
Ansible – Ansible is an open-source software platform for configuring and managing computers. It combines multi-node software deployment, ad hoc task execution, and configuration management.
Puppet – Puppet is a configuration management system that allows sysadmins to define the state of their infrastructure and then automatically enforces that state. Puppet automates time consuming processes usually performed manually by sysadmins.
Chef – Chef allows users to automate how organizations build, deploy and manage their infrastructure by turning that infrastructure into code. Chef relies on reusable definitions known as recipes to automate infrastructure tasks.
Docker – Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.
docrane (https://github.com/CloudBrewery/docrane) – docrane is an open-source project started by Cloud-A co-founder Jacob Godin. It is a Docker container manager that relies on etcd (a distributed key-value store) for its configuration details. It watches for changes in configuration and automatically stops, removes, recreates, and starts Docker containers.
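The general pattern docrane implements can be sketched as a reconcile step: compare the running container’s configuration with the desired one, and recreate the container only when they drift apart. This is an illustration of the pattern, not docrane’s actual code; the `docker` object is a hypothetical stand-in for Docker API or CLI calls:

```python
def needs_recreate(running_config, desired_config):
    """A container must be recreated when any deploy-time setting changed."""
    keys = ("image", "ports", "env", "volumes")
    return any(running_config.get(k) != desired_config.get(k) for k in keys)


def reconcile(name, running_config, desired_config, docker):
    """Stop, remove, and recreate the container if its config has drifted.

    `docker` is any object exposing stop/remove/run; a real implementation
    would wire this to the Docker API and feed desired_config from etcd
    watch events.
    """
    if not needs_recreate(running_config, desired_config):
        return False
    docker.stop(name)
    docker.remove(name)
    docker.run(name, desired_config)
    return True
```

Pointing the desired-config side at an etcd watch is what turns this one-shot check into the continuous manager described above.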
Platform-as-a-service providers (PaaS) use automation to provide preconfigured and optimized platforms to develop, run and manage web applications without the complexities of building and configuring the stack manually.
Cloud 66 (www.cloud66.com) – Cloud 66 is Cloud-A’s PaaS partner of choice. Cloud 66 provides everything users need to deploy, scale and protect their applications on Cloud-A in a single tool. Cloud 66 currently supports Ruby frameworks such as Ruby on Rails, Sinatra and Padrino as well as Docker containers.
There are many organizations who focus on providing DevOps services that include architecting development environments, implementing those environments, and coaching the implementation of DevOps culture within organizations.
Lyrical Software (www.lyricalsoftware.com) – Lyrical Software is Cloud-A’s go-to DevOps service partner. Lyrical is a well-oiled group of DevOps engineers based in Toronto, Canada. With a history of building and deploying software since the late ’90s, Lyrical has experience automating at scale and deploying into clouds, private datacenters, and even shrink-wrapped software, all while supporting several languages.
Last week we launched a revolutionary new product at Cloud-A, designed to dramatically increase the speed and efficiency of instance snapshots on OpenStack cloud platforms. We built this product based on feedback we have received over the past year from users and partners who demanded more efficient snapshotting.
There are a couple of inherent challenges with OpenStack’s default snapshot capability:
1. Snapshotting a VM requires the VM to be paused temporarily. Depending on the size of the VM, the pause can last anywhere from a few seconds to several minutes, which is unacceptable for a mission-critical server.
2. In OpenStack’s standard configuration, the compute node hosting the VM being snapshotted also performs the snapshot operation. This can create what is called the “noisy neighbour” effect, where snapshotting one VM degrades the performance of another tenant’s VM.
With this new technology, the compute resources allocated to tenant VMs are isolated from the compute resources that serve VM snapshots. This increases snapshot speed, reduces VM pause time by an average of 74%, and prevents the noisy neighbour effect.
This new product is now live on Cloud-A and available at no additional cost.
Two weeks ago we announced our new ultra-fast SSD architecture, and we’ve been thrilled with all of the great comments, feedback, and praise. We are always happy to hear how excited our customers are when they try a high-performance cloud server for the first time, or see their existing servers run faster. We re-ran the Decapo benchmarks that we published a few months ago; here is how the results compare directly with our previous numbers:
On average we saw a 6.15% performance increase across the board. We then factored those numbers back into the normalized compute-value comparison to put them in a rationalized cost perspective, as seen below. You can see that the new SSD architecture adds even more bang for your buck than our last tests showed.
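For readers who want to reproduce that kind of normalization, the calculation is simple performance-per-dollar arithmetic. The numbers below are hypothetical placeholders to show the method, not our published results:

```python
def normalized_value(benchmark_score, monthly_price):
    """Performance per dollar: higher is better."""
    return benchmark_score / monthly_price


# Hypothetical example: a 6.15% score increase at an unchanged price
old = normalized_value(benchmark_score=100.0, monthly_price=40.0)
new = normalized_value(benchmark_score=106.15, monthly_price=40.0)
uplift = (new - old) / old * 100  # percent more value per dollar
```

Because price held constant here, the value-per-dollar uplift equals the raw performance uplift; a price change would shift the two apart.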
We’re very supportive of what Serverbear.com is doing by crowd-sourcing independent, standardized test results. This style of benchmarking seems to be gaining a lot of momentum, so after running their test suite on one of our 8GB-HC instances, we compiled some of the data we found there and noticed what seems like an interesting, counterintuitive trend between IOPS and cost across cost-competitive VM sizes.
The source data comes from the Serverbear.com comparison view after running the benchmark on one of our VMs; you can check the comparison tab to see how we stack up.