There is a huge buzz around Docker these days, and we are huge fans of it at Cloud-A. Docker is an essential component of our Continuous Integration System, and it also runs Cloud-A’s management plane. We get lots of questions from customers about how they can run their containers on Cloud-A. Here is a look at how you can use Cloud 66 DevOps-as-a-service to deploy, scale and protect your Docker containers on Cloud-A.
G&G Computers, based in Truro, Nova Scotia, is a leading custom PC builder and service provider that supports and manages network infrastructure for many SMBs within Colchester County.
Traditionally, G&G Computers used an American-based cloud backup solution for their clients that required them to purchase large blocks of storage up front. That solution lacked elasticity and didn’t scale well with the demand for their services. The fact that their clients’ data was being backed up to an American data centre was also an issue for customers with privacy concerns, such as the doctors’ offices, law offices and other SMBs G&G Computers supports.
G&G Computers made the switch to Cloud-A as the offsite cloud backup for their client environments, utilizing Cloud-A Bulk Storage, powered by OpenStack Swift, together with their own branded backup solution. Moving to Cloud-A has allowed G&G to grow their cloud storage on their terms, with no need to buy huge blocks of storage space up front or export their clients’ data to an unknown data centre in the USA. G&G was also happy to work with a fellow Nova Scotian tech company.
The new backup solution has reduced their backup costs by 30%, savings they have been able to pass on to their customers. G&G can now assure their clients that their backup data is close to home in Cloud-A’s primary data centre facility in Halifax, Nova Scotia.
It is no secret that the amount of data everywhere is growing. Terms like IoT, Big Data and M2M each carry a certain hype cycle, but at the end of the day they are all very relevant concepts. Requirements for data storage are growing at over 50% per year, and IDC and EMC predict that the “digital universe” will amount to over 40,000 EB, or 5,200 GB per person on the planet, by 2020 – the majority of which will be unstructured data.
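As a quick sanity check on those figures (a back-of-envelope sketch, assuming 1 EB = 10^9 GB and taking the projection at face value), the two numbers imply a plausible 2020 world population:

```python
# Rough consistency check of the IDC/EMC projection:
# 40,000 EB of total data vs 5,200 GB per person by 2020.
EB_IN_GB = 1e9                      # 1 exabyte = 10^9 gigabytes
total_gb = 40_000 * EB_IN_GB
per_person_gb = 5_200

implied_population = total_gb / per_person_gb
print(f"Implied 2020 population: {implied_population / 1e9:.1f} billion")  # ~7.7 billion
```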
Data Management and Internal Cost per GB
One of the biggest struggles for organizations in the coming years will be managing these large, growing pools of data. Most enterprises utilize Storage Area Network (SAN) technology for their primary internal storage requirements, leveraging super fast storage media (SSD, Flash) and fibre channel networking to offer lightning fast IOPS and redundancy. The problem with SAN technology is that it is expensive. Every organization will have a different cost per gigabyte of storage, as it depends on many factors, including:
- The type of SAN (FC/iSCSI)
- The manufacturer and model
- Capacity licensing
- Support requirements
- The type, capacity and speed of disks supported
Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
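The practice described above can be sketched in a few lines (a toy illustration of the CI loop, not any particular CI product; the build step here is a stand-in for a real build and test suite):

```python
# Toy sketch of continuous integration: every check-in triggers an
# automated build, so a breaking change is detected immediately rather
# than weeks later at integration time.

def run_build(codebase):
    """Stand-in for a real build + automated test run; True on success."""
    return "bug" not in codebase

def check_in(shared_repo, change):
    """Verify the integrated result before the change lands."""
    candidate = shared_repo + [change]
    if run_build(" ".join(candidate)):
        shared_repo.append(change)
        return "integrated"
    return "rejected: build failed"      # defect caught early

repo = ["initial commit"]
print(check_in(repo, "feature A"))       # integrated
print(check_in(repo, "bug in B"))        # rejected: build failed
print(repo)                              # only healthy changes land
```

The point of the sketch is the feedback loop: because every check-in is verified, the shared repository never drifts far from a known-good state.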
Business Benefits of CI
CI Reducing Risk
When you integrate code frequently, you reduce the risk level of any project, though it is essential to have metrics in place to measure the health of the application so that defects in code can be detected sooner and fixed more quickly. Frequent integration also means that the gap between the application’s current state and the state of the application in development stays small.
Yesterday we posted about the benefits of using Cloud-A for your testing, development and quality assurance environments. We highlighted some of the features and functionality that Cloud-A provides to enable fast time to market for your software product, elasticity for growth and quality assurance. Now we are going to prove the business case by looking at the numbers and showing that you can save over 80% on your test/dev environment by moving to Cloud-A.
Consider the following real world scenario…
Real World Scenario
Our team recently helped a customer through the exercise of finding their true costs for their new utility billed cloud infrastructure. Together, we mapped out a testing, development and quality assurance environment over a three year period with the following requirements.
| Test Environment | Development Environment | QA Environment |
|---|---|---|
| 1 x 1GB Web Server | 1 x 1GB Web Server | 1 x 2GB Web Server |
| 1 x 2GB Database Server | 1 x 2GB Database Server | 1 x 4GB Database Server |
| Usage: 20 hours per week | Usage: 40 hours per week | Usage: 40 hours per week |
On-premise Hardware Test/Dev/QA Model
Many organizations procure expensive, on-premise hardware for their testing, development and quality assurance environments. In addition to the up front capital costs, ongoing hardware management and initial capacity planning are things to consider.
- Fixed overhead cost
- Initially under utilized
- Eventually over utilized
- Hardware failures are the user’s responsibility
- Slow, manual VM deployment
- No APIs
| Capital Purchase | Total Cost |
|---|---|
| Dell PowerEdge R420 – 8 core, 32GB RAM, 2.4TB (spec’ed for growth) | $12,674.00 |
| VMware vSphere Standard | $1,465.50 |
| **Total monthly cost** | **$392.74** |
| **Cost per year** | **$4,712.83** |
| **Total cost over 3 years** | **$14,138.50** |
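The monthly and yearly figures are simply the up-front capital spend amortized over the three-year term (a quick check using the table’s three-year total):

```python
# Amortizing the up-front on-premise spend (hardware + vSphere licence)
# over a 3-year (36-month) term.
total_3yr = 14_138.50

monthly = total_3yr / 36
yearly = total_3yr / 3
print(f"Monthly: ${monthly:,.2f}")   # $392.74
print(f"Yearly:  ${yearly:,.2f}")    # $4,712.83
```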
Traditional Hosted VM Test/Dev/QA Model
The Canadian “cloud” market is dominated by VMware based virtualization platforms that lock users into expensive contracts. While the responsibility of managing hardware is offloaded in this model, VM deployment is often slow and users incur monthly fixed costs.
- Locked into contracts
- Slow VM deployment, often requires service provider intervention
- No APIs
- No Utility Billing
| VM Quantity | Flavour | Price per VM | Total Cost (monthly) |
|---|---|---|---|
| **Total monthly cost** | | | **$492.50** |
| **Cost per year** | | | **$5,910.00** |
| **Total cost over 3 years** | | | **$17,730.00** |
Cloud-A Test/Dev/QA Model
Cloud-A offers a flexible, agile and elastic environment for testing, development and quality assurance. Users only pay for what they use and incur a nominal storage charge when their stacks are turned off, resulting in dramatic cost savings.
- No contracts, no lock-in
- Rapid VM deployment
- API driven infrastructure
- Utility Billing
- Agile and Elastic
- Scale cost with growth
Cloud-A Total Monthly Compute Cost
| VM Quantity | VM Flavour | Usage Hours per month | Hourly Cost | Total Cost |
|---|---|---|---|---|
Cloud-A Total Offline Storage Cost (based on 720-hour months)

| VM Quantity | VM Flavour | Root Disk | Offline Storage Hours | Hourly Cost | Total Cost |
|---|---|---|---|---|---|

| Cost Summary | Amount |
|---|---|
| Monthly compute cost | $43.80 |
| Monthly offline storage cost | $29.54 |
| **Total monthly cost** | **$73.34** |
| **Cost per year** | **$880.08** |
| **Total cost over 3 years** | **$2,640.24** |
Average savings with Cloud-A: 83% or $4,431.31 per year
Comparing the cost of the same testing, development and quality assurance environment from this real world example across on-premise hardware, a traditional hosted VM provider and Cloud-A, Cloud-A provides average savings of 83%, or $4,431.31 per year.
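The headline savings figure follows directly from the three yearly totals above (recomputed here; differences of a few cents come from rounding in the source tables):

```python
# Yearly cost of the same test/dev/QA environment under each model.
on_prem = 4_712.83     # on-premise hardware, amortized
hosted = 5_910.00      # traditional hosted VM provider
cloud_a = 880.08       # Cloud-A utility billing

avg_alternative = (on_prem + hosted) / 2
savings_dollars = avg_alternative - cloud_a
savings_pct = 100 * savings_dollars / avg_alternative
print(f"Average yearly savings: ${savings_dollars:,.2f} ({savings_pct:.0f}%)")
```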
At the forefront of modern application architecture is a concept known as DevOps, or the marrying of development and operations into one harmonious, collaborative and cooperative effort, all with the goal of rapidly producing software and improving operations performance.
As a result of the trend toward DevOps within organizations, a market of DevOps tools has emerged that enables automation and faster, higher quality software. Here is a list of DevOps tools that our partners, our customers and even the Cloud-A team use on a daily basis to deliver software.
Chef is a systems and cloud infrastructure framework that automates the building, deployment, and management of infrastructure via short, repeatable scripts called “recipes.” But the real power of Chef may be in its use of pluggable configuration modules (aka cookbooks), nearly 2,000 of which are available via the Chef community. High-profile Chef user Facebook recently open-sourced some of its own Chef cookbooks, including its Taste Tester testing framework and Grocery Delivery, which watches a source code repository, such as Git, and keeps the local Chef server in sync.
The University of Pennsylvania’s Wharton School is a Chef user as well. “Chef automates complex tasks that are otherwise time- and resource-intensive, but more importantly it allows us to focus our efforts on innovating and improving the quality of our services,” says Sanjay Modi, a technical director at the school, in a case study on Chef’s website. “It also opens the door to more collaboration and efficiency across the organization.” Chef has been used by Wharton to automate configuration management for Amazon EC2 resources, Linux nodes, and local virtual machines.
Docker brings portability to applications via its containerization technology, wherein applications run in self-contained units that can be moved across platforms. It consists of Docker Engine, which is a lightweight runtime and packaging tool, and Docker Hub, a cloud service for application-sharing and workflow automation.
“Docker has been a vital part of Yelp’s next-generation testing and service management infrastructure,” says Sam Eaton, engineering director at Yelp, in a case study on the Docker website. “Isolation of dependencies and rapid spin up of containers has allowed us to shorten development cycles and increase testing speed by more than four times.”
Puppet Enterprise, from Puppet Labs, offers data center orchestration by automating configuration and management of machines and software. Version 3.7, the latest release, features Puppet Apps, purpose-built applications for IT automation, including Node Manager, for managing large numbers of systems that are changed often. An open source version of Puppet is also available.
Stanford University uses the open source version of Puppet “to bridge the gap between the software development that we need to do to create new kinds of digital library services and the systems administration that we need to do to keep those services running in a high-performant, secure way,” says Stanford’s Bess Sadler, in a video testimonial on Puppet’s website. Developers have become more involved in systems administration, while systems admins have deepened their involvement in software development, enabling quicker development of applications, she says.
Ansible is an open source IT configuration management, deployment, and orchestration tool. It differs from other management tools in many respects, aiming to provide large productivity gains across a wide variety of automation challenges. While Ansible provides more productive drop-in replacements for many core capabilities in other automation solutions, it also seeks to solve other major unsolved IT challenges, including clear orchestration of complex multitier workflows and cleanly unifying OS configuration and application software deployment under a single banner. Hootsuite uses Ansible to build any server from scratch, repeatably, anywhere within their infrastructure, and is looking to expand its use to app deployment. “Ops and Devs both feel safer, literally. Before they were always worried about ‘what if the server dies’, but not any more after all servers are properly ‘Ansiblized’,” says Beier Cai, Director of Technology at HootSuite Media, Inc.
One DevOps tool that is close to our hearts at Cloud-A is docrane – and that is because it is our project! Docrane is an open source Docker container manager that relies on etcd to provide relevant configuration details. The program watches for changes in configuration and automatically stops, removes, recreates and starts your Docker containers, enabling more seamless continuous integration. “Docrane has allowed us to significantly accelerate and improve our Continuous Integration process at Cloud-A. When managing an increasingly large network of OpenStack nodes, having the ability to ensure that new versions are deployed quickly and reliably is key,” says Jacob Godin, CTO of Cloud-A.
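The idea behind docrane can be sketched as a reconcile loop (a hypothetical simplification for illustration, not docrane’s actual code; the container names and config shapes here are invented, and in the real tool the desired state would come from etcd):

```python
# Simplified sketch of a docrane-style reconcile loop: compare the
# desired container configuration (as pulled from a store like etcd)
# against what is currently running, and recreate any container whose
# configuration has changed.

def reconcile(running, desired):
    """Return the ordered actions needed to converge running -> desired."""
    actions = []
    for name, config in desired.items():
        if running.get(name) != config:          # new or changed config
            if name in running:
                actions.append(("stop+remove", name))
            actions.append(("create+start", name))
    for name in running:
        if name not in desired:                  # no longer wanted
            actions.append(("stop+remove", name))
    return actions

running = {"web": {"image": "app:1.0"}, "worker": {"image": "jobs:2.1"}}
desired = {"web": {"image": "app:1.1"}, "worker": {"image": "jobs:2.1"}}
print(reconcile(running, desired))
# [('stop+remove', 'web'), ('create+start', 'web')]
```

Running the loop whenever the watched configuration changes is what makes new versions roll out automatically: only containers whose config actually differs are touched.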
Over the next several weeks, we will be highlighting effective use cases of these DevOps tools with Cloud-A, including tutorials, in an effort to educate and promote modern application architecture, faster time to market and better quality software.
Krill, Paul. ‘7 Cool Tools for Doing DevOps Right’. InfoWorld, 2015. Web. 20 Apr. 2015.
It’s great to see so much positive change happening in the Canadian landscape with respect to cloud education and adoption. We believe that is due in no small part to the hard work and innovative vision that organizations like CANARIE have for the future of high tech networking and infrastructure in Canada. The DAIR program (http://www.canarie.ca/cloud/), launched in 2013, is a great example of that: it has provided Canadian entrepreneurs and small businesses with free cloud-based compute and storage resources that help speed up time to market by enabling rapid and scalable product design, prototyping, validation and demonstration. Since its inception, the DAIR program has enabled many new Canadian startups to benefit from the scale, speed and agility of cloud technologies to transform their business processes and get to market faster.
The cloud is still very new to most organizations, to the point where it remains widely misunderstood. This is especially evident in RFPs for cloud infrastructure services, which often contain all sorts of inherent challenges and logical incongruities.
We partner with solution integrators specializing in cloud consulting and delivery, so although we at Cloud-A do not bid on RFPs directly, we would of course be more than happy to partner with organizations looking to leverage true cloud solutions in Canada. Our partners have helped over 400 organizations achieve their strategic objectives by migrating to Cloud-A over the past 15 months. As a result, together we have a deep understanding of the approach and cultural shift required to realize successful change in this new era of computing.
This article aims to clarify some of the considerations and actions to take when preparing an RFP so that you maximize the potential benefits cloud computing has to offer, and so that the RFP is attractive to the leading cloud providers and solution integrators from whom you are seeking submissions. Many factors differ from the traditional hardware and application requests of the past. Simply put, the cloud is a new realm of computing with a lot to know, and there are many vendors repackaging old tech as “cloud” these days; that is something to be careful of.
This week the Cloud-A team will be at the 5th annual Atlantic Security Conference in Halifax, Nova Scotia. The Atlantic Security Conference was recently listed as one of The Top 50 Must-Attend Information Security Conferences in the world by Digital Guardian, and we are looking forward to interacting with Atlantic Canada’s infosec community, representing the OpenStack community and promoting the use of true cloud computing.
This year Cloud-A is offering a free one hour DevOps consulting session for any organization that attends AtlSecCon and signs up for a Cloud-A account.
Stop by our table and say Hi! For anyone unable to make it, check our twitter account (@CDNCloudA) for live AtlSecCon updates.
We’ve seen a fair amount of growth in the government sector recently, and as a result we thought it would be a good idea to describe how our clients are leveraging the first public cloud founded in Canada by Canadians. We’ve seen some great steps recently from the Canadian government to facilitate Shared Services standards that envision the inclusion and usage of public cloud providers like Cloud-A in government infrastructure. Recent Canadian government documents like the “IT Shared Services Security Domain & Zones Architecture” specify the standards and best-practice guidelines so that, in the future, shared ITC services can be transposed to similar shared services offered through a public cloud provider under contract to the GC.