2-Factor Authentication for Cloud-A

With the growth and adoption of Cloud-A's infrastructure services around the world, now serving thousands of active projects and twice that number of active users, our responsibility to provide a secure entry point into the services that store your application's private data and help run your business day-to-day is greater than ever. With online threats growing, phishing techniques becoming more advanced, and identity theft on the rise, securing access to any service is difficult. No matter how long or complex your password is, your account is at risk of being breached if that password somehow falls into the wrong hands.

To this end, we are very pleased to announce the general availability of two-factor authentication for Cloud-A accounts. Our development team has been building an OTP solution into Keystone, our authentication service, and released it into beta late last year. After months of end-user testing and security auditing by third parties, we are enabling the feature for all users.

Two-factor authentication, or 2FA, secures your account with a second "factor" in addition to your password. Because a password can be read or stolen, and is the single piece of information a malicious person needs to access your account, a second factor, the one-time password (OTP), is tied to a physical device on your person, so you know that the person logging in is truly you. This added security will thwart would-be attackers even if they know your account password.
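To see why an OTP foils an attacker who knows your password, it helps to look at how time-based one-time passwords work under the hood. The sketch below is a minimal standard TOTP (RFC 6238) implementation using only the Python standard library; it is illustrative and is not Cloud-A's or Keystone's actual code. The code derives each six-digit code from a shared secret and the current 30-second window, so a stolen code expires almost immediately.

```python
import hmac
import struct
import time
from hashlib import sha1

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 the counter, then dynamically truncate."""
    mac = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    now = time.time() if for_time is None else for_time
    return hotp(secret, int(now // step), digits)
```

The server and your authenticator app (FreeOTP, Google Authenticator) both run this computation from the secret exchanged via the QR code, which is why the codes match without any network round trip.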

Available Today


2FA for Cloud-A can be enabled from within your Cloud-A Account Settings in the client portal. Enabling it will generate your private key and show you your QR code and recovery codes, as well as provide a quick OTP test mechanism to confirm your settings. Once enabled, you can use our 2FA with any Google Authenticator-compatible mobile application. We highly recommend FreeOTP for managing your OTP credentials: it is free, secure, standards-compliant, and open source. The app is available for download on Google Play for Android and on the App Store for iOS devices.

As previously noted, Cloud-A’s 2FA architecture is built into Keystone, meaning that two-factor authentication is available at both the web dashboard level and also at the API layer. The result is a completely new architecture, and new way to approach OpenStack authentication. We hope that this not only shows our commitment to on-going product development for our customers, but our commitment to the OpenStack project as a whole.

Users will not be forced to enable OTP on their accounts; however, we highly recommend setting it up. You can read more about the configuration process in our documentation portal. Taking a few minutes to enable this feature could mean the difference between an adversary gaining access to your account and your cloud infrastructure, and being stopped right at the door.

If you have any questions or concerns about configuring 2FA on your account, we’d love to hear from you! You can reach our support team quickly and easily by emailing support@clouda.ca.

Cloud-A @ OpenStack Summit – A Week in Review


As some of our followers might know, the Cloud-A team spent last week in Vancouver for the OpenStack Summit. It was our first summit, and it was nice for it to be in our home country. The summit gave us the opportunity to interact first hand with our community, listen to keynotes about the direction of our technology, and hear input from industry leaders and vendors about new and emerging technology in the OpenStack ecosystem. All in all, there was some very familiar messaging about how OpenStack has grown past infancy and is enterprise ready, something that we at Cloud-A have never doubted. To be perfectly honest, we don't feel that much of what was announced was groundbreaking, but we still wanted to highlight some of the themes and news from last week's OpenStack Summit in terms of future value for our customers.

Read more

Time to Spring Clean your SAN – Bulk Storage for Data Archival

It is no secret that the amount of data everywhere is growing. Terms like IoT, Big Data and M2M have a certain hype cycle around them, but at the end of the day they are all very relevant concepts. Requirements for data storage are growing at over 50% per year, and IDC and EMC predict that the "digital universe" will amount to over 40,000 EB, or 5,200 GB per person on the planet, by 2020, the majority of which is unstructured data.

Data Management and Internal Cost per GB

One of the biggest struggles for organizations in the coming years will be managing these large, growing pools of data. Most enterprises use Storage Area Network (SAN) technology for their primary internal storage requirements, which leverages super-fast storage media (SSD, flash) and Fibre Channel networking to offer lightning-fast IOPS and redundancy. The problem with SAN technology is that it is expensive. Every organization will have a different cost per gigabyte of storage, as it depends on many factors, including: the type of SAN (FC/iSCSI), the manufacturer and model, capacity licensing, support requirements, and the type/capacity/speed of disks supported.
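The factors above can be rolled into a rough fully-loaded cost-per-GB figure. The sketch below is illustrative only; the field names and any numbers you feed it are hypothetical, and a real TCO model would also account for power, space, staffing, and depreciation.

```python
def san_cost_per_gb(capex, annual_support, licensing, years, usable_gb):
    """Rough fully-loaded cost per usable GB over the SAN's service life.

    capex: hardware purchase price; annual_support: yearly support contract;
    licensing: capacity licensing over the period; years: service life;
    usable_gb: usable (not raw) capacity after RAID/hot spares.
    """
    total = capex + licensing + annual_support * years
    return total / usable_gb
```

Comparing that figure against a public cloud's per-GB price is a quick way to spot data that belongs in cheaper bulk storage rather than on the SAN.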
Read more

Continuous Integration and Cloud-A

con·tin·u·ous in·te·gra·tion

kənˈtinyo͞oəs / ˌin(t)əˈɡrāSH(ə)n

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

Business Benefits of CI

CI Reducing Risk

When you integrate code frequently, you reduce the risk level of any project, though it is essential to have metrics in place to measure the health of the application so that defects can be detected sooner and fixed more quickly. Frequent integration also means that the gap between the application's current state and the state of the application in development is smaller.
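The "each check-in is verified by an automated build" step can be sketched in a few lines. This is a deliberately minimal stand-in for what a real CI server (Jenkins, GitLab CI, and the like) does on every push; the function name and command list are our own illustration, not any particular CI tool's API.

```python
import subprocess
import sys

def verify_checkin(commands):
    """Run each build/test command in order; stop at the first failure.

    Returns (ok, failed_command). A real CI server adds triggers on
    push, isolated build environments, and notifications on failure.
    """
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            return False, cmd
    return True, None

# Example pipeline: each entry is one build or test step.
pipeline = [
    [sys.executable, "-m", "compileall", "-q", "."],  # does it build?
    [sys.executable, "-c", "pass"],                   # stand-in for a test run
]
```

The value of CI comes less from this loop itself than from running it on every check-in, so a failing step is caught while the offending change is still small.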
Read more

DevOps Tools for the Canadian Cloud

At the forefront of modern application architecture is a concept known as DevOps, or the marrying of development and operations into one harmonious, collaborative and cooperative effort, all with the goal of rapidly producing software and improving operations performance.

As a result of the trend toward DevOps within organizations, a market of DevOps tools has emerged that enables automation and faster, higher-quality software. Here is a list of DevOps tools that our partners, our customers, and even the Cloud-A team use on a daily basis to deliver software.


Chef is a systems and cloud infrastructure framework that automates the building, deploying, and management of infrastructure via short, repeatable scripts called “recipes.” But the real power of Chef may be in its use of pluggable configuration modules (aka cookbooks), nearly 2,000 of which are available via the Chef community. High-profile Chef user Facebook recently open-sourced some of its own Chef cookbooks, including its Taste Tester testing framework and Grocery Delivery, which watches a source code repo, such as Git, and keeps the local Chef server in sync.

The University of Pennsylvania’s Wharton School is a Chef user as well. “Chef automates complex tasks that are otherwise time- and resource-intensive, but more importantly it allows us to focus our efforts on innovating and improving the quality of our services,” says Sanjay Modi, a technical director at the school, in a case study on Chef’s website. “It also opens the door to more collaboration and efficiency across the organization.” Chef has been used by Wharton to automate configuration management for Amazon EC2 resources, Linux nodes, and local virtual machines.


Docker brings portability to applications via its containerization technology, wherein applications run in self-contained units that can be moved across platforms. It consists of Docker Engine, which is a lightweight runtime and packaging tool, and Docker Hub, a cloud service for application-sharing and workflow automation.

“Docker has been a vital part of Yelp’s next-generation testing and service management infrastructure,” says Sam Eaton, engineering director at Yelp, in a case study on the Docker website. “Isolation of dependencies and rapid spin up of containers has allowed us to shorten development cycles and increase testing speed by more than four times.”



Puppet Enterprise, from Puppet Labs, offers data center orchestration by automating configuration and management of machines and software. Version 3.7, the latest release, features Puppet Apps, purpose-built applications for IT automation, including Node Manager, for managing large numbers of systems that are changed often. An open source version of Puppet is also available.

Stanford University uses the open source version of Puppet “to bridge the gap between the software development that we need to do to create new kinds of digital library services and the systems administration that we need to do to keep those services running in a high-performant, secure way,” says Stanford’s Bess Sadler, in a video testimonial on Puppet’s website. Developers have become more involved in systems administration, while systems admins have deepened their involvement in software development, enabling quicker development of applications, she says.


Ansible is an open source IT configuration management, deployment, and orchestration tool. It differs from other management tools in many respects, aiming to provide large productivity gains across a wide variety of automation challenges. While Ansible provides more productive drop-in replacements for many core capabilities in other automation solutions, it also seeks to solve other major unsolved IT challenges, including clear orchestration of complex multi-tier workflows and cleanly unifying OS configuration and application software deployment under a single banner. Hootsuite uses Ansible to build any server from scratch, repeatable as many times as needed within their infrastructure, and is looking to expand its use to app deployment. "Ops and Devs both feel safer, literally. Before they were always worried about 'what if the server dies', but not any more after all servers are properly 'Ansiblized'," says Beier Cai, Director of Technology at HootSuite Media, Inc.


One DevOps tool that is close to our hearts at Cloud-A is docrane, and that is because it is our project! docrane is an open source Docker container manager that relies on etcd to provide relevant configuration details. The program watches for changes in configuration and automatically stops, removes, recreates and starts your Docker containers, enabling more seamless continuous integration. "docrane has allowed us to significantly accelerate and improve our Continuous Integration process at Cloud-A. When managing an increasingly large network of OpenStack nodes, having the ability to ensure that new versions are deployed quickly and reliably is key," says Jacob Godin, CTO of Cloud-A.
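The stop/remove/recreate/start cycle docrane performs can be sketched as a pure planning function: given a desired container configuration (the kind of data docrane reads from etcd), produce the Docker CLI steps that converge the container to it. The field names below (`image`, `ports`, `env`) are illustrative and are not docrane's actual schema; docrane itself also handles the etcd watch and execution, which are omitted here.

```python
def restart_plan(name, config):
    """Turn a desired-state config into docker CLI steps.

    Returns a list of argv lists: stop the old container, remove it,
    then run a fresh one with the new settings.
    """
    run = ["docker", "run", "-d", "--name", name]
    for host_port, container_port in config.get("ports", {}).items():
        run += ["-p", f"{host_port}:{container_port}"]
    for key, value in config.get("env", {}).items():
        run += ["-e", f"{key}={value}"]
    run.append(config["image"])
    return [["docker", "stop", name], ["docker", "rm", name], run]
```

Keeping the plan separate from execution makes the convergence logic easy to test without a Docker daemon, which is the same property that makes desired-state tools pleasant to operate.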

Stay Tuned…

Over the next several weeks, we will be highlighting effective use cases of these DevOps tools with Cloud-A, including tutorials, in an effort to educate and promote modern application architecture, faster time to market, and better-quality software.


Krill, Paul. ‘7 Cool Tools For Doing Devops Right’. InfoWorld 2015. Web. 20 Apr. 2015.


Cloud-A at the Atlantic Security Conference

This week the Cloud-A team will be at the 5th annual Atlantic Security Conference in Halifax, Nova Scotia. The Atlantic Security Conference was recently listed as one of The Top 50 Must-Attend Information Security Conferences in the world by Digital Guardian, and we are looking forward to interacting with Atlantic Canada’s infosec community, representing the OpenStack community and promoting the use of true cloud computing.

This year Cloud-A is offering a free one-hour DevOps consulting session for any organization that attends AtlSecCon and signs up for a Cloud-A account.

Stop by our table and say hi! For anyone unable to make it, check our Twitter account (@CDNCloudA) for live AtlSecCon updates.

How Canadian Government & Crown Corps are using Cloud-A

We've seen a fair amount of growth in the government sector recently, and as a result we thought it would be a good idea to describe how our clients are leveraging the first public cloud founded in Canada by Canadians. We've seen some great steps recently from the Canadian government to facilitate Shared Services standards that envision the inclusion and usage of public cloud providers like Cloud-A in government infrastructure. Recent Canadian government documents like the "IT Shared Services Security Domain & Zones Architecture" specify standards and best-practice guidelines so that, in the future, shared ITC services can be transposed to similar shared services offered through a public cloud provider under contract to the GC.
Read more

Linux security best practices for running your server in the cloud

At Cloud-A we enable our users to sign up and manage their own infrastructure, giving them full control to configure and secure their own instances, networks and storage as they wish. We like to provide tips, tricks and best practices to give you the information you need to ensure that your instances are secure. Here are a few best practices for hardening and securing your Linux instances on Cloud-A.

Eliminate Unneeded Services

  • Do not run any unneeded services such as FTP.
  • If you are running DNS, be sure to close it off from being an open resolver so that you do not become part of a DDoS attack.

Lock down SSH

  • Disable root login via SSH
  • Only allow specified IPs to connect via SSH
  • Only allow SSH key-based authentication – do not allow password authentication
  • Use an alternative SSH port
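The SSH recommendations above map to specific `sshd_config` directives, and it is easy to check a config file against them automatically. The sketch below is a simple audit helper of our own, not an official tool; it treats a missing directive as unsafe, which is stricter than sshd's actual defaults.

```python
def audit_sshd(config_text):
    """Flag sshd_config directives that conflict with the advice above.

    Returns a list of warnings; an empty list means all checks pass.
    """
    settings = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if line:
            key, _, value = line.partition(" ")
            settings[key.lower()] = value.strip().lower()
    warnings = []
    if settings.get("permitrootlogin", "yes") != "no":
        warnings.append("PermitRootLogin should be 'no'")
    if settings.get("passwordauthentication", "yes") != "no":
        warnings.append("PasswordAuthentication should be 'no' (use keys)")
    if settings.get("port", "22") == "22":
        warnings.append("consider an alternative Port instead of 22")
    return warnings
```

Restricting which source IPs may connect is better done in your security groups or firewall rules, which is why it is not checked here.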

Use fail2ban


  • Use fail2ban to automatically add malicious IPs to the firewall drop rules.

Update packages on regular basis

  • Keep your packages up to date to avoid being susceptible to zero day attacks.


Don’t forget the automation! Maximizing your value of the public cloud

We’ve grown massively at Cloud-A since our launch in 2013, about 25% month-over-month on average, and there seems to be no slowing down so far. Our goal was to bring true public cloud to Canada with no tradeoffs in functionality, performance or security compared to the global market-leading public clouds, and at least to some degree we’ve achieved that goal.

Looking back at the last year, analyzing our customer base and interfacing with partners and clients, I’ve learned a lot about why and how organizations are consuming our infrastructure-as-a-service. Canadian residency, utility billing, and the ability to deploy, scale and manage infrastructure on demand are the usual responses, which positions us well against virtualization platforms and “legacy” clouds, but I find it amazing how few clients are using Cloud-A to its full extent and maximizing their value. The beauty of Cloud-A, and OpenStack in general, is the elasticity and agility it can provide organizations if they are leveraging all of its functionality.

Advanced Functionality via Automation

Flexibility is baked into OpenStack’s architecture. There is an API for every aspect of OpenStack, giving users the ability to design and manage their own infrastructure. When fully understood and appreciated, this aspect of cloud technology is where the ultimate value lies. That is the point when users typically realize it is not just about moving existing applications to a virtualized “cloud” server: storage, compute, and SDN networking can be harnessed independently to maximize efficiency and thus business value. This requires a different perspective on IT, one that holistically encompasses the business case and then architects the IT solution around it, automating workflows and taking advantage of the utility model so that scalability is, in effect, no longer a concern; the old paradigm of hardware is no longer the limiting factor.

High Availability

Utilizing Cloud-A load balancers and splitting workloads between multiple instances, or federating your Cloud-A infrastructure with another cloud provider, can allow for high availability, virtually eliminating downtime.

Horizontal Scalability (scale out)


Horizontal scalability is the clustering of multiple instances to expand the size of a deployment. The process of horizontally scaling your Cloud-A infrastructure can be automated to grow as required with the proper utilization of Cloud-A APIs and DevOps tools.
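The automated scale-out decision described above reduces to a small policy function: watch a load metric and adjust the instance count within bounds. The sketch below shows only the decision logic; the thresholds are illustrative, and the OpenStack API calls that would actually boot or delete instances (e.g. via Nova) are not shown.

```python
def scale_out_target(current, cpu_percent, high=80.0, low=30.0,
                     minimum=2, maximum=10):
    """Decide how many instances a group should run, given average CPU.

    Doubles the group under heavy load and halves it when idle,
    always staying within [minimum, maximum].
    """
    if cpu_percent > high:
        return min(current * 2, maximum)   # scale out, capped
    if cpu_percent < low:
        return max(current // 2, minimum)  # scale in, floored
    return current                         # steady state
```

Keeping a floor of more than one instance preserves the high-availability property from the previous section even while scaling in.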

Vertical Scalability (scale up)


Cloud-A environments can be manually vertically scaled quite simply by utilizing snapshot functionality, but at scale, automating this process can be hugely beneficial, allowing the compute resources of your existing infrastructure to expand or contract automatically as required.

How do we Automate?


The wildly growing movement toward automating infrastructure operations in service of efficient, focused development is known as DevOps. “DevOps culture” is being adopted in enterprises all over the world, and a massive emergence of DevOps tools and services is supporting this movement of marrying development with operations.

DevOps Tools

There are many tools available to help with the process of automating the deployment and ongoing management of infrastructure on Cloud-A. The following tools are relevant not only to those who are building applications, but for anyone running infrastructure in an OpenStack public cloud at any scale.

Ansible – Ansible is an open-source software platform for configuring and managing computers. It combines multi-node software deployment, ad hoc task execution, and configuration management.

Puppet – Puppet is a configuration management system that allows sysadmins to define the state of their infrastructure and then automatically enforces that state. Puppet automates time consuming processes usually performed manually by sysadmins.

Chef – Chef allows users to automate how organizations build, deploy and manage their infrastructure by turning that infrastructure into code. Chef relies on reusable definitions known as recipes to automate infrastructure tasks.

Docker – Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.

docrane (https://github.com/CloudBrewery/docrane) – docrane is an open source project started by Cloud-A co-founder Jacob Godin. docrane is a Docker container manager that relies on etcd (a distributed key-value store) to provide relevant configuration details. It watches for changes in configuration and automatically stops, removes, recreates, and starts Docker containers.


PaaS Providers

Platform-as-a-service (PaaS) providers use automation to provide preconfigured and optimized platforms to develop, run and manage web applications without the complexities of building and configuring the stack manually.

Cloud 66 (www.cloud66.com) – Cloud 66 is Cloud-A’s PaaS partner of choice. Cloud 66 provides everything users need to deploy, scale and protect their applications on Cloud-A in a single tool. Cloud 66 currently supports Ruby frameworks such as Ruby on Rails, Sinatra and Padrino as well as Docker containers.

DevOps Services

There are many organizations who focus on providing DevOps services that include architecting development environments, implementing those environments, and coaching the implementation of DevOps culture within organizations.

Lyrical Security (www.lyricalsoftware.com) – Lyrical Software is Cloud-A’s go-to DevOps service partner. Lyrical is a well-oiled group of DevOps engineers based in Toronto, Canada. With a history of building and deploying software since the late ’90s, Lyrical has experience automating at scale and deploying into clouds, private datacenters, and even shrink-wrap software, all while supporting several languages.

Server Snapshots on Cloud-A are now 74% FASTER!

Last week we launched a revolutionary new product at Cloud-A designed to dramatically increase the speed and efficiency of instance snapshots on OpenStack cloud platforms. We built this product based on feedback we have received over the past year from our users and partners, who demanded more efficient snapshotting.

There are a couple of inherent challenges with OpenStack’s snapshot capability out of the box:

1. Snapshotting a VM requires the VM to be paused temporarily. Depending on how large the VM is, the pause could be anywhere from a few seconds to several minutes, which is unacceptable for a mission-critical server.

2. OpenStack’s standard configuration is that the compute node hosting the VM being snapshotted also performs the snapshot operation. This can create what is called the “noisy neighbour effect,” where the process of snapshotting a VM can have a negative performance impact on another tenant’s VM.

With this new technology, the compute resources allocated to tenant VMs are isolated from the compute resources that serve VM snapshots, increasing the speed of snapshots, decreasing the pause time of a VM by an average of 74%, and preventing the noisy neighbour effect.

This new product is now live on Cloud-A and available at no additional cost.