The Case for Cloud Storage in Canada: For those who truly care about their clients’ data


These days, storing data in the cloud is routine. That said, where the data is physically located has become a more relevant consideration in recent months due to changes in US law. Different countries have different rules; in this post we explore the case for protecting your clients’ data privacy by storing it in Canada.

Most people are aware that if your data is stored on servers in the United States, laws such as the Patriot Act and the 2018 CLOUD Act, along with surveillance programs such as PRISM, all apply. In effect, they allow US agencies to review your files with minimal effort and without your knowledge.

The CLOUD Act was designed to help American agencies solve the legal challenges of cross-border data collection and the application of US law in foreign territories. The result is that this law now enables the US government to legally access data outside the US that is being managed by US companies; in effect, the data is considered under US custody & control. In addition, the same US agencies can impose a gag order preventing US companies from disclosing that the data has been collected. Specifically, this legislation appears to have been written to resolve the problem of US agencies not technically having the legal right to collect data in foreign territories, as they have done in the past in Ireland →


What Does This Mean for Canadians and Their Privacy?

Do you know who is in control of your data? This question is becoming increasingly important for Canadians. Sectors such as health and education (especially those in provinces with stricter data laws, such as British Columbia) should pay particular attention to the location and availability of confidential customer information.

Some major cloud companies may look like trusted providers because they have data centers in Canada. But location alone does not matter if these companies are US-owned.

Where your data is stored, and who has access to it, is information you need to know. When choosing a service provider, Server Cloud Canada recommends asking the following questions:

  1. Although the data center is located in Canada, is the cloud provider a Canadian company that understands data sovereignty?
  2. Can I trust how the service provider will handle my data?
  3. Is my data protected against unauthorized access or retrieval?
  4. Can I trust that my data always stays in the host country?

Perhaps the one thing that most everyone can agree on (aside from the particularly sneaky way that the law was approved → ) is that the CLOUD Act is one of the most controversial laws passed in the US in recent history. Uncertainty about it continues, such as: will this law be used to circumvent data protection and privacy laws, especially those in Europe? At this point, it’s not known how the CLOUD Act will stand up to scrutiny under PIPEDA or other Canadian privacy legislation, but it’s likely that Canadian agencies will work in tandem with US agencies (in silence) to collect private data in Canada legally.

Alternatively, if you use Canadian servers on Canadian soil, owned by a Canadian company, you are under the jurisdiction of Canada and its laws, which generally help you take much better care of your customers’ privacy in a more reasonable and transparent way.

More information:

Canada has provincial and federal laws that apply to cloud storage providers. At the federal level, there are two main laws that apply:

The Canadian Privacy Act →

PIPEDA – Personal Information Protection and Electronic Documents Act →


Several Canadian provinces also have laws similar to PIPEDA including:


British Columbia:



What’s a Content Delivery Network & how to configure our 100% Canadian CDN

These days, high-performing websites & apps often have to serve a significant amount of static content to end users. This content typically consists of images, stylesheets, JavaScript, installer files, game assets, and videos. As the number and size of these static assets grow, bandwidth usage increases and thus page load times increase. This in turn degrades the user’s browsing experience and reduces the available capacity of the servers.

So, to dramatically reduce page load times, improve performance, and cut infrastructure and bandwidth costs, you can implement a Content Delivery Network (CDN) to cache these resources across a set of servers that are geographically distributed across Canada.

What is a CDN?

A Content Delivery Network is a set of geographically distributed servers that deliver static content to end users. This content can be almost any type of data, but CDNs are most often used to serve web pages and their associated files, video and audio streams, and large software packages.


A CDN consists of multiple points of presence (PoPs), each made up of several edge servers that cache assets from your origin (host) server. When a user visits your website and requests static assets such as images or JavaScript files, the CDN directs the request to the nearest edge server, where the content is served. If the edge server does not have the asset cached, the CDN will fetch the latest copy from the origin and store it. If the CDN edge does have a cache entry for the asset (which is most often the case if your website receives a moderate amount of traffic), it returns the cached copy to the end user in a fraction of the time it would take to fetch it from the origin web server.

This allows geographically distributed users to reduce the number of hops required to receive static content by accessing it directly from a nearby edge cache. The result is far lower latency, faster page load times, and a significantly lower load on the origin infrastructure.
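The hit/miss flow described above can be sketched in a few lines of Python. This is an illustrative toy, not the implementation of any particular CDN; the `fetch_from_origin` callable and the TTL value are assumptions for the sketch:

```python
import time

class EdgeCache:
    """Minimal sketch of a CDN edge server's cache-hit/miss logic."""

    def __init__(self, fetch_from_origin, ttl_seconds=3600):
        self.fetch_from_origin = fetch_from_origin  # callable: path -> bytes
        self.ttl = ttl_seconds
        self.store = {}  # path -> (expires_at, content)

    def get(self, path):
        entry = self.store.get(path)
        if entry and entry[0] > time.time():
            # Cache hit: serve from local fast storage.
            return entry[1], "HIT"
        # Cache miss (or stale entry): fetch the latest copy from the
        # origin, store it at the edge, then serve it.
        content = self.fetch_from_origin(path)
        self.store[path] = (time.time() + self.ttl, content)
        return content, "MISS"
```

The first request for an asset is a MISS that reaches the origin; every subsequent request within the TTL is a HIT served entirely from the edge, which is where the origin-offload benefit comes from.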


How does the Bulk Storage CDN work?

When users request objects from your Bulk Storage containers via the CDN domain, their client performs a DNS lookup. Based on where the user is located geographically, the DNS server intelligently responds with the address of the CDN node closest to them, ensuring the user receives the lowest-latency connection possible. The local CDN node then serves the file directly from its fast local cache, and ensures that cache stays up to date with the objects in your container.
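The "closest node" decision can be illustrated with a toy geo-lookup. The PoP coordinates below are illustrative placeholders, and real geo-aware DNS resolves by the querying resolver's IP rather than exact user coordinates, but the idea is the same:

```python
from math import radians, sin, cos, asin, sqrt

# Illustrative PoP locations (lat, lon) -- not an actual node list.
POPS = {
    "vancouver": (49.28, -123.12),
    "toronto":   (43.65, -79.38),
    "halifax":   (44.65, -63.58),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_pop(user_location):
    """Mimic the geo-aware DNS answer: return the closest CDN node."""
    return min(POPS, key=lambda name: haversine_km(user_location, POPS[name]))
```

A user in Victoria, BC would be answered with the Vancouver node, while a user in Ottawa would be directed to Toronto.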


Benefits of using a CDN

Almost any site can benefit from a CDN, but the main reasons for implementing one are usually to offload bandwidth from the origin servers onto CDN servers and to reduce latency for geographically distributed users. Here are the top 5 most compelling reasons our clients tell us a CDN is critical to how they deploy their apps in the cloud.

  1. Origin Offload
    Primarily, a CDN dramatically reduces bandwidth usage on your servers by offloading static resources like images, videos, CSS, and JavaScript. A content delivery network is designed and optimized to serve static content, so customer requests for that content are routed to and served by edge CDN servers. This has the added benefit of reducing the load on the origin servers, because they receive far fewer GET requests.
  2. Faster Load times
    If your user base is geographically dispersed and a non-trivial portion of your traffic comes from distant geographic areas, a CDN can reduce latency by caching static assets on edge servers closer to your users. By reducing the distance between users and static content, you deliver content faster and enhance the user experience through faster page loads. For websites serving video content, for example, where high latency and slow loading times more directly impact user experience, these benefits are compounded.
  3. Manage traffic peaks and avoid downtime
    CDNs enable you to handle very high traffic spikes and burst traffic by load balancing requests across a large distributed network of edge servers. By offloading and caching static content on a distribution network, you can accommodate more concurrent users with your existing infrastructure. For sites that use a single origin server, large traffic spikes can often overwhelm the system, resulting in unplanned downtime. Shifting traffic to a highly available and redundant CDN infrastructure designed to handle varying levels of web traffic increases the availability of your assets and content.
  4. Reduce costs
    Because serving static content usually accounts for most of your bandwidth usage, you can reduce your monthly infrastructure costs by offloading these assets onto a content delivery network. In addition to reducing bandwidth costs, a CDN can reduce server costs by lightening the load on the origin servers, letting your existing infrastructure scale further. We offer usage-based billing at competitive prices @ $0.05 CAD per GB of data transfer, with no usage commitments or minimums. Traffic between Bulk Storage and the CDN nodes for your files is not charged; you only pay for public egress traffic.
  5. Increase security
    Another common use case for CDNs is DDoS attack mitigation. By leveraging a CDN, you reduce your application’s total attack surface. Our CDN service for Bulk Storage analyzes traffic across the continent for suspicious patterns, blocking malicious attack traffic while continuing to allow legitimate user traffic through.
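As a rough sketch of how the usage-based pricing in point 4 above works out, here is a minimal cost estimator; the traffic figures in the example are made up for illustration:

```python
RATE_PER_GB_CAD = 0.05  # public egress rate quoted above

def monthly_cdn_cost(public_egress_gb, bulk_to_cdn_gb=0.0):
    """Estimate a monthly CDN bill in CAD.

    Only public egress is billed; traffic between Bulk Storage and the
    CDN nodes (bulk_to_cdn_gb) is free, so it doesn't affect the total.
    """
    return round(public_egress_gb * RATE_PER_GB_CAD, 2)

# e.g. 2 TB (2048 GB) of public egress costs $102.40 CAD per month,
# regardless of how much internal Bulk-Storage-to-CDN traffic occurred.
```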


How to Enable CDN on your Bulk Storage Container

It’s a simple two-step process to put in place. All you need to do is:

  1. Within the “More” Menu select “CDN Settings”

  2. Then enable CDN functionality on your container within the Control panel


A content delivery network can be a fast and effective way to improve the scalability and availability of your sites. By caching static assets on a geographically distributed network of optimized servers, you can dramatically reduce page load times and end-user latency. In addition, CDNs let you dramatically cut bandwidth consumption by absorbing user requests and responding from the edge cache, reducing bandwidth and infrastructure costs.

With libraries that enable support for major frameworks like WordPress, Drupal, Django and Ruby on Rails, as well as additional features such as SSL/TLS, resource compression, and DoS mitigation, CDNs can be a powerful tool for optimizing high-traffic websites.

For technical documentation on the specifics of integrating the Bulk Storage CDN into your project, you can read our CDN documentation or if you run into an issue, you can get in touch with an expert by emailing

Lower Bulk Storage Pricing!

When it comes to Object Storage we know that consumers are looking to address three main factors.

As a reminder, the following features and benefits are included with all Bulk Storage accounts:

  • Unlimited number of containers and objects
  • Any file type can be uploaded
  • Supports unlimited amount of storage, billed by usage with large discounts by volume
  • Container Specific API Keys
  • DDoS Mitigation & CDN Support with intelligent routing
  • Availability rate of 99.99+%
  • High file durability
  • OpenStack-compatible API
  • Expert Support team to share knowledge for any integration

Object Storage
Our “Bulk Storage” product, comparable to “S3” but built on “OpenStack Swift”, is a distributed storage system for static data such as virtual machine images, image storage, email storage, web assets, backups, and archives. Object Storage differs from traditional storage in that it’s not centralized, and as a result it provides greater scalability, redundancy, and durability.

What is the typical Use Case for Object Storage?
Object Storage was designed for ad hoc use. It’s ideal for scalable storage needs; a common example is storing images that are rendered within an application framework. Because the platform is API-driven, it’s easily integrated into your application for data management. Your data is never affected by a server or drive failure, because the Bulk Storage platform (which leverages OpenStack Swift) keeps no fewer than 3 copies of your content and re-replicates it from other active nodes to new locations in the cluster.
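The 3-way replication described above is driven by Swift's "ring," which deterministically maps each object to a set of storage nodes. The sketch below is a greatly simplified, hypothetical version of that idea (Swift's real ring uses partitions, zones, and weights); the node names are illustrative:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]
REPLICAS = 3  # Swift-style: no fewer than 3 copies of every object

def placement(container, obj, nodes=NODES, replicas=REPLICAS):
    """Pick `replicas` distinct nodes for an object, deterministically.

    Hash the object's full path, use the hash to pick a starting point
    on the ring, then take the next `replicas` nodes. (Greatly
    simplified compared to Swift's actual ring.)
    """
    digest = hashlib.md5(f"{container}/{obj}".encode()).hexdigest()
    start = int(digest, 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]
```

Because placement is a pure function of the object's name, any node can compute where the copies live without consulting a central index, which is what makes the system decentralized and lets it rebuild lost copies after a failure.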

Our approach has been, and always will be, 100% Canadian. That means all of our infrastructure (for all 3 nodes in Vancouver, Toronto, & Halifax), all of our staff, and our entire ownership group are, without exception, entirely Canadian.

3 Models for Consuming Kubernetes in the Cloud

A Brief History on the Growth Explosion of Containers


A couple of years ago, Linux containers began to gain popularity among devs and ops folks alike. It was a win-win scenario, as the benefits of containers to both dev and ops are clear. As adoption of containers in production workloads exceeded many people’s expectations, the world needed a way to manage container lifecycles at scale.

Enter Kubernetes. While it is not the only proven open source container manager out there (see Docker Swarm & Mesos), it certainly has the majority share of voice in the community. Originally a Google project that grew out of their internal Borg system, it has since been open sourced and embraced by companies like Microsoft and Red Hat (OpenShift is powered by Kubernetes). Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Read more

3 Application Architectures for Successful Disaster Recovery in the Cloud

Over the past few years, we’ve written several blog posts about various disaster recovery methods for your applications running atop Cloud-A’s infrastructure. If you take a look at these articles, which highlight software and tools to help you achieve your disaster recovery requirements, you’ll notice that there is more than one way to skin this cat.

The method you use to backup and recover your applications and data should vary depending on the technical requirements of a given application and/or data store and your organization’s tolerance for downtime for that app or data store.

Let’s dig into three conceptual models for disaster recovery on Cloud-A. We’ve ranked these methods on a 1-5 scale by data resiliency, time to recover, and cost.

Read more

Automatic Server Backups to Cloud-A Bulk Storage

If you’re looking for a cost effective, Canadian backup solution for your Windows desktops and servers, look no further! Our Bulk Storage service is an ultra-reliable, low-cost, and 100% Canadian hosted object storage service. When teamed with CloudBerry Labs’ Backup software, it allows administrators to easily backup their systems.

CloudBerry offers two types of backup products: CloudBerry Backup and CloudBerry Managed Backup. Today we’re going to use their basic Backup product; however, many of the settings we cover will translate to the managed product as well.

Install CloudBerry Backup

To get started, visit and download the Backup software onto the machine you wish to back up. Once the download has completed, you may be asked to install Microsoft Visual C++; go ahead and do so. Once that has been downloaded and installed, you’ll need to re-run the CloudBerry Backup setup.

Once setup is complete, click on the Finish button and CloudBerry Backup will launch. Initially, you’ll be asked if you would like to use the free trial or activate a paid license. If you have paid for CloudBerry already, go ahead and input your license information. Otherwise, you can continue with the free trial for now. Once you’ve completed that step, the CloudBerry app will open and you’ll be able to start setting everything up.

Read more

Running an OpenVPN Server on Cloud-A

The cloud is great for hosting internet-facing applications, but what if you require something that can only be accessed by a set list of users? Sure, you can maintain security groups with those users’ IP addresses, or use login pages to block access to unauthorized users. An alternative and more secure solution is the use of a Virtual Private Network (VPN) between a user’s local network and your Cloud-A private network. There are several VPN solutions available, both free and commercial. We’re going to show you how to use one of the most popular free VPNs available, OpenVPN.

Read more

2-Factor Authentication for Cloud-A

With the growth and adoption of Cloud-A’s infrastructure services around the world, with thousands of active projects and twice that number of active users, our responsibility to provide a secure entry point into the services that store your application’s private data and help run your business day-to-day is greater than ever. With online threats growing, more advanced phishing techniques, and identity theft, ensuring secure access to any service is difficult. No matter how long or complex your password is, your account is at risk of being breached if that password were to somehow fall into the wrong hands.

To this end, we are very pleased to announce the general availability of two-factor authentication for Cloud-A accounts. Our development team had been working on building an OTP solution into Keystone, our authentication service, and released it into beta late last year. After months of end-user testing, and security auditing by third parties, we are enabling the feature for all users.

Two-factor authentication, or 2FA, secures your account with a second “factor” rather than just a password. Because passwords can be read or stolen, and are the single piece of information a malicious person needs to access your account, a second factor, called a One-Time Password (OTP), is used and linked to a physical device on your person, so you know that the person logging in is truly you. This added security will thwart would-be attackers even if they know your account password.
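The OTP scheme described here is typically TOTP (RFC 6238): a short code derived from an HMAC of the shared secret and the current 30-second time window. The sketch below is a minimal illustration of that standard, not our actual Keystone implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226: HMAC-SHA1 of an 8-byte counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(key, for_time=None, step=30, digits=6):
    """RFC 6238: HOTP computed over the current 30-second time window."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(key, t // step, digits)
```

The RFC 6238 test vectors confirm the math: with the ASCII secret `12345678901234567890` and Unix time 59, the 8-digit code is 94287082. Because both your device and the server derive the code from the same secret and clock, the code proves possession of the device without the secret ever crossing the network.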

Available Today


2FA for Cloud-A can be enabled from within your Cloud-A Account Settings in the client portal. This will generate your private key and show you your QR code and recovery codes, as well as provide a quick OTP test mechanism to confirm your settings. Once enabled, you can use our 2FA with any Google Authenticator-compatible mobile application. We highly recommend FreeOTP for managing your OTP credentials: it is free, secure, standards-compliant, and open source. The app is available for download on Google Play for Android, as well as the App Store for iOS devices.
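For context, the QR code shown at setup typically just encodes an `otpauth://` provisioning URI that apps like FreeOTP scan. A sketch of building one follows; the secret, account, and issuer values in the example are placeholders, not real credentials:

```python
from urllib.parse import quote

def provisioning_uri(secret_b32, account, issuer):
    """Build the otpauth:// URI that a setup QR code encodes.

    secret_b32 is the base32-encoded shared secret; digits and period
    match the common TOTP defaults (6 digits, 30-second window).
    """
    label = f"{quote(issuer)}:{quote(account)}"
    return (f"otpauth://totp/{label}"
            f"?secret={secret_b32}&issuer={quote(issuer)}"
            f"&digits=6&period=30")
```

Scanning a QR code of this URI is all an authenticator app needs to start generating matching one-time codes.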

As previously noted, Cloud-A’s 2FA architecture is built into Keystone, meaning that two-factor authentication is available at both the web dashboard level and at the API layer. The result is a completely new architecture, and a new way to approach OpenStack authentication. We hope that this shows not only our commitment to on-going product development for our customers, but our commitment to the OpenStack project as a whole.

Users will not be forced to enable OTP on their accounts; however, we highly recommend setting it up. You can read more about the configuration process in our documentation portal. Taking a few minutes to enable this feature on your account could mean the difference between an adversary gaining access to your cloud infrastructure and stopping them right at the door.

If you have any questions or concerns about configuring 2FA on your account, we’d love to hear from you! You can reach our support team quickly and easily by emailing

OpenShift on CentOS 7 – Quick Installation

This is a quick guide to installing OpenShift Origin on a Cloud-A CentOS 7 instance. For the purposes of this tutorial, we are going to use a single instance to perform an all-in-one installation. More advanced, clustered setup instructions can be found in the OpenShift Origin documentation.

What is OpenShift?

OpenShift is a Platform-as-a-Service (PaaS) developed by Red Hat. PaaS augments your existing Cloud-A Infrastructure-as-a-Service (IaaS) by providing automated tools that give developers an environment to host their applications on. In OpenShift’s case, this is provided by leveraging Docker and Kubernetes, giving you the ability to have custom, reusable application images. OpenShift also allows you to have highly available, self-healing, and auto-scaling applications without any of the manual setup that would typically need to be done in a traditional environment.


This guide assumes that you have set up a CentOS 7 instance on Cloud-A and associated a public IP address with it. In our tutorial, we are using a 4GB General Purpose instance.

Once you have your instance up and running, Docker will need to be installed along with a few requirements. Feel free to replace vim with your favourite text editor.

$ sudo yum install docker wget vim

We will then need to tell Docker to trust the registry that we are going to be using for OpenShift images. In the /etc/sysconfig/docker file we will need to change the following line:

# INSECURE_REGISTRY='--insecure-registry'

Uncomment it and add the registry subnet, so that it reads as follows ( is OpenShift Origin’s default service network; adjust it if yours differs):

INSECURE_REGISTRY='--insecure-registry'

Once complete, save your changes and restart the docker service:

$ sudo systemctl restart docker

OpenShift will also require that our instance’s hostname can be resolved. Adding an entry to /etc/hosts will take care of this for us. If you don’t know your instance’s hostname, you can simply run the hostname command.

Old /etc/hosts:   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

New /etc/hosts:   <HOSTNAME> localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Read more

Cloud-A Launches Vancouver OpenStack Cloud Node


For immediate release

Cloud-A launches Vancouver, BC Node

Halifax, NS – July 21st, 2016 – Cloud A Computing Inc., a Canadian Public Cloud provider, announced today the launch of its first Western Canada cloud infrastructure node in Vancouver, British Columbia. The new node allows Cloud-A to better service the entire country with lower latency connectivity to its western customers, as well as satisfying data residency requirements for companies in BC.

The new node features all key Infrastructure-as-a-Service (“IaaS”) components, including compute, network, and storage. Similar to their existing Halifax, NS node, the BC node is built on Cloud-A’s OpenStack-based cloud platform. This platform gives customers the ability to spin up cloud infrastructure in a matter of minutes from both an easy-to-use web-based GUI and a powerful set of APIs.

“From day one, our mission has been to be Canada’s cloud IaaS provider. We are now the first provider to truly launch a full multi-node, OpenStack cloud in Canada. This opens new doors for Canadian businesses who are looking for geographically dispersed data without the headaches of using legacy IT infrastructure,” says Jacob Godin, CEO of Cloud-A.

The Vancouver, BC node was driven by the rapidly growing healthcare industry in BC, and was developed through a partnership with local healthcare-focused service company, Intogrey. Cloud-A is also extremely excited to help application development shops, IT management companies, and IT departments by providing a robust platform to use as a foundation for their products and services.


About Cloud-A

Cloud-A is the leading provider of public cloud infrastructure based in Canada. Their products automate & simplify the installation and management of the hardware and software that provide the infrastructure for large-scale environments with hundreds or thousands of servers supporting high-performance compute applications. For more information visit



Brandon Kolybaba, CMO