Be it capital-intensive in-house solutions powered by VMware and SAN technology or reselling contracted colocated infrastructure, managed service providers have been offering some version of the “cloud” for the past decade or so.

Meanwhile, in a not-so-different industry, application developers have been leveraging “new” cloud technology that offers self-serve, on-demand infrastructure, requires no upfront equipment cost, lets you pay only for what you use, and imposes no vendor lock-in. This is known as the public cloud, powered by OpenStack.

Many of the benefits of an OpenStack public cloud that software development companies have enjoyed since the technology’s inception can also be enjoyed by managed service providers. The disconnect? Many of the features that create these benefits have historically mattered only to developers. The reality is that these features can be extremely valuable to an MSP as well.


Open Source

OpenStack is an open source project backed by a community of thousands of developers who contribute to its ongoing growth. While open source isn’t a concept typically discussed in the IT channel, it ensures rapid, ongoing innovation, which in turn allows an MSP to introduce new functionality and features to their own clients.

In a highly competitive industry like the IT channel, differentiating your service from your competitors can make the difference between being competitive and being the leader. Offering your client base innovative, leading edge products and services creates value for your clients and a competitive edge for your business.

API Driven

APIs are what developers use to automate connecting one application to another. Developers use APIs to link their products’ functionality to existing products so that their end users don’t have to do it manually. Why would APIs matter to an MSP? Many of the manual, labour-intensive processes MSPs typically perform to manage their clients’ infrastructure can be automated with an API-driven public cloud.

More and more public-cloud-friendly applications are coming to market that integrate directly with public clouds through their APIs. Take CloudBerry Lab (www.cloudberrylab.com) as one example: CloudBerry makes products that synchronize and/or back up local systems to OpenStack public clouds, among others. This functionality is driven by APIs.

API-driven public clouds and the abundance of available third-party applications are enabling MSPs to expand their product and service portfolios, automate laborious processes, and create more value for their clients.

Utility Billing

Developers enjoy the public cloud’s bill-by-the-minute pricing model for building products, test environments, and other workflows where instances aren’t required to be powered on 24/7.

Utility billing sets MSPs free from hardware staging and lets them avoid tying up expensive production equipment for proof-of-concept testing and client demonstrations. The economics of the utility model also spare MSPs from long-term colocation or dedicated-server contracts, allowing them to add infrastructure as they add clients and scale it back when it isn’t needed.
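To make the utility economics concrete, here is a rough back-of-the-envelope sketch. The rates below are purely hypothetical placeholders for illustration, not actual Cloud-A pricing:

```python
# All rates are hypothetical, for illustration only -- check real pricing.
HOURLY_RATE = 0.04          # assumed $/hour for a small demo instance
MONTHLY_DEDICATED = 60.00   # assumed $/month for a dedicated-server contract

def utility_cost(hours_used, hourly_rate=HOURLY_RATE):
    """Pay only for the hours an instance is actually powered on."""
    return round(hours_used * hourly_rate, 2)

# A proof-of-concept environment that runs 40 hours in a month:
print(utility_cost(40))                                   # 1.6
print(round(MONTHLY_DEDICATED - utility_cost(40), 2))     # 58.4
```

The point of the sketch is the shape of the math, not the numbers: a demo or staging workload that only runs during business hours costs a small fraction of a flat monthly contract.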

Call to Action

Cloud technology has changed, creating an excellent opportunity for MSPs to revolutionize their service delivery model with modern technology, streamlined processes, reduced service labour costs, and more attractive economics. Developers have been realizing these benefits for years; now is the time for MSPs to do the same and gain a competitive edge in their markets.

Installing ownCloud Desktop Client

If you have been following our blog, you will know that we have recently published two posts on ownCloud. The first, “Deploying ownCloud on Cloud-A,” was a tutorial on installing and configuring ownCloud on a Windows 2008 R2 instance on Cloud-A, and the second, “ownCloud: Infinite Expandability with Cloud-A’s Bulk Storage,” covered expanding your ownCloud deployment with our Bulk Storage powered by Swift. Today we are going to show you how to install the ownCloud desktop client on OS X and Windows Server 2008 R2 (the instructions are the same for Windows 7).


Download and Install Desktop Client

You will need to download the appropriate ownCloud desktop client from https://owncloud.org/install/. Once your download has completed, run the installer for the ownCloud desktop client.

Authenticate to your ownCloud Server

Upon completion of the installation, you will be prompted to connect to your ownCloud server. Enter your server’s correct IP address or hostname.


Next, you will need to authenticate with your ownCloud credentials.


Configure Settings

At this point you can choose your folder syncing preferences: sync everything from your ownCloud server, or just specific files and folders.


Much like Dropbox, ownCloud creates a cloud-syncing local folder on your desktop. On OS X, an ownCloud folder shortcut will appear in the top menu bar as well as under Favorites in Finder. In Windows, an ownCloud folder shortcut will appear in the system tray as well as under Favorites in My Computer.

Next Steps

At this point in our ownCloud blog series you have learned how to create an ownCloud server on a Cloud-A Windows instance, expand the storage space with Bulk Storage and configure desktop clients. To take it one step further and enable your users for mobility you can download and configure mobile apps for iOS and Android.



In the spirit of security, and in light of our recent feature release of Bulk Storage Container API Keys, we have forked and released a new version of Duplicity, the most popular Bulk Storage backup utility on Cloud-A.

Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because Duplicity uses librsync, the incremental archives are space-efficient and record only the parts of files that have changed since the last backup. Because Duplicity uses GnuPG to encrypt and/or sign these archives, they are safe from spying and/or modification by the server.

Our version features a new clouda:// storage backend that supports the use of our Bulk Storage Container Keys, so you don’t have to embed your credentials with your deployed application doing the backups, and can securely use the generated container full key to do your backups.


You can download the latest stable version from the GitHub repository archive page, install the necessary dependencies, then run the installer.

pip install python-swiftclient lockfile
wget https://github.com/CloudBrewery/duplicity-swiftkeys/archive/373.tar.gz
tar -zxvf 373.tar.gz
cd duplicity-swiftkeys-373/
python setup.py install


Where historically you would have had to use the SWIFT_ environment variables to store all of your authentication information, the new backend requires only two variables to run securely.

export CLOUDA_STORAGE_URL="https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_<tenant_id>"
duplicity [--options] <backup_path> clouda://<container_name>

You can get your tenant-specific Bulk Storage URL from the Dashboard under API Access (listed as Object Store), and generate your Full Key from the container list screen under Manage Access.

The new backend supports all advanced Duplicity functionality, including full and incremental backups and GnuPG-encrypted backups. We can’t wait to hear your feedback on this project, and to hear about other OpenStack Swift third-party tools you currently use that we can help offer a Cloud-A secure version of. As always, you can reach our support team at support@clouda.ca, or on Twitter at @CDNCloudA.


We have been receiving a lot of questions from prospective partners about how we comply with the rules around private information as it pertains to healthcare in Canada. The simple answer is that we provide secure and redundant infrastructure to our healthcare partners and work with them to recommend best practices and procedures for securing their own virtual instances that reside on our public cloud.

While that explanation might seem broad for such an important topic, the laws pertaining to personal information protection in Canada are complicated, technically nonspecific, and just plain hard to grasp.

We have created this “Guide to Protecting Private Healthcare Information in the Canadian Public Cloud” to help inform our partners and clients about how they can use our public cloud and be in compliance with Canadian privacy laws.

Download Cloud-A’s Guide to Protecting Private Healthcare Information in the Canadian Public Cloud


Encrypted Volumes: Linux Edition

Having your data encrypted at rest is crucial for a secure application, especially when you report to a governing body for IT security standards in the healthcare or financial markets. Our VPC networking encrypts all private network traffic per customer network, so your data is encrypted over the wire between VMs. Encrypted volumes add an extra layer of security, ensuring that even a somehow compromised volume is rendered unreadable.

We’re going to review encrypting your Cloud-A SSD volume on both Red Hat-based and Debian-based Linux distributions using the dm-crypt package, the gold standard for modern Linux disk encryption. dm-crypt is a kernel-level transparent disk encryption system that handles only block-level encryption, never interpreting the data itself. This lets dm-crypt encrypt any block device, from root disks and attached volumes to swap space.

Create your SSD Volume

Head to your Cloud-A dashboard and create a new volume under “Servers -> Volumes”. We’ll create a 120GB volume named “Encrypted Disk 1”.


Now that we have the drive in place, we’ll attach it to a server to configure the disk encryption.


Linux Setup: Ubuntu 14.04 & Fedora 20

Our Ubuntu and Fedora images ship with the necessary encryption tools out of the box. We’re going to use the cryptsetup package to create the encrypted block device, set a password, and make it available to mount via the device mapper.

sudo cryptsetup -y luksFormat /dev/vdb

Cryptsetup will warn you that this will overwrite the block device irrevocably. Type “YES” in all caps to confirm. You’ll next be asked to provide a password for decrypting your data; make sure you choose a strong password and store it somewhere safe.

If you lose your password, you will in effect have lost all of your data.

Now we can use cryptsetup luksOpen to open the encrypted disk, and have the device mapper make it available. When you run this command, you’ll need to provide the password you entered in the last step.

sudo cryptsetup luksOpen /dev/vdb encrypted_vol

Next, we’re able to interact with the disk as usual at the /dev/mapper/encrypted_vol location created in the last step. Since the encryption is done transparently, you don’t need to do anything special from here on out to keep your data encrypted and safe. We’ll create a simple journaled Ext4 filesystem and mount it to /data for the application server to use.

sudo mkfs.ext4 -j /dev/mapper/encrypted_vol
sudo mkdir /data
sudo mount /dev/mapper/encrypted_vol /data

Your disk is ready. You can check that it’s mounted and how much space is available using df -h.

Filesystem                 Size  Used Avail Use% Mounted on
/dev/vda1                   60G   22G   38G  36% /
/dev/mapper/encrypted_vol  118G   18M  112G   1% /data

You can now configure your database data directory, or application user data to point to your new encrypted /data directory to store your sensitive data.
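If you want the encrypted volume to come back automatically after a reboot, you can register it in /etc/crypttab and /etc/fstab. The lines below are a minimal sketch using the device names from this walkthrough; with none as the key field, you’ll be prompted for the LUKS passphrase at boot:

```
# /etc/crypttab  (mapping name, underlying device, key file, options)
encrypted_vol  /dev/vdb  none  luks

# /etc/fstab  (mount the mapped device; nofail avoids hanging boot if absent)
/dev/mapper/encrypted_vol  /data  ext4  defaults,nofail  0  2
```

Prompting at boot keeps the passphrase off the disk; alternatively you could point the key field at a key file, with the obvious trade-off that anyone with the file can unlock the volume.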

Next Steps

When you are done with the volume, you can close it by unmounting and using luksClose to remove the mapping and deauthenticate the mount point so it cannot be accessed until re-authenticating.

sudo umount /data
sudo cryptsetup luksClose encrypted_vol

To re-authenticate in the future on this or any other VM, simply use luksOpen with your decryption password and mount to your desired location.

sudo cryptsetup luksOpen /dev/vdb encrypted_vol
sudo mount /dev/mapper/encrypted_vol /data

This should get you on your way to a more secure installation of your application, with your application data stored securely on a high-performance SSD volume. At any time, you can use the Volume Live Snapshot function in the dashboard to capture a snapshot of this volume, maintaining encrypted backups that you can restore to a new volume whenever you need.

Encrypted disks are not the only security measure you should take into account when deploying your infrastructure, but they are a crucial step in the right direction for every security-conscious deployment scenario.


Today’s post is not to be confused with our post from back in May titled “Backups With Snapshots”; in fact, think of it as an extension. Snapshots provide a point-in-time image of your server, giving you system redundancy: you can easily and quickly spin up a new instance from any of your saved snapshots. With all of that said, there are a few things to consider when using server snapshots as your sole backup process.


The server being snapshotted is paused temporarily during the snapshot process. While this pause can be minimal, it might not be ideal for a server providing mission-critical services. Because a snapshot of a server instance includes the whole system (operating system and data), the process can take between 1 and 10 minutes to complete, depending on the total consumed disk space, before your server resumes.


There is a cost associated with storing server snapshots: $0.15 per GB per month, billed for as long as the snapshot exists. You are only charged for the compressed size of your snapshot, not the provisioned disk size.
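Because billing is based on the compressed size, the monthly cost is easy to estimate. A quick sketch; only the $0.15/GB/month rate comes from the pricing above, and the compressed size used here is an illustrative guess:

```python
SNAPSHOT_RATE = 0.15  # $ per GB per month, from the pricing above

def snapshot_monthly_cost(compressed_gb, rate=SNAPSHOT_RATE):
    """Billed on the compressed snapshot size, not the provisioned disk."""
    return round(compressed_gb * rate, 2)

# A 60 GB server with ~22 GB in use might compress to roughly 10 GB
# (illustrative figure only):
print(snapshot_monthly_cost(10))  # 1.5 dollars per month
```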

Solution: Volume Snapshots

If the downtime and cost of server snapshots are not ideal for your application, the answer might just be volumes and volume snapshots.

With this method we recommend that users keep their operating system on the original disk space that is included with the instance and use volumes to store their data. This allows you to take snapshots of your volume for backups of critical data, rather than the entire instance, and avoid the downtime associated with server snapshots.

In the case of a server instance requiring restoration, your recovery is as easy as deleting the server instance, launching a new server instance, and mounting the last successful volume snapshot to it.

Another effective use case is when data is accidentally deleted by a user. You have the ability to mount a previous volume snapshot to a temporary server instance, recover the deleted data and migrate it back to the production server.

If you want to go a step further, you can continue to store a single snapshot of your standard image, so that in the case of a server issue you can launch a new instance based on your image with all of your system preferences and server roles intact.

The nicest thing about volume snapshots? They are free! Cloud-A does not charge for the storage of any volume snapshots, making them a cost effective backup solution.

Snapshot Schedule

Since there is no cost to storing volume snapshots with Cloud-A, you can store as many old volume snapshots as you like. As the snapshots pile up, labeling them properly becomes increasingly important so that you know which snapshot is which.

How often you snapshot your volumes depends on your organization’s tolerance for data loss and/or downtime. An organization with zero tolerance for downtime might require a daily snapshot of a server volume, providing several point-in-time copies of that volume. A less mission-critical server volume, like a test environment, or an organization with a greater tolerance for downtime, may only require weekly snapshots.
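One lightweight way to keep labels consistent, and to decide which old snapshots to discard, is sketched below. The naming scheme and retention count are our own suggestions for illustration, not a Cloud-A convention:

```python
from datetime import date, timedelta

def snapshot_label(volume_name, day):
    # ISO dates sort chronologically, so sorted labels double as a timeline.
    return "{0}-{1}".format(volume_name, day.isoformat())

def prune(labels, keep=7):
    # Keep only the `keep` most recent labels (they sort chronologically).
    return sorted(labels)[-keep:]

# Ten consecutive daily snapshots of a volume called "data-vol":
days = [date(2014, 11, 1) + timedelta(days=i) for i in range(10)]
labels = [snapshot_label("data-vol", d) for d in days]
print(prune(labels)[0])  # data-vol-2014-11-04 (oldest snapshot retained)
```

With a policy like this, a daily snapshot job stays bounded in cost and clutter: generate today’s label, snapshot, then delete anything `prune` drops.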

Next Steps

We have partners and customers using a number of different backup methods with our infrastructure today, which really speaks to the flexibility of Cloud-A’s Infrastructure-as-a-Service offering. At the end of the day, the best backup processes are the ones that are recoverable, and volume snapshots give you a cost-effective, recoverable backup solution. We urge you to test it out for yourself!

ownCloud: Infinite Expandability with Cloud-A’s Bulk Storage


We previously published a blog post on creating an ownCloud server on Cloud-A’s public cloud, but we would like to build upon that and show just how expandable and agile a Cloud-A hosted ownCloud deployment can be by introducing bulk storage.

By leveraging our Bulk Storage powered by Swift, users can expand the size of their ownCloud deployment very quickly and inexpensively to facilitate growth. Unlike a hardware deployment, where you would purchase drive space up front to account for future growth, a Cloud-A deployment will allow an organization to scale their storage as needed on a pay-as-you-go utility model.

Getting Started

We will begin with the assumption that you already have an ownCloud deployment running on Cloud-A with administrator access to the program.

Create an Object Storage Container

From your Cloud-A dashboard, select “Storage” and then “Containers.” Select “New Container,” and name the new container.

Configure External Storage in ownCloud

ownCloud comes prepackaged with external storage support, but the functionality must be enabled in the “apps” dashboard of your ownCloud instance. In the “apps” dashboard, select “External storage support” in the left-hand sidebar and enable it.

This will populate an External Storage section in your ownCloud admin panel. Select “OpenStack Object Storage” from the “External Storage” dropdown menu and enter the following credentials:

Folder name: A name for your storage mount point
User: Your Cloud-A username (your email address)
Bucket: The name of your Cloud-A container
Region: “regionOne”
Key: Your Cloud-A username
Tenant: Your email address
Password: Your Cloud-A password
Service_name: “swift”
URL: https://keystone.ca-ns-1.clouda.ca:8443/v2.0
Timeout: Timeout of HTTP requests in seconds (optional)

If you have correctly input the information above and ownCloud accepts that information, a green dot will appear to the left of the folder name.

Validate External Storage

To further validate the access to the new external storage, go back to the main ownCloud screen by clicking the ownCloud logo in the top left corner, and select external storage. You should see your newly created ownCloud folder which points to your Cloud-A object storage powered by Swift.

Next Steps

Adding additional external object storage to your Cloud-A hosted ownCloud instance sets you free from the traditional limitations of hardware, allowing you to scale on demand. This is an ideal solution for any growing company looking to have control of their own data, but also have that data stored securely in Canada.

Stay tuned for the next post in our ownCloud series.


Configuring a VPN server using Cloud-Init

While security groups are a good measure for locking down access to your network through firewall rules, in many cases it is necessary to configure a VPN between your Cloud-A Virtual Private Network and your office / individual computer. This can reduce the number of internet accessible resources and encrypt all of your traffic between sites.

We’re going to launch an Ubuntu 14.04 server and, using Cloud-Init, pre-configure it with the required packages to run your own VPN server. We’ll set up your VPC Firewall in a way to allow VPN traffic into your private network, and establish a connection from your VPN client.


Cloud-Init enables you to leverage OpenStack’s metadata service to send instructions to your instance that are executed when it launches. In this post, we’re going to use this functionality to have Cloud-Init install and configure our VPN for us on first boot. Here is what the final instruction set will look like:


#cloud-config
packages:
  - pptpd

write_files:
  - content: |
        myvpnuser pptpd mypassword *
    path: /etc/ppp/chap-secrets
  - content: |
        net.ipv4.ip_forward = 1
    path: /etc/sysctl.d/98-ip-forward.conf
  - content: |
        option /etc/ppp/pptpd-options
        localip 192.168.183.1
        remoteip 192.168.183.100-199
    path: /etc/pptpd.conf

runcmd:
  - [ 'iptables', '-t', 'nat', '-A', 'POSTROUTING', '-o', 'eth0', '-j', 'MASQUERADE' ]
  - [ 'iptables-save' ]
  - [ 'sysctl', '-p', '/etc/sysctl.d/98-ip-forward.conf' ]


Configuration Overview

The Cloud-Init configuration is driven by the cloud-config YAML file, which is identified by “#cloud-config” as its first line. There is a large set of examples in the Cloud-Init documentation. We’re going to walk through the sections one at a time, explaining what each does and why it’s required to automate the deployment of your VPN server.



Packages

The first section, packages, tells Cloud-Init what to pre-install for us. In this case, we’re going to use PPTP for our VPN connection, which the pptpd package handles.

Write Files

Next, in the write_files section, we provide the configuration files required for our VPN to work. You should change myvpnuser and mypassword to the login credentials you would like your VPN client(s) to use.

Also, we are creating a virtual network for the VPN service via the localip and remoteip options in /etc/pptpd.conf. Make sure these values don’t overlap with your office/home network; using obscure values like 192.168.183.x instead of 192.168.0.x is a good idea.


Run Commands

Finally, runcmd is a list of commands that Cloud-Init runs late in the boot process. For our VPN, we need a simple iptables NAT rule, and we need to enable IP forwarding in the Linux kernel, as the server will be forwarding your traffic to your Cloud-A network.

Launching our Instance

The launch process is identical to launching a regular instance, with one final step at the end. So, we’re going to run through it quickly.

In our example, we’re going to create a new Ubuntu 14.04 server. Before launching, we’re going to go to the Cloud-Init tab and paste the instruction set that you’ve customized with your own values into the textbox. It will perform validation to ensure your formatting is still valid YAML when you create your VPN server.


At this point, we’re good to launch our instance! Once it has started, we need to allow PPTP traffic to pass through to the instance.  There is a single TCP port (1723) that needs to be opened. In this example, we’re going to create a separate security group called ‘PPTP‘.


Next up, we just need to add the security group to our instance and associate a Public IP address. Check the docs for more info on creating and assigning Security Groups.

Connecting to your VPN

Configuring the client is relatively simple, especially if we wish to route all traffic through our VPN connection. However, we’re going to configure our client in “split-tunnel mode”: only traffic local to the VPN network (i.e., your Cloud-A instances) is routed over the PPTP connection, while all other traffic routes as usual.


Open your Network Preferences and click on the “+” button under the list of network connections. This will bring up a dialog box which allows you to create a new network connection. Here, we’re going to want to select PPTP VPN and give our new connection a name.

Now simply fill in your Public IP address and username, apply your changes, click ‘Connect’, and enter your password. You’re now connected to your VPN server!

Next, we’re going to configure split tunneling. Apple doesn’t provide a UI for this, so we’ll open Terminal and run: sudo route add -net <your_cloud_a_subnet> -interface ppp0, where <your_cloud_a_subnet> is a placeholder for your Cloud-A VPC’s private network range.

As you can see from your terminal, we’re adding a route to your Cloud-A network via the PPTP connection (ppp0). The traceroute tests routing to the Cloud-A virtual router. NOTE: On OS X, the sudo route add... command must be run every time you restart your machine; otherwise you will not have split routing into your remote Cloud-A network.

Next Steps

If you want to run an office-to-cloud VPN, you’ll need to configure static PPTP on your internal network’s router. That way you’re always connected to your Cloud-A VMs while you are in the office, and they can act as if they’re on your local network. If you are running a router with DD-WRT installed, there are some instructions here to get you started.

And there we have it! You have now securely connected into your virtual Cloud-A network. If you have any questions, or require assistance with anything VPN, drop us a message at support@clouda.ca.


We’re excited to announce the public availability of container-specific API keys for our Bulk Storage service. Our team developed and deployed this feature, which has been reviewed by OpenStack Swift core developers, to meet the specific application requirements of our customers. It enables you to develop and deploy more secure applications on Cloud-A infrastructure by shipping with secure, revocable authentication keys.

Technical Overview

This feature allows developers who leverage our Bulk Storage APIs to deploy more secure applications, using access keys specific to the container(s) each application needs. There is no longer a need to embed your Cloud-A authentication information when you deploy. Instead, you can generate secure keys on a per-container basis through the Dashboard, with either read-only or full read & write access, for example when you need to let a third party perform read operations on any object in a container.

Additionally, we are contributing the Swift middleware that we’ve developed back to the community! The source is available on our Bitbucket account, along with install instructions for enabling the middleware in your swift-proxy server.
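Swift middleware is enabled by adding a filter to the proxy server’s pipeline. The fragment below is only a placeholder sketch of what that looks like; the actual filter name, egg entry point, and options are in the repository’s install instructions:

```ini
# /etc/swift/proxy-server.conf -- illustrative sketch only; the filter and
# egg names below are placeholders, not the middleware's real identifiers.
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth container-keys proxy-server

[filter:container-keys]
use = egg:swift_container_keys#container_keys
```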

Generating Secure Keys

We have extended our Dashboard to let you generate, set, and revoke secure keys for all of your Bulk Storage containers from the main container screen. In the list of actions, you’ll now see a “Manage Access” action, which displays the currently set API keys for the container and an option to regenerate them. For additional security, containers have no access keys set by default; you’ll need to generate initial keys to enable this feature. For your convenience, dashboard-generated keys are prefixed with the permission and the first four letters of your container name.
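Based on the keys shown later in this post (e.g. read-TEST-dff8555a-...), the naming scheme appears to be the permission, the first four letters of the container uppercased, and a UUID. Here is an illustrative sketch of that scheme; this is our own approximation, not the actual dashboard code:

```python
import uuid

def make_container_key(permission, container_name):
    # Illustrative only: mimics the apparent dashboard naming scheme
    # "<permission>-<FIRST FOUR LETTERS OF CONTAINER>-<uuid4>".
    assert permission in ("read", "full")
    prefix = container_name[:4].upper()
    return "{0}-{1}-{2}".format(permission, prefix, uuid.uuid4())

key = make_container_key("read", "testcontainer")
print(key)  # e.g. read-TEST-<random uuid4>
```

The prefix makes a key self-describing at a glance, which matters when you are juggling read-only and full keys across many containers.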

The regeneration function revokes your current container credentials and generates new secure keys. This helps squelch the threat of any leaked credentials by immediately rejecting all requests that use the old keys, and it means you never have to worry about a leaked account password, as you would under OpenStack Swift’s default access model.


Now that we have our keys, we can verify that they work using simple curl commands. We will list the files in a container and download a test file, passing our new API key in the request headers. Note: it is important to use the Bulk Storage HTTPS endpoint if you are on an external network, to ensure your headers cannot be read by a third party.

List a container

$ curl -v -H 'X-Container-Meta-Read-Key:read-TEST-dff8555a-8c4d-4541-a629-3b6e7029803a' https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)/test
< HTTP/1.1 200 OK
< X-Container-Object-Count: 1

Download a file

Downloading the index file is just as quick.

$ curl -v -H 'X-Container-Meta-Read-Key:read-TEST-dff8555a-8c4d-4541-a629-3b6e7029803a' https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)/test/index.html
< HTTP/1.1 200 OK
< Content-Length: 44
< Content-Type: text/html

Key revocation

And when we revoke the key in the dashboard, attempting the request again returns a 401 Unauthorized error, as expected.

$ curl -v -H 'X-Container-Meta-Read-Key:read-TEST-dff8555a-8c4d-4541-a629-3b6e7029803a' https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)/test/index.html
< HTTP/1.1 401 Unauthorized
401 Unauthorized: Auth Key invalid

Using Python-SwiftClient

In a deployment scenario, it’s likely that you won’t be using curl to fetch or upload objects. The python-swiftclient library is the official OpenStack library for interacting with Swift deployments, including our Bulk Storage service. These examples use a Python 2.7 REPL.

Download a file

We’ll start by downloading the file we’ve been playing with via curl above, using the read-only key, which has full read access to the container but cannot perform any POST, PUT or DELETE requests. Notice that the second argument to get_object, which is normally the auth token, is set to None, as this shared-key mechanism is separate from the Keystone authentication backend.

>>> import swiftclient
>>> read_key = 'read-TEST-dff8555a-8c4d-4541-a629-3b6e7029803a'
>>> response = swiftclient.get_object(
...     'https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)',
...     None, 'test', 'index.html',
...     headers={'X-Container-Meta-Read-Key': read_key})
>>> response[1]

Upload a file

Uploading a file using the full key is just as easy. In this example we’ll upload a text file (named test.txt here) to Bulk Storage and read it back out again using the python-swiftclient library.

>>> import swiftclient
>>> full_key = 'full-TEST-1e1c1fca-16ce-4aba-b89c-3c8b7911d1c4'
>>> swiftclient.put_object(
...     'https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)',
...     None, 'test', 'test.txt',
...     contents='this is a test file',
...     headers={'X-Container-Meta-Full-Key': full_key})
>>> response = swiftclient.get_object(
...     'https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)',
...     None, 'test', 'test.txt',
...     headers={'X-Container-Meta-Full-Key': full_key})
>>> response[1]
'this is a test file'

If you have any questions about how to implement these changes into your application, please don’t hesitate to reach out to our support team. This will be our recommended approach for connecting to Bulk Storage from your application, so we want to ensure you’re feeling comfortable with the new features and using them.

Going Forward

We look forward to seeing all of the new integrations this feature enables for our clients. We’re continuously working and deploying new features to give you a competitive advantage when it comes to the automation of your infrastructure. Our team is committed to building our platform to meet your needs, and contributing the appropriate parts back to the OpenStack community whenever we can. Using OpenStack as a framework has helped us launch the most robust public cloud in Canada, and we’re not stopping there!

See you on the other side!