WordPress Backups for Beginners and Advanced Users

We all know that backing up data is important. Whether it’s a corporate Windows file server or our treasured family photos, we make sure that we can recover our data in the case of a hardware failure. Oddly enough, most folks tend to skip over their website data when considering their backup strategy.

Although WordPress is the most used CMS in the world, many users still struggle to find a good backup solution. Thankfully, using a combination of Cloud-A’s Bulk Storage and the popular UpdraftPlus WordPress backup plugin, automatically backing up and restoring your website is extremely easy and cost-effective. This makes it an ideal solution for WordPress users of any skill level.


Migrating from the Heroku PaaS to Cloud 66 & Cloud-A


We come across clients all the time who have built their applications on Heroku, but are required by their own customers to keep their data resident in Canada. This was one of the drivers for creating our relationship with Cloud 66. Here is a guide for migrating from Heroku to Cloud 66 + Cloud-A.

About migrating from Heroku

Migrating your application from Heroku to Cloud 66 involves deploying your code, importing your data and redirecting your traffic to the new endpoint.

What server size do I need?

Using Heroku, you can choose between 1X (512 MB), 2X (1 GB) and PX (6 GB) server sizes. This makes it easy to calculate your server requirements, and we recommend that you use similar server resources when deploying your stack with Cloud 66. We also recommend that you have a separate server for your database in production environments.



1. Code

Simply provide Cloud 66 the URL to your Git repository so that it can be analyzed. For more information, see Accessing your Git repository.

2. Data

Once your code is deployed, it’s time to migrate your data across. The process differs for PostgreSQL and MySQL databases:


PostgreSQL

From your Heroku toolbelt, create a database backup URL by running heroku pgbackups:url. Next, visit your stack detail page and click the Import Heroku data link. Paste the URL provided by the toolbelt into the field, and click Import Heroku data.


MySQL

Start by dumping your existing database. Refer to the ClearDB documentation for common problems.

$ mysqldump -u [username] -p[password] [dbname] > backup.sql

Once you have a MySQL dump file, use the Cloud 66 toolbelt to upload the file to your stack database server. Remember to replace the fields below with your values.

$ cx upload -s "[stack_name]" [database_server_name] backup.sql /tmp/backup.sql

Next, use the toolbelt to SSH to your server.

$ cx ssh -s "[stack_name]" [server_first_name]

Finally, use the command below to import your backup into the database. You can find the generated username, password and database name by visiting your stack detail page and clicking into your database server (e.g. MySQL server).

$ mysql -u [generated_user_name] -p[generated_password] [database_name] < /tmp/backup.sql

3. Traffic

Once you’re ready to serve traffic from your Cloud 66 stack, you need to redirect your traffic to it. For more information, see Configure your DNS.

Useful pointers

Web server and Procfile

By default, Cloud 66 will deploy your stack with Phusion Passenger, but you can also choose a custom web server like Unicorn. You may have a web entry in your Procfile to do this on Heroku. Cloud 66 ignores this entry to avoid compatibility issues.

To run a custom web server, we require a custom_web entry. It is important to set this before analyzing your stack, to avoid building the stack with Passenger.

You can also use the Procfile to define other background jobs.
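For illustration, a Procfile with a custom_web entry and a background worker might look like the sketch below (the Unicorn and Sidekiq commands are assumptions for a typical Rails app; adjust for your own stack). It is drafted to /tmp here for review:

```shell
# Hypothetical Procfile: custom_web replaces Heroku's web entry on Cloud 66,
# and a worker entry runs background jobs. Commands are examples only.
cat > /tmp/Procfile <<'EOF'
custom_web: bundle exec unicorn -c config/unicorn.rb -E $RAILS_ENV
worker: bundle exec sidekiq -e $RAILS_ENV
EOF
```

Place the finished Procfile at the root of your repository before Cloud 66 analyzes your stack.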

Dyno recycling

Heroku restarts all dynos after 24 hours of uptime, which may conceal memory leaks in your application. When you migrate to Cloud 66, these leaks become noticeable, because we don’t restart your workers (other than during a deployment), so a leak has time to grow. A temporary workaround is to re-create the Heroku restart behavior, for example with this script:

for OUTPUT in $(pgrep -f sidekiq); do kill -TERM $OUTPUT; done

This will send a TERM signal to any Sidekiq workers, giving them 10 seconds (by default) to finish gracefully. Any workers that don’t finish within this time period are forcefully terminated and their messages are sent back to Redis for future processing. You can customize this script to fit your needs, and add it to your stack as a shell add-in.

Note that this is a temporary solution, and we recommend that you use a server monitoring solution to identify the source of your leak.
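As a sketch, the one-liner above can be expanded into a small add-in script. The bracketed pattern keeps pgrep from matching the script’s own command line; Sidekiq as the worker type is taken from the text above, and everything else is an assumption:

```shell
#!/usr/bin/env bash
# Sketch of a periodic worker-recycle script mimicking Heroku's dyno restarts.
# The bracketed pgrep pattern avoids matching this script itself.
pids=$(pgrep -f '[s]idekiq' || true)
if [ -z "$pids" ]; then
  echo "no sidekiq workers running"
else
  # TERM asks Sidekiq to finish in-flight jobs gracefully before exiting
  for pid in $pids; do
    kill -TERM "$pid"
  done
  echo "sent TERM to $(echo "$pids" | wc -w) worker(s)"
fi
```

Scheduled daily (e.g. via cron or a Cloud 66 shell add-in), this approximates Heroku’s 24-hour recycle behavior.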

Asset Pipeline Compilation

If you haven’t compiled assets locally, Heroku will attempt to run the assets:precompile task during slug compilation. Cloud 66 allows you to specify whether or not to run this during deployment.

Making your App Canadian Data Resident

Probably the simplest step in the whole process is ensuring that your application is Canadian data resident. You will need to create an account with Cloud-A. Then, when you set up your stack on Cloud 66, ensure that you select Cloud-A as your cloud provider of choice.

Ready to give it a try?

Get started with a $10 credit

Create my account!

Linux security best practices for running your server in the cloud

At Cloud-A we enable our users to sign up and manage their own infrastructure, giving them full control to configure and secure their own instances, networks and storage as they wish. We like to provide tips, tricks and best practices to give you the information you need to keep your instances secure. Here are a few best practices for hardening and securing your Linux instances on Cloud-A.

Eliminate Unneeded Services

  • Do not run any unneeded services such as FTP.
  • If you are running DNS, be sure to close it off from being an open resolver so that you do not become part of a DDoS attack.

Lock down SSH

  • Disable root login via SSH
  • Only allow specified IPs to connect via SSH
  • Only allow SSH Key based authentication  – Do not allow password authentication
  • Use an alternative SSH port
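The points above map onto a handful of sshd_config directives. Here is a sketch, drafted to /tmp for review rather than straight to /etc/ssh/sshd_config; the port and username are placeholders, and IP restrictions are often better enforced at the security-group or firewall level:

```shell
# Draft the hardened directives to a scratch file, then merge them into
# /etc/ssh/sshd_config and reload sshd (e.g. sudo systemctl reload ssh).
cat > /tmp/sshd_hardening.conf <<'EOF'
Port 2222
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers deploy
EOF
echo "wrote $(grep -c . /tmp/sshd_hardening.conf) directives"
```

Test the new configuration from a second session before closing your current one, so a typo can’t lock you out.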

Use fail2ban


  • Use fail2ban to automatically add malicious IPs to the firewall drop rules.
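A minimal jail override for SSH might look like the sketch below. The values are examples, and on older fail2ban packages the jail name may be [ssh] rather than [sshd]; the finished file belongs at /etc/fail2ban/jail.local, followed by a fail2ban restart:

```shell
# Draft the jail override to /tmp for review; fail2ban reads jail.local
# from /etc/fail2ban/ once you copy it there and restart the service.
cat > /tmp/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600
EOF
```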

Update packages on regular basis

  • Keep your packages up to date to avoid being susceptible to zero day attacks.
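On Debian/Ubuntu, one way to automate this is the unattended-upgrades mechanism. The apt periodic settings below are a sketch, drafted to /tmp here rather than the real /etc/apt/apt.conf.d/20auto-upgrades, and assume the unattended-upgrades package is installed:

```shell
# Enable a daily package-list refresh and unattended security upgrades
# (requires: sudo apt-get install unattended-upgrades).
cat > /tmp/20auto-upgrades <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
```

Red Hat based distributions have an equivalent in the yum-cron / dnf-automatic packages.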




Video Tutorial: Using Cloudberry Explorer with Cloud-A Bulk Storage

A couple of months ago we posted about CloudBerry Explorer and some basics on how to set it up with Cloud-A Bulk Storage. Since then, after months of rigorous testing to ensure that CloudBerry’s OpenStack products integrate seamlessly with our Bulk Storage, we have announced our partnership with CloudBerry.

Check out our latest video tutorial on how to use CloudBerry Explorer with Cloud-A Bulk Storage. This solution provides a very simple interface for your Bulk Storage containers, allowing even non-technical end users to copy and move files from their local systems to the cloud.



CloudBerry Lab is a company that makes backup and file management software for hybrid cloud environments, allowing users to back up or sync files from their local systems to the public cloud. While CloudBerry has paid products for backing up Windows servers and applications, it also offers a piece of freeware called CloudBerry Explorer, a file manager that lets you sync files from your Windows system to a number of public cloud options, including OpenStack.


Create a Cloud-A Bulk Storage Container

CloudBerry Explorer for OpenStack is built on OpenStack Swift technology, which means that users can use it with Cloud-A’s Bulk Storage ($0.075 per GB per month). You will need to create at least one Bulk Storage container by navigating to the Storage tab in the Cloud-A dashboard and selecting “New Container.” Give your container an appropriate name and you are ready to download CloudBerry Explorer.

Tip: To keep your cloud-synced files organized, we recommend creating multiple Bulk Storage containers and treating them as if they were folders in a directory on your local system.


Download Cloudberry

Navigate to http://www.cloudberrylab.com/download-thanks.aspx?prod=cbosfree and download CloudBerry Explorer for OpenStack Storage.

Simply follow the installation wizard’s steps to complete the install.

Authenticate to your Bulk Storage Container

Once CloudBerry Explorer has launched, you will notice that the left side of the screen represents your local system’s folder directory and the right represents cloud storage. On the cloud storage side, click the source drop-down menu and select:
<New Storage Account>

Select Cloud-A


Then enter your specific credentials as follows:

  • Display name: email (Cloud-A login username)
  • User name: email (Cloud-A login username)
  • Api key: Cloud-A password
  • Authentication Service: https://keystone.ca-ns-1.clouda.ca:8443/v2.0/tokens
  • Tenant Name: email (Cloud-A login username)

Now select “Test Connection” to ensure that the system has accepted your credentials.

If Test Connection fails, ensure that you have entered your credentials correctly. If you have entered your credentials correctly but are still receiving a “Connection Failed” error message, ensure that you have the correct ports open for Bulk Storage. Those ports are: 80, 443, 8443 and 8444.
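If you want to rule out a credential problem independently of CloudBerry, you can hand-build the Keystone v2.0 token request the client makes under the hood. The curl call is left commented out since it needs network access, and the username/password below are placeholders:

```shell
# Build the Keystone v2.0 token request body (placeholder credentials).
payload='{"auth":{"passwordCredentials":{"username":"you@example.com","password":"secret"},"tenantName":"you@example.com"}}'
# Uncomment to send it; a 200 response containing a token confirms the
# credentials and that port 8443 is reachable:
# curl -s -H 'Content-Type: application/json' -d "$payload" \
#   https://keystone.ca-ns-1.clouda.ca:8443/v2.0/tokens
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"
```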

If your credentials were entered correctly, the Bulk Storage container you created in the first step will appear in the file directory on the right side of the screen. To test the connection, select a test file from your local system, and select “Copy.” A transfer status message will appear briefly at the bottom of the screen and the file will copy from the left side of the screen and appear in your cloud storage container on the right.

To prove this concept, log into your Cloud-A dashboard and navigate to your new Bulk Storage container. You should see your test file.

Functional Use Cases:

  • Upload very large files, like 4K HD videos, disk images, or backup archives, in multiple pieces efficiently and have them downloaded / served as a single file using an Object Manifest to glue the data back together.
  • Archive data from old projects taking up unnecessary space on your production storage (CAD files, BIM files, PSD files.)
  • Use with Cloud-A Windows instances to move infrequently used, non-mission-critical data off of high-performance SSD volume storage.

Next Steps:

CloudBerry Explorer is a great way to manually sync files to Cloud-A, and a great introduction to hybrid cloud solutions. Check out some of CloudBerry Lab’s other products for more advanced features like scheduled backups and file encryption.

Installing ownCloud Desktop Client

If you have been following our blog you will know that we have recently published two posts on ownCloud. The first, “Deploying ownCloud on Cloud-A,” was a tutorial on how to install and configure ownCloud on a Windows 2008 R2 instance on Cloud-A, and the second, “ownCloud: Infinite Expandability with Cloud-A’s Bulk Storage,” showed how to expand your ownCloud deployment with our Bulk Storage powered by Swift. Today we are going to show you how to install the ownCloud desktop client for OS X and Windows Server 2008 R2 (the instructions are the same for Windows 7).


Download and Install Desktop Client

You will need to download the appropriate ownCloud desktop client from  https://owncloud.org/install/. Once your download has completed, run the installer for the ownCloud desktop client.

Authenticate to your ownCloud Server

Upon completion of the installation, you will need to connect to your ownCloud server by entering its IP address.


Next, you will need to authenticate with your ownCloud credentials.


Configure Settings

At this point you can choose your folder-syncing preferences. Depending on your preference, you can sync everything from your ownCloud server or just specific files and folders.


Much like Dropbox, ownCloud will create a cloud-synced local folder on your desktop. On OS X, an ownCloud folder shortcut will appear in the top menu bar as well as under Favorites in Finder. In Windows, an ownCloud folder shortcut will appear in the system tray as well as under Favorites in My Computer.

Next Steps

At this point in our ownCloud blog series you have learned how to create an ownCloud server on a Cloud-A Windows instance, expand the storage space with Bulk Storage and configure desktop clients. To take it one step further and enable your users for mobility you can download and configure mobile apps for iOS and Android.


Encrypted Volumes: Linux Edition

Having your data encrypted at rest is crucial for a secure application, especially when you report to a governing body on IT security standards in the healthcare or financial sectors. Our VPC networking ensures all private network traffic is fully encrypted per customer network, so your data is encrypted over the wire between VMs. Encrypted volumes add an extra layer of security, ensuring that even a somehow compromised volume is rendered unreadable.

We’re going to review encrypting your Cloud-A SSD volume on both a Red Hat based and a Debian based Linux distribution using the dm-crypt package — the gold standard for modern Linux disk encryption. dm-crypt is a kernel-level transparent disk encryption system, concerned only with block-level encryption and never interpreting the data itself. This gives dm-crypt the ability to encrypt any block device, from root disks and attached volumes to even swap space.

Create your SSD Volume

Head to your Cloud-A dashboard, and create a new Volume under “Servers -> Volumes”. We’ll create a 120GB volume named “Encrypted Disk 1”.


Now that we have the drive in place, we’ll attach it to a server to configure the disk encryption.


Linux Setup: Ubuntu 14.04 & Fedora 20

Our Ubuntu and Fedora images ship with the necessary encryption tools out of the box. We’re going to use the cryptsetup package to create the encrypted block device, set a password, and make it available to mount via the device mapper.

sudo cryptsetup -y luksFormat /dev/vdb

Cryptsetup will warn you that this will overwrite the block device irrevocably. Type “YES” in all caps to confirm. You’ll next be asked to provide a password for decrypting your data; make sure you choose a strong password and store it somewhere safe.

If you lose your password, you will in effect have lost all of your data.

Now we can use cryptsetup luksOpen to open the encrypted disk, and have the device mapper make it available. When you run this command, you’ll need to provide the password you entered in the last step.

sudo cryptsetup luksOpen /dev/vdb encrypted_vol

Next, we can interact with the disk as usual at the /dev/mapper/encrypted_vol location created in the last step. Since the encryption is transparent, you don’t need to do anything special from here on out to keep your data encrypted and safe. We’ll create a simple journaled Ext4 filesystem and mount it at /data for the application server to use.

sudo mkfs.ext4 -j /dev/mapper/encrypted_vol
sudo mkdir /data
sudo mount /dev/mapper/encrypted_vol /data

Your disk is ready. You can check that it’s mounted and how much space is available using df -h.

Filesystem                 Size  Used Avail Use% Mounted on
/dev/vda1                   60G   22G   38G  36% /
/dev/mapper/encrypted_vol  118G   18M  112G   1% /data

You can now configure your database data directory, or application user data to point to your new encrypted /data directory to store your sensitive data.

Next Steps

When you are done with the volume, you can close it by unmounting it and using luksClose to remove the mapping, so the data cannot be accessed until you re-authenticate.

sudo umount /data
sudo cryptsetup luksClose encrypted_vol

To re-authenticate in the future on this, or any other VM, you simply use luksOpen with your decryption password and mount to your desired location.

sudo cryptsetup luksOpen /dev/vdb encrypted_vol
sudo mount /dev/mapper/encrypted_vol /data

This should help get you on your way to a more secure deployment of your application, with your application data stored securely on a high-performance SSD volume. At any time, you can use the Volume Live Snapshot function in the dashboard to capture a snapshot of this volume, maintaining encrypted backups that you can restore to a new volume whenever you need.

Encrypted disks are not the only security measure you should take when deploying your infrastructure, but they are a crucial step in the right direction for any security-conscious deployment.

Tutorial: Configure a NIC with multiple public IPs


A few months ago, we showed you how to associate a single public IP with your instance. For most use cases, this functionality does everything that you need. However, one request we see occasionally is the desire to allocate more than one IP address from the same subnet to an instance, the driver being the ability to have multiple public IPs pointing at a particular NIC/port. Currently, this requires some more advanced command line work.


Volumes 101


We are often asked whether you can attach additional storage to instances. Occasionally the base disk size just won’t do, or you have special storage requirements. The answer is Volumes. You can think of Volumes as portable hard drives you can add to your server. The data remains on the volume even as you connect it to various servers (one at a time). A common use case is to attach a Volume for consolidated backups from multiple servers.

Volumes are ideal for adding additional storage for existing servers without having to pay for additional compute resources. So you can spin up a small VM and use volumes to supplement your storage requirements.

For example, provisioning a 512MB VM with 10GB of disk and adding a 20GB Volume to reach the next tier’s 30GB of disk would only cost an additional $5 (20GB * $0.25). You aren’t charged for incoming traffic or the amount of I/O you do; you are simply charged for the amount of storage you have, measured in GBs of provisioned storage and billed at $0.25 / GB / month, or ~$0.0003 / GB / hr. This can enable huge cost savings by provisioning exactly what you need.
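The arithmetic above can be checked quickly. A tiny shell sketch of the monthly charge for the 20GB of volume storage at $0.25/GB:

```shell
# Compute the monthly cost in cents to avoid floating point: 20 GB * 25 c/GB.
gb=20
rate_cents_per_gb=25
monthly_cents=$((gb * rate_cents_per_gb))
printf 'monthly cost: $%d.%02d\n' $((monthly_cents / 100)) $((monthly_cents % 100))
# prints "monthly cost: $5.00"
```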


You’ll need a Cloud-A account (don’t have one yet?) and at least one instance spun up. You can follow the start of this post for how to get started spinning up a Windows instance on our platform.

Creating a Volume

To create a volume, visit the Volumes page on your dashboard, and click on the “Create Volume” button.


In the modal window, enter a Volume Name, optional Description, select the Type (Regular) and your desired Size in GB. We’ll just make a small 2GB Volume for this demo. After hitting “Create Volume”, you’ll see a “Creating Volume” message while it is being provisioned.

Once you have your Volume created you need to Attach it to the desired Instance. Click Edit Attachments to open the Manage Volume Attachments window. Under Attachments you will see any Instances the volume is currently attached to. (It should be blank now).


Open the Attach to Instance dropdown and select the instance you wish to connect the volume to, then “Attach Volume”.

Just like that, your Volume is attached to your instance, and available to be formatted & mounted in the OS.

Formatting & Mounting Your Volume

Before you can make use of your volume in your operating system of choice, you’ll need to format the disk and mount it so you can start using it for storage. Below you’ll find some basic instructions for Linux and Windows instances that will get your disk mounted and ready.

Linux Servers

For Linux servers (Ubuntu 12.04 is used in the example below), it’s a quick process using fdisk, mkfs and mount.

$ ssh ubuntu@<my-instance-ip>
$ sudo -i
$ fdisk /dev/vdb
  # create new partition (n), 
  # primary (1), 
  # answer yes to start and end blocks to use entire disk.
$ mkfs.ext4 /dev/vdb1
$ mkdir /mount
$ mount /dev/vdb1 /mount

If you would like the Volume to be mounted on boot (permanently attached), you can add an entry to your /etc/fstab that looks something like the following.

/dev/vdb1    /mount   ext4    defaults     0        2

Windows Servers

For Windows servers (Windows Server 2008 R2 is used in the example), you can do it all from the console. If you’re still on the Volumes screen, you can click the link to the server, and then click the “Console” tab to get to it quickly.

Once logged in and set up, click on Server Manager (on the bottom left, next to the Start button), then click Storage and then Disk Management.


Right-click the Volume you added (labelled Disk 1) and select New Simple Volume. That will launch a wizard to format the new disk. Following the prompts and using the default values will be fine. Once it finishes formatting (this may take a few moments), you’re done!


If you flip back to your dashboard, you should see that the Status is “In-Use”. Success!

Volume FAQ

  1. What is the maximum number of volumes that can be connected to any one server? As many as you like; the practical limit comes from the OS that you’re connecting them to.
  2. What are the smallest / largest volume sizes? We have customers with volumes sized from 1GB through to 10TB.
  3. How are volumes metered? Point-in-time snapshots of your provisioned storage are taken continuously, and usage is added up at the end of the month at $0.25/GB/month, prorated by the second. For example, if you have 1GB provisioned on day 1, you’re charged for 1GB for that day; if you grow to 1TB by day 30, you’re charged at the 1TB rate only for that last day.
  4. Are Volumes slow storage? No, Volumes are very fast! They are SSD-backed storage, the same stuff that gives you blazing fast IO on your VMs.

Ready to give it a try?

Get started with a $10 credit

Create my account!