CLOUDBERRY EXPLORER POWERED BY CLOUD-A BULK STORAGE

 

Cloudberry Lab is a company that makes backup and file management software for hybrid cloud environments, allowing users to back up or sync files from their local systems to the public cloud. While Cloudberry offers paid products for backing up Windows servers and applications, it also offers a free tool called CloudBerry Explorer, a file manager that lets you sync files from your Windows system to a number of public cloud options, including OpenStack.

 

Create Cloud-A Bulk Volume Container

CloudBerry Explorer for OpenStack is built on OpenStack Swift technology, which means that users can use it with Cloud-A’s Bulk Storage ($0.075 per GB per month). You will need to create at least one Bulk Storage container by navigating to the storage tab in the Cloud-A dashboard and selecting “New Container.” Appropriately name your container and you are ready to download Cloudberry Explorer.

Tip: To keep your cloud-synced files organized, we recommend creating multiple Bulk Storage containers and treating them as if they were folders in a directory on your local system.
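If you prefer the command line, the same containers can be created with the standard Swift client. This is a minimal sketch, assuming you have python-swiftclient installed and substitute your own Cloud-A credentials (the container names here are just examples):

```shell
# Assumes python-swiftclient is installed: pip install python-swiftclient
# Substitute your own Cloud-A credentials below.
export OS_AUTH_URL="https://keystone.ca-ns-1.clouda.ca:8443/v2.0"
export OS_USERNAME="you@example.com"
export OS_PASSWORD="your-cloud-a-password"
export OS_TENANT_NAME="you@example.com"

# Create one container per top-level "folder" you want to mirror
swift post documents
swift post photos
swift post backups

# Confirm the containers exist
swift list
```

Containers created this way will appear in CloudBerry Explorer and the Cloud-A dashboard just like ones created through the web UI.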

 

Download Cloudberry

Navigate to http://www.cloudberrylab.com/download-thanks.aspx?prod=cbosfree and download CloudBerry Explorer for OpenStack Storage.

Simply follow the steps in the installation wizard to complete the installation.

Authenticate to your Bulk Storage Container

Once CloudBerry Explorer has launched, you will notice that the left side of the screen represents your local system’s folder directory and the right represents your cloud storage. On the cloud storage side, click the source drop-down menu and select:
<New Storage Account>

Select Cloud-A


Then enter your specific credentials as follows:

  • Display name: email (Cloud-A login username)
  • User name: email (Cloud-A login username)
  • Api key: Cloud-A password
  • Authentication Service: https://keystone.ca-ns-1.clouda.ca:8443/v2.0/tokens
  • Tenant Name: email (Cloud-A login username)

Now select “Test Connection” to ensure that the system accepts your credentials.

If Test Connection fails, ensure that you have entered your credentials correctly. If you have entered your credentials correctly but are still receiving a “Connection Failed” error message, ensure that you have the correct ports open for Bulk Storage. Those ports are: 80, 443, 8443 and 8444.
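One quick way to confirm those ports are reachable from your network is a port check from a Linux or OS X machine on the same network (this is an optional sketch; it assumes the netcat utility is available and uses the Keystone endpoint hostname from this guide):

```shell
# Check that each Bulk Storage port answers within 5 seconds.
# -z: scan without sending data, -v: verbose, -w 5: timeout in seconds
for port in 80 443 8443 8444; do
    nc -zv -w 5 keystone.ca-ns-1.clouda.ca "$port"
done
```

A "succeeded" (or "open") result for each port indicates your firewall is not the problem; a timeout points to a blocked port.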

If your credentials were entered correctly, the Bulk Storage container you created in the first step will appear in the file directory on the right side of the screen. To test the connection, select a test file from your local system, and select “Copy.” A transfer status message will appear briefly at the bottom of the screen and the file will copy from the left side of the screen and appear in your cloud storage container on the right.

To prove this concept, log into your Cloud-A dashboard and navigate to your new Bulk Storage container. You should see your test file.

Functional Use Cases:

  • Upload very large files, like 4K HD videos, disk images, or backup archives, in multiple pieces efficiently and have them downloaded / served as a single file using an Object Manifest to glue the data back together.
  • Archive data from old projects taking up unnecessary space on your production storage (CAD files, BIM files, PSD files).
  • Use with Cloud-A Windows instances to move infrequently used, non-mission-critical data off of high-performance SSD volume storage.
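The first use case above, segmented uploads glued together by an object manifest, can also be done from the command line with the Swift client. A minimal sketch, assuming python-swiftclient is installed, your Cloud-A OS_* authentication variables are exported, and the container and file names are placeholders:

```shell
# Upload a large file in 1 GiB segments. swift stores the pieces in a
# companion *_segments container and writes a manifest object, so the
# file is downloaded/served as a single object.
swift upload --segment-size 1073741824 videos big-video-4k.mov

# Downloading reassembles the segments transparently via the manifest
swift download videos big-video-4k.mov
```

Segmented uploads also let you retry individual pieces rather than restarting a multi-gigabyte transfer from scratch.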

Next Steps:

CloudBerry Explorer is a great way to manually sync files to Cloud-A, and a great introduction into hybrid cloud solutions. Check out some of CloudBerry Lab’s other products for more advanced features like scheduled backups and encrypting files.

TAKING LESSONS FROM DEVELOPERS FOR TRUE CLOUD ADOPTION IN THE IT CHANNEL

Be it capital-intensive in-house solutions powered by VMware and SAN technology or re-selling contracted colocated infrastructure, managed service providers have been offering some version of the “cloud” for the past decade or so.

Meanwhile, in a not-so-different industry, application developers have been leveraging “new” cloud technology that allows for self-serving, on-demand infrastructure, requires no upfront equipment cost, lets you pay only for what you use, and avoids vendor lock-in. This is known as the public cloud – powered by OpenStack.

Many of the benefits of an OpenStack public cloud that software development companies have been enjoying since the inception of the technology can also be enjoyed by a managed service provider. The disconnect? Many of the features of an OpenStack public cloud that create these benefits have historically only mattered to developers. The reality is, these features can be extremely valuable to an MSP as well.

Open Source

OpenStack is an open source project and a community of thousands of developers who contribute to the ongoing growth of the product. While open source isn’t typically a concept discussed in the IT channel, it ensures rapid and ongoing innovation, which in turn allows an MSP to introduce new functionality and features to their own clients.

In a highly competitive industry like the IT channel, differentiating your service from your competitors can make the difference between being competitive and being the leader. Offering your client base innovative, leading edge products and services creates value for your clients and a competitive edge for your business.

API Driven

APIs are what developers use to automate the process of connecting one application to another. Developers use APIs to link functionality of their products to existing products so that their end users don’t have to do it manually. Why would APIs matter to an MSP? Many of the manual, labour intensive processes MSPs would typically perform to manage their client’s infrastructure can be automated with an API driven public cloud.

More and more public cloud friendly applications are coming to market that integrate directly with public clouds through their APIs. Take Cloudberry Lab (www.cloudberrylab.com) as one example. Cloudberry makes products that synchronize and/or backup local systems to Openstack public clouds, among others. This functionality is driven by APIs.

API driven public clouds, and the abundance of available third party applications are enabling MSPs to expand their product and service portfolio, automate laborious processes and create more value for their clients.

Utility Billing

Developers enjoy the benefit of the bill-by-the-minute pricing model of the public cloud for building products, test environments, and other workflows where instances aren’t required to be powered on 24/7.

Utility billing sets MSPs free from hardware staging and allows them to avoid using expensive, production equipment for proof of concept testing and client demonstrations. The economics of the utility model also prevent MSPs from incurring long term colocation or dedicated server contracts, allowing them to add infrastructure as they add clients, and scale back infrastructure when it isn’t needed.

Call to Action

Cloud technology has changed, and it has created an excellent opportunity for MSPs to revolutionize their service delivery model with modern technology, streamlined process, reduced service labour costs, and more attractive economics. Developers have been realizing these benefits for years, and the time is now for MSPs to do the same to gain a competitive edge in their markets.

Installing ownCloud Desktop Client


If you have been following our blog you will know that we have recently published two posts on ownCloud. The first, “Deploying ownCloud on Cloud-A,” was a tutorial on how to install and configure ownCloud on a Windows 2008 R2 instance on Cloud-A, and the second, “ownCloud: Infinite Expandability with Cloud-A’s Bulk Storage,” showed how to expand your ownCloud deployment with our Bulk Storage powered by Swift. Today we are going to show you how to install the ownCloud desktop client for OS X and Windows Server 2008 R2 (instructions are the same for Windows 7).

 

Download and Install Desktop Client

You will need to download the appropriate ownCloud desktop client from  https://owncloud.org/install/. Once your download has completed, run the installer for the ownCloud desktop client.

Authenticate to your ownCloud Server

Upon completion of the installation you will need to authenticate to your ownCloud server with the correct IP address.


Next, you will need to authenticate with your ownCloud credentials.


Configure Settings

At this point you can choose your folder syncing preferences. Depending on your preference, you can choose from syncing everything from your ownCloud server or just specific files and folders.


Much like Dropbox, ownCloud will create a cloud-syncing local folder on your desktop. In OS X, an ownCloud folder shortcut will appear in the top menu bar as well as under Favorites in Finder. In Windows, an ownCloud folder shortcut will appear in the system tray as well as under Favorites in My Computer.

Next Steps

At this point in our ownCloud blog series you have learned how to create an ownCloud server on a Cloud-A Windows instance, expand the storage space with Bulk Storage and configure desktop clients. To take it one step further and enable your users for mobility you can download and configure mobile apps for iOS and Android.

 

USING BULK STORAGE CONTAINER KEYS WITH DUPLICITY

In the spirit of security, and in light of our recent feature release of Bulk Storage Container API Keys — we have forked and released a new version of Duplicity, the most popular Bulk Storage backup utility on Cloud-A.

Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they are safe from spying and/or modification by the server.

Our version features a new clouda:// storage backend that supports the use of our Bulk Storage Container Keys, so you don’t have to embed your credentials with your deployed application doing the backups, and can securely use the generated container full key to do your backups.

Installation

You can download the latest stable version from the GitHub repository archive page, install the necessary dependencies, then run the installer.

pip install python-swiftclient lockfile
wget https://github.com/CloudBrewery/duplicity-swiftkeys/archive/373.tar.gz
tar -zxvf 373.tar.gz
cd clouda-duplicity-xxx/
python setup.py install

Usage

Historically, you would have had to use the SWIFT_ environment variables to store all of your authentication information; the new backend only requires two variables to run securely.

export CLOUDA_STORAGE_URL="https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_<tenant>"
export CLOUDA_CONTAINER_FULL_TOKEN="<full-XXXX-token>"
duplicity [--options] <backup_path> clouda://<container_name>

You can get your tenant-specific Bulk Storage URL from the Dashboard under API Access (listed as Object Store), and generate your Full Token from the container list screen under Manage Access.
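Putting it together, a typical backup cycle might look like the following sketch. The container name and paths are placeholders, and the options shown are standard duplicity options rather than anything specific to the clouda:// backend:

```shell
export CLOUDA_STORAGE_URL="https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_<tenant>"
export CLOUDA_CONTAINER_FULL_TOKEN="<full-XXXX-token>"

# First run produces a full backup; later runs are space-efficient incrementals
duplicity /var/www clouda://backups

# Force a fresh full backup chain every 30 days
duplicity --full-if-older-than 30D /var/www clouda://backups

# Restore the latest backup to a scratch directory
duplicity restore clouda://backups /tmp/restore
```

Periodic full backups keep your incremental chains short, which makes restores faster and limits the damage if one incremental archive is lost.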

The new backend supports all advanced duplicity functionality, including full and incremental backups, and GnuPG-encrypted backups. We can’t wait to hear your feedback on this project, and to hear about other OpenStack Swift third-party tools that you currently use, which we can help offer a Cloud-A secure version of. As always, you can reach our support team at support@clouda.ca, or on Twitter at @CDNCloudA.

CLOUD-A’S GUIDE TO PROTECTING PRIVATE HEALTHCARE INFORMATION IN THE CANADIAN PUBLIC CLOUD

We have been receiving a lot of questions from prospective partners about how we comply with the rules around private information as it pertains to healthcare in Canada. The simple answer is that we provide secure and redundant infrastructure to our healthcare partners and work with them to recommend best practices and procedures for securing their own virtual instances that reside on our public cloud.

While that explanation might seem broad for such an important topic, the laws pertaining to personal information protection in Canada are complicated, technically nonspecific, and just plain hard to grasp.

We have created this “Guide to Protecting Private Healthcare Information in the Canadian Public Cloud” to help inform our partners and clients about how they can use our public cloud and be in compliance with Canadian privacy laws.

Download Cloud-A’s Guide to Protecting Private Healthcare Information in the Canadian Public Cloud


Encrypted Volumes: Linux Edition

Having your data encrypted at rest is crucial for a secure application, especially when you are reporting to a governing body for IT security standards in the Healthcare or Financial markets. Our VPC networking ensures all private network traffic is fully encrypted per customer network, which ensures your data is encrypted over the wire between VMs. Having encrypted volumes will add an extra layer of security to ensure even a somehow compromised volume is rendered unreadable.

We’re going to review encrypting your Cloud-A SSD volume on both Red Hat-based and Debian-based Linux distributions using dm-crypt — the gold standard for modern Linux disk encryption. dm-crypt is a kernel-level transparent disk encryption system that handles only block-level encryption, without ever interpreting the data itself. This gives dm-crypt the ability to encrypt any block device, from root disks and attached volumes to swap space.

Create your SSD Volume

Head to your Cloud-A dashboard, and create a new Volume under “Servers -> Volumes”. We’ll create a 120GB volume named “Encrypted Disk 1”.


Now that we have the drive in place, we’ll attach it to a server to configure the disk encryption.


Linux Setup: Ubuntu 14.04 & Fedora 20

Our Ubuntu and Fedora images ship with the necessary encryption tools out of the box. We’re going to use the cryptsetup package to create the encrypted block device, set a passphrase, and make it available to mount via the device mapper.

sudo cryptsetup -y luksFormat /dev/vdb

Cryptsetup will warn you that this will overwrite the block device irreversibly; type “YES” in all caps to confirm. You’ll next be asked to provide a password for decrypting your data. Make sure you choose a strong password and store it somewhere safe.

If you lose your password, you will in effect have lost all of your data.

Now we can use cryptsetup luksOpen to open the encrypted disk, and have the device mapper make it available. When you run this command, you’ll need to provide the password you entered in the last step.

sudo cryptsetup luksOpen /dev/vdb encrypted_vol

Next, we’re able to interact with the disk per usual at the /dev/mapper/encrypted_vol location created in the last step. Since the encryption is done transparently, you don’t need to do anything special from here on out to keep your data encrypted and safe. We’ll create a simple journaled Ext4 filesystem, and mount it to /data for the application server to use.

sudo mkfs.ext4 -j /dev/mapper/encrypted_vol
sudo mkdir /data
sudo mount /dev/mapper/encrypted_vol /data

Your disk is ready. You can check that it’s mounted and how much space is available using df -h.

Filesystem                 Size  Used Avail Use% Mounted on
/dev/vda1                   60G   22G   38G  36% /
/dev/mapper/encrypted_vol  118G   18M  112G   1% /data

You can now configure your database data directory, or application user data to point to your new encrypted /data directory to store your sensitive data.
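If you want the encrypted volume re-opened and mounted automatically at boot, a minimal sketch uses the standard /etc/crypttab and /etc/fstab mechanism. Note the assumptions: "none" as the key field means you will be prompted for the passphrase on the console at boot, which may not suit a headless cloud server (a key file can be used instead, with its own security trade-offs):

```shell
# /etc/crypttab maps /dev/vdb to /dev/mapper/encrypted_vol at boot.
# Fields: <mapper name> <device> <key file or "none"> <options>
echo "encrypted_vol /dev/vdb none luks" | sudo tee -a /etc/crypttab

# /etc/fstab then mounts the mapped device like any other filesystem
echo "/dev/mapper/encrypted_vol /data ext4 defaults 0 2" | sudo tee -a /etc/fstab
```

Many deployments skip this and open the volume manually after each reboot, which keeps the passphrase entirely out of the VM.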

Next Steps

When you are done with the volume, you can close it by unmounting and using luksClose to remove the mapping and deauthenticate the mount point so it cannot be accessed until re-authenticating.

sudo umount /data
sudo cryptsetup luksClose encrypted_vol

To re-authenticate in the future, on this or any other VM, simply use luksOpen with your decryption password and mount to your desired location.

sudo cryptsetup luksOpen /dev/vdb encrypted_vol
sudo mount /dev/mapper/encrypted_vol /data

This should help you get on your way to a more secure installation of your application, with your sensitive application data stored on a high-performance SSD volume. At any time, you can use the Volume Live Snapshot function in the dashboard to capture a snapshot of this volume, maintaining encrypted backups that can be restored to a new volume whenever needed.

Encrypted disks are not the only security measure you should be taking into account when deploying your infrastructure, but they are a crucial step in the right direction for every security-conscious deployment scenario.

BACKUP & RECOVERY WITH VOLUME SNAPSHOTS

Today’s post is not to be confused with our post from back in May titled “Backups With Snapshots”; in fact, think of this as an extension of it. Snapshots provide you with a point-in-time image of your server, which gives you system redundancy: you can easily and quickly spin up a new instance based on any of your saved snapshots. With all of that said, there are a few things to consider when using snapshots of your server as your sole backup process.

Downtime

The server being snapshotted will be paused temporarily during the snapshot process. While this pause can be minimal, it might not be ideal for a server providing mission-critical services. Because a snapshot of a server instance includes the whole system (operating system and data), the process can take between 1 and 10 minutes to complete, depending on the total consumed disk space of the instance, before your server is resumed.

Cost

There is a cost associated with storing server snapshots. Server snapshots cost $0.15 per GB per month, billed for as long as the snapshot exists. You will only be charged for the compressed size of your snapshot — not the provisioned disk size.

Solution: Volume Snapshots

If the downtime and cost of server snapshots is not ideal for your application, the answer might just be using volumes and volume snapshots.

With this method we recommend that users keep their operating system on the original disk space that is included with the instance and use volumes to store their data. This allows you to take snapshots of your volume for backups of critical data, rather than the entire instance, and avoid the downtime associated with server snapshots.

In the case of a server instance requiring restoration, recovery is as easy as deleting the server instance, launching a new one, and attaching a volume created from the last successful volume snapshot to it.
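That recovery workflow can be sketched with the OpenStack command-line clients of the era (python-cinderclient and python-novaclient); this is an illustration rather than a prescribed procedure, and all names and IDs are placeholders:

```shell
# 1. Snapshot the data volume (--force True allows snapshotting a volume
#    that is still attached to a running instance)
cinder snapshot-create --force True --display-name data-snap-2014-11-17 <volume-id>

# 2. Create a new 120 GB volume from that snapshot
cinder create --snapshot-id <snapshot-id> --display-name data-restored 120

# 3. Attach the restored volume to the replacement instance
nova volume-attach <server-id> <new-volume-id> /dev/vdb
```

From there you mount /dev/vdb inside the new instance exactly as you would any other volume.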

Another effective use case is when data is accidentally deleted by a user. You have the ability to mount a previous volume snapshot to a temporary server instance, recover the deleted data and migrate it back to the production server.

If you wanted to go a step further, you can continue to store a single snapshot of your standard image, so in a case of a server issue, you will be able to launch a new instance based on your image with all of your system preferences and server roles intact.

The nicest thing about volume snapshots? They are free! Cloud-A does not charge for the storage of any volume snapshots, making them a cost effective backup solution.

Snapshot Schedule

Since there is no cost to storing volume snapshots with Cloud-A, you can store as many old volume snapshots as you would like. As the snapshots begin to pile up, properly labeling them will become increasingly important so that you know which snapshot is which.

How often you snapshot your volumes depends on your organization’s tolerance for data loss and/or downtime. An organization with zero tolerance for downtime might require a daily snapshot of their server volume to provide them with several point-in-time copies of that volume. A less mission-critical server volume, like a test environment, or an organization with a greater tolerance for downtime, may only require weekly snapshots.
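One way to automate a labelled daily snapshot is a cron entry that calls the cinder client with the date embedded in the snapshot name. This is a sketch under stated assumptions: the volume ID is a placeholder, and /root/clouda-openrc is a hypothetical file holding your OS_* credentials:

```shell
# /etc/cron.d/volume-snapshot -- take a named snapshot every night at 02:00.
# Note: % is special in cron entries and must be escaped as \%.
0 2 * * * root . /root/clouda-openrc && cinder snapshot-create --force True --display-name "data-$(date +\%F)" <volume-id>
```

Date-stamped names (e.g. data-2014-11-17) make it easy to pick the right snapshot when you need to restore, which matters more as snapshots pile up.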

Next Steps

We have partners and customers who are using a number of different backup methods with our infrastructure today. This really speaks to the flexibility of Cloud-A’s Infrastructure-as-a-Service offering. At the end of the day, the best backup processes are the ones that are recoverable, and volume snapshots provide you with a cost-effective, recoverable backup solution. We urge you to test it out for yourself!

ownCloud: Infinite Expandability with Cloud-A’s Bulk Storage


We previously published a blog post on creating an ownCloud server on Cloud-A’s public cloud, but we would like to build upon that and show just how expandable and agile a Cloud-A hosted ownCloud deployment can be by introducing bulk storage.

By leveraging our Bulk Storage powered by Swift, users can expand the size of their ownCloud deployment very quickly and inexpensively to facilitate growth. Unlike a hardware deployment, where you would purchase drive space up front to account for future growth, a Cloud-A deployment will allow an organization to scale their storage as needed on a pay-as-you-go utility model.

Getting Started

We will begin with the assumption that you already have an ownCloud deployment running on Cloud-A with administrator access to the program.

Create an Object Storage Container

From your Cloud-A dashboard, select “Storage” and then “Containers.” Select “New Container,” and name the new container.

Configure External Storage in ownCloud

ownCloud comes prepackaged with external storage support, but the functionality must be enabled in the “apps” dashboard of your ownCloud instance. In the “apps” dashboard, select “External storage support” in the left-hand sidebar and enable it.

This will populate an External Storage section in your ownCloud Admin panel. Select “OpenStack Object Storage” from the “External Storage” dropdown menu and enter the following credentials:

  • Folder Name: name for your storage mount point
  • User: your Cloud-A username (your email address)
  • Bucket: the name of your Cloud-A container
  • Region: “regionOne”
  • Key: your Cloud-A username
  • Tenant: your email address
  • Password: your Cloud-A password
  • Service_name: “swift”
  • URL: https://keystone.ca-ns-1.clouda.ca:8443/v2.0
  • Timeout: timeout of HTTP requests in seconds (optional)

If you have correctly input the information above and ownCloud accepts that information, a green dot will appear to the left of the folder name.

Validate External Storage

To further validate the access to the new external storage, go back to the main ownCloud screen by clicking the ownCloud logo in the top left corner, and select external storage. You should see your newly created ownCloud folder which points to your Cloud-A object storage powered by Swift.

Next Steps

Adding additional external object storage to your Cloud-A hosted ownCloud instance sets you free from the traditional limitations of hardware, allowing you to scale on demand. This is an ideal solution for any growing company looking to have control of their own data, but also have that data stored securely in Canada.

Stay tuned for the next post in our ownCloud series.