BULK STORAGE: CONTAINER API KEYS

We’re excited to announce the public availability of container-specific API keys for our Bulk Storage service. Our team developed and deployed this feature, which has been reviewed by OpenStack Swift core developers, to meet the specific application requirements of our customers. It will enable you to develop and deploy more secure applications on Cloud-A infrastructure by shipping them with secure, revocable authentication keys.

Technical Overview

This feature allows developers who leverage our Bulk Storage APIs to deploy more secure applications using access keys specific to the container(s) each application needs. There is no longer any need to embed your Cloud-A authentication credentials when you deploy. Instead, you can generate secure keys on a per-container basis through the Dashboard, with either read-only or full read & write access. Read-only keys are useful when you need to let a third party perform read operations on any object in a container.

Additionally, we are contributing the Swift middleware that we’ve developed back to the community! The source is available on our Bitbucket account, along with install instructions for enabling the middleware in your swift-proxy server.

Generating Secure Keys

We have extended our Dashboard to allow you to generate, set and revoke secure keys for all of your Bulk Storage containers from the main container screen. In the list of actions, you’ll now see a “Manage Access” action, which will display the currently-set API keys for the container, along with an option to regenerate them. For additional security, the default behaviour is that containers do not have any access keys set, and you’ll need to generate initial keys to enable this feature. For your convenience, Dashboard-generated keys are prefixed with the permission level and the first four letters of your container name.
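
As a sketch of that naming convention, here’s what the shape of a generated key looks like. This is purely our illustration: the function name and the use of a UUID for the random portion are assumptions inferred from the example keys in this post, and real keys are generated server-side by the Dashboard.

```python
import uuid

def make_container_key(permission, container_name):
    # Illustrative only: mirrors the visible "<permission>-<PREFIX>-<random>"
    # shape of Dashboard-generated keys; real generation happens server-side.
    assert permission in ("read", "full")
    prefix = container_name[:4].upper()
    return "%s-%s-%s" % (permission, prefix, uuid.uuid4())

print(make_container_key("read", "test"))  # e.g. read-TEST-dff8555a-...
```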

The regeneration function revokes your current container credentials and generates new secure keys. This mitigates the threat of leaked credentials by immediately rejecting all requests that use the old keys, and it means you no longer have to worry about a leaked account password, as you would under OpenStack Swift’s default access model.

Testing

Now that we have our keys, we can test that they work using simple curl commands. We will test listing files in a container and downloading a test file by passing our new API key in the headers of the request. Note: it is important that you use the Bulk Storage HTTPS endpoint if you are on an external network, to ensure your headers cannot be read by a third party.

List a container

$ curl -v -H 'X-Container-Meta-Read-Key:read-TEST-dff8555a-8c4d-4541-a629-3b6e7029803a' https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)/test
...
< HTTP/1.1 200 OK
< X-Container-Object-Count: 1
< 
    index.html

Download a file

Downloading the index file is just as quick.

$ curl -v -H 'X-Container-Meta-Read-Key:read-TEST-dff8555a-8c4d-4541-a629-3b6e7029803a' https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)/test/index.html
...
< HTTP/1.1 200 OK
< Content-Length: 44
< Content-Type: text/html
<
<html>
<body>
<h1>test</h1>
</body>
</html>

Key revocation

And when we revoke the key in the Dashboard, attempting the request again returns a 401 Unauthorized error, as expected.

$ curl -v -H 'X-Container-Meta-Read-Key:read-TEST-dff8555a-8c4d-4541-a629-3b6e7029803a' https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)/test/index.html
...
< HTTP/1.1 401 Unauthorized
< 
401 Unauthorized: Auth Key invalid

Using Python-SwiftClient

In a deployment scenario, it’s likely that you won’t be using curl to fetch or upload objects. The python-swiftclient library is the official OpenStack library for interacting with Swift deployments, including our Bulk Storage service. These examples use a Python 2.7 REPL.

Download a file

We’ll start by downloading the file we’ve been playing with via curl above, using the read-only key, which has full read access to the container but cannot perform any POST, PUT or DELETE requests. Notice that the second argument to get_object, normally the auth token, is set to None, as this shared-key mechanism is separate from the Keystone authentication backend.

>>> import swiftclient
>>> read_key = 'read-TEST-dff8555a-8c4d-4541-a629-3b6e7029803a'
>>> response = swiftclient.get_object(
    'https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)',
    None, 
    'test', 
    'index.html', 
    headers={
        'X-Container-Meta-Read-Key': read_key
    })
>>> response[1]
'<html>\n<body>\n<h1>test</h1>\n</body>\n</html>\n'

Upload a file

Uploading a file using the full key is just as easy. In this example we’ll upload a text file to Bulk Storage and read it back out again using the python-swiftclient library.

>>> import swiftclient
>>> full_key = 'full-TEST-1e1c1fca-16ce-4aba-b89c-3c8b7911d1c4'
>>> swiftclient.put_object(
    'https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)',
    container='test', 
    name='another_file.txt', 
    contents='this is a test file', 
    headers={'X-Container-Meta-Full-Key': full_key})
>>> response = swiftclient.get_object(
    'https://swift.ca-ns-1.clouda.ca:8443/v1/AUTH_(tenant_id)',
    None,
    'test', 
    'another_file.txt', 
    headers={
        'X-Container-Meta-Full-Key': full_key
    })
>>> response[1]
'this is a test file'

If you have any questions about how to implement these changes in your application, please don’t hesitate to reach out to our support team. This will be our recommended approach for connecting to Bulk Storage from your application, so we want to make sure you’re comfortable with the new features and using them.

Going Forward

We look forward to seeing all of the new integrations this feature enables for our clients. We’re continuously working and deploying new features to give you a competitive advantage when it comes to the automation of your infrastructure. Our team is committed to building our platform to meet your needs, and contributing the appropriate parts back to the OpenStack community whenever we can. Using OpenStack as a framework has helped us launch the most robust public cloud in Canada, and we’re not stopping there!

See you on the other side!

Recent Performance Gains As A Result Of Our New SSD Platform

Two weeks ago we announced our new ultra-fast SSD architecture, and we’ve been thrilled with all of the great comments, feedback, and praise. We are always happy to hear how excited our customers are when they try a high-performance cloud server for the first time, or see their existing servers run more quickly. We re-ran the DaCapo benchmarks that we published a few months ago, and here’s how they compared directly to our previous benchmark results:


On average we saw a 6.15% performance increase across the board. We then factored those numbers back into the normalized compute value comparison to put them in a rationalized cost perspective, as seen below. You can see that the new SSD architecture adds even more bang for your buck compared to our last tests.

We’re very supportive of what Serverbear.com is doing with crowd-sourced, independent, standardized test results. This style of benchmarking seems to be gaining a lot of momentum, so we’ve compiled some data we found there after running their test suite on one of our 8GB-HC instances, and noticed what seems like an interesting and counterintuitive trend between IOPS and cost across cost-competitive VM sizes.


The source data comes from Serverbear.com’s comparison view after running the benchmark on one of our VMs; you can check the comparison tab to see how we stack up.


Our Commitment To You (Our Clients) On Data Privacy

It was a strange coincidence that not long after we started Cloud-A, Snowden revealed just how much data privacy was at risk, more than almost anyone would have thought possible at the time. Shortly after that, it became clear that Canada is not without its challenges in this area as well. We see this kind of interference clearly as abuse that will hopefully one day be corrected, and be seen historically as a very unfortunate oversight that today is in great need of being rectified.


Our values are simple and very straightforward: everyone should have the right to privacy unless they are doing something illegal. In that case, the legal process dictates that the authorities conducting an official investigation have the right to collect data for the purposes of that investigation, bound and defined by a warrant. In our view, providing anyone any of our clients’ data without a warrant or their consent would be not only unethical but also a violation of our clients’ fundamental right to privacy.

In addition, we are committed to telling our customers about all government data requests. We promise to tell users when the government seeks their data, unless prohibited by law. The idea is that this gives users a chance to defend themselves against overreaching government demands for their data.

To be fair, we have not had many requests of this kind so far. We think that’s a good thing: it tells us that our users are by and large not doing anything that would give anyone cause for concern, and it also tells us that the authorities here are not as overly ambitious as they sound in other regions. That said, we know that with our continued growth it’s only a matter of time before this becomes more relevant for us and our clients. As a result, we thought it would be appropriate to state our intentions moving forward.

In addition to protecting our clients’ privacy to the extent of our legal abilities, we are committed to publishing statistics on how often we provide user data to the government in general terms.

Tutorial: Configure a NIC with multiple public IPs


A few months ago, we showed you how to associate a single public IP with your instance. For most use cases, this functionality does everything you need. However, one request we see occasionally is the desire to allocate more than one IP address from the same subnet to an instance, so that multiple public IPs point at a particular NIC/port. Currently, this requires some more advanced command-line work.

Read more

Volumes 101


We are often asked about whether or not you can attach additional storage onto instances. Occasionally the base disk size just won’t do, or you have special storage requirements. The answer is Volumes. You can think of Volumes like portable hard drives you can add to your server. The data remains on the volume even as you connect it to various servers (one at a time). A common use case is to attach a Volume for consolidated backups for multiple servers.

Volumes are ideal for adding additional storage for existing servers without having to pay for additional compute resources. So you can spin up a small VM and use volumes to supplement your storage requirements.

For example, provisioning a 512MB VM with 10GB of disk and adding a 20GB volume to match the next tier’s 30GB of disk would only cost an additional $5 (20GB * $0.25). You aren’t charged for incoming traffic or for the amount of I/O you do; you are simply charged for the amount of storage you have. This is measured in GBs of provisioned storage and charged at $0.25 / GB / month, or ~$0.0003 / GB / hr. This can enable huge cost savings by provisioning exactly what you need.
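
To make that arithmetic concrete, here’s a quick sketch of the cost calculation using the prices quoted above (the constant and function names are ours, purely for illustration):

```python
PRICE_PER_GB_MONTH = 0.25   # volume storage, $/GB/month
HOURS_PER_MONTH = 730       # approximate hours in one month

def monthly_volume_cost(gb):
    """Monthly cost of a provisioned volume of `gb` gigabytes."""
    return gb * PRICE_PER_GB_MONTH

# 512MB VM with a 10GB base disk, plus a 20GB volume for 30GB total:
print(monthly_volume_cost(20))               # 5.0 dollars/month
print(PRICE_PER_GB_MONTH / HOURS_PER_MONTH)  # ~0.0003 dollars/GB/hour
```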

Prerequisites

Have a Cloud-A account (don’t have one yet?) and at least one instance spun up. You can follow the start of this post for how to get started spinning up a Windows instance on our platform.

Creating a Volume

To create a volume, visit the Volumes page on your dashboard, and click on the “Create Volume” button.


In the modal window, enter a Volume Name, optional Description, select the Type (Regular) and your desired Size in GB. We’ll just make a small 2GB Volume for this demo. After hitting “Create Volume”, you’ll see a “Creating Volume” message while it is being provisioned.

Once you have your Volume created, you need to attach it to the desired Instance. Click Edit Attachments to open the Manage Volume Attachments window. Under Attachments you will see any Instances the volume is currently attached to (it should be empty for now).


Open the Attach to Instance dropdown and select the instance you wish to connect the volume to, then “Attach Volume”.

Just like that, your Volume is attached to your instance, and available to be formatted & mounted in the OS.

Formatting & Mounting Your Volume

Before you can make use of your volume in your operating system of choice, you’ll need to format the disk and mount it so you can start using it for storage. Below you’ll find some basic instructions for Linux and Windows instances that will get your disk mounted and ready.

Linux Servers

For Linux servers (Ubuntu 12.04 is used in the example below), it’s a quick process using fdisk, mkfs and mount.

$ ssh ubuntu@<my-instance-ip>
$ sudo -i
$ fdisk /dev/vdb
  # create a new partition (n),
  # primary (p), partition number 1,
  # accept the default first and last sectors to use the entire disk,
  # then write the partition table and exit (w).
$ mkfs.ext4 /dev/vdb1
$ mkdir /mount
$ mount /dev/vdb1 /mount

If you would like the Volume to be mounted on boot (permanently attached), you can add an entry to your /etc/fstab that looks something like the following.

/dev/vdb1    /mount   ext4    defaults     0        2

Windows Servers

For Windows servers (Windows Server 2008 R2 is used in the example), you can do it all from the console. If you’re still on the Volumes screen, you can click the link to the server and then click the “Console” tab to get there quickly.

Once logged in and set up, click on Server Manager (on the bottom left, next to the Start button), then click on Storage, and then Disk Management.


Right-click the Volume you added (labelled Disk 1) and select New Simple Volume. That launches a wizard to format the new disk. Following the prompts and using the default values will be fine. Once it finishes formatting (this may take a few moments), you’re done!


If you flip back to your dashboard, you should see that the Status is “In-Use”. Success!

Volume FAQ

  1. What is the max number of volumes that can be connected to any one server? As many as you like; the practical limit comes from the OS you’re connecting them to.
  2. What are the smallest / largest size volumes? We have customers with volumes sizes from 1GB through to 10TB.
  3. How are volumes metered? Usage is added up at the end of the month and calculated at $0.25/GB/month. Point-in-time snapshots of usage are taken and summed over each period of use: if on day 1 you’re using 1GB, you’re charged for 1GB for that day; if by day 30 you’ve grown to 1TB, you’re charged at the 1TB rate only for that final day.
  4. Are Volumes slow storage? No, Volumes are very fast! They’re SSD-backed storage, the same stuff that gives you blazing-fast I/O on your VMs.
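
The prorated metering described in question 3 can be sketched like this. This is our own illustration of the billing math, not the billing system’s actual code:

```python
PRICE_PER_GB_MONTH = 0.25  # $/GB/month for volume storage
HOURS_PER_MONTH = 730      # approximate hours in one month

def prorated_volume_charge(samples):
    """samples: (hours, gb) pairs describing provisioned size over the month."""
    return sum(hours * gb * PRICE_PER_GB_MONTH / HOURS_PER_MONTH
               for hours, gb in samples)

# 1GB provisioned for the first 29 days, then grown to 1TB for the last day:
charge = prorated_volume_charge([(29 * 24, 1), (24, 1024)])
print(round(charge, 2))  # 8.65 -- far less than a full month at 1TB (~$256)
```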

Ready to give it a try?

Get started with a $10 credit

Create my account!

Almost Unbelievable Performance Results!

Last week we were very excited when we saw this article come out on JavaWorld:

Ultimate cloud speed tests: Amazon vs. Google vs. Windows Azure

It was great to see them go deeper into the question of how the big US cloud providers perform on standardized Java benchmarking tests. So of course we wanted to know how we stacked up using the exact same process and methods. We ran the same tests on Cloud A’s new 4GB and 8GB High Compute instances, and here’s what we found:


Here’s the raw data from the benchmarks used to build the graphs. You can see the exact scores for each test on each provider, and see that we’re using the exact same calculations to determine our cost numbers.


From the same dataset, we also graphed the total cost per run of the test. Not only does the test complete in less time on our instances, each compute instance costs less to run. The end result is a staggering depiction of cost savings.


So, to be blunt: we even shocked ourselves! Our biggest fear at this point is that the world won’t believe us. If that’s the case, let us know! We’ll give you a free account so you can run the tests for yourself, or introduce you to some of our early adopters, who didn’t believe it either until they tried it for themselves.

Press Release – Cloud A Becomes Hortonworks Infrastructure Partner

Halifax, Nova Scotia, Canada – Feb 24, 2014 – Cloud A, a leader in open cloud infrastructure solutions for Big Data, announced today that it has forged a partnership with Hortonworks, a leading commercial vendor promoting the innovation, development and support of Apache Hadoop. The agreement will enable Cloud A to accelerate the adoption of Apache Hadoop by organizations who wish to have their data reside in Canadian data centers.

Enterprises everywhere are racing to deploy Apache Hadoop to enhance their data management capabilities. Cloud A, in conjunction with its partner community, is developing solutions that will make it easier for enterprise customers to deploy and manage Hadoop, leveraging the leading technologies from both vendors to create the first enterprise Hadoop solution that handles everything from deployment on bare metal all the way to the application.

Cloud A will combine the Hortonworks Data Platform with Cloud A’s OpenStack-powered Canadian open cloud to create an integrated solution that addresses the problems enterprise data centers experience when developing Hadoop solutions internally. The Hortonworks Data Platform, powered by Apache Hadoop, is a massively scalable, 100 percent open source platform for storing, processing and analyzing large volumes of data. The platform includes the most popular and essential Apache Hadoop projects, including HDFS, MapReduce, HCatalog, Pig, Hive, HBase and ZooKeeper. These combined solutions will make it easier to bring a Hadoop project from proof-of-concept, to production, to scale-out.

“We are very excited to be able to offer a Canadian OpenStack-powered cloud for deploying enterprise Hadoop solutions, one that addresses the often overlooked problem of setting up and managing the underlying cluster for Hadoop,” said Brandon Kolybaba, CEO of Cloud A. “Hortonworks’ thought leadership in the space is exceptional, and we are very excited to work with them to leverage their advanced technologies and support infrastructure for Apache Hadoop.”

About Cloud A
Cloud A is a leading provider of cloud infrastructure powered by OpenStack, based in Canada. Our product simplifies the installation and management of the hardware and software that provides the infrastructure for large-scale environments with hundreds or thousands of servers supporting Big Data, Analytics, or High Performance Computing. For more information visit www.CloudA.ca

Contact:
Jacob Godin
Cloud A (CloudA.ca)
902-442-4064
Jacob.Godin@CloudA.ca

How a Cloud A client saved over 265% on their monthly IaaS costs with our hourly billing model

One of the questions we often get at Cloud A is “How does billing by the hour work?” The answer is simple: we bill by the hour instead of by the month, as the more traditional IaaS providers do. In some cases billing by the hour doesn’t matter, since most instances are up and running 24/7, but when environments only need to be available for testing or for ad hoc / batch processing purposes, it’s extremely compelling.

One of our beta testers is a company called Maintenance Group Inc. They are an IBM partner focused on the Asset Management / Equipment Maintenance space, working with a software product called Maximo / Tivoli. Before moving to Cloud A, they had dedicated environments set up for testing new installations, making configuration changes to those environments, and doing other intensive tasks like batch processing jobs.

As with traditional VM solutions, snapshots play a big role in how they deliver solutions to their end clients. They use snapshotting heavily to streamline the deployment process, and once an environment is set up and working well they use it as a failsafe mechanism to ensure they always have a current rollback position. In the past, they needed to pay to keep those environments active on the system; typically they were always left on, even when not in use. When Maintenance Group Inc. learned that with Cloud A you pay only a tiny fraction of the cost to store a snapshot image while it is inactive ($0.25/GB/month), they decided to file those old and seldom-used environments away and spin them up only when needed. The result was a dramatic cost savings of over 265% compared to the previous quarter’s spend on hosted infrastructure. In addition, they have found other ways to leverage Cloud A’s on-demand compute power to do batch processing of large datasets.

“We’ve been very impressed with what we’ve seen at Cloud A so far. Many of our clients require their data to reside in Canada and that has prevented us from leveraging the large US based providers who have billing by the hour solutions like this to date. This is a great solution for them as well as for our clients in the Caribbean & the US regions. It’s great to have a Canadian solution like this” – Jeff Seaward,  Maintenance Group Inc.

Your Cloud A, however you need it.


New Flavours!

We’re very excited to announce our new instance flavours. We know as well as anyone that a general-purpose compute node is great, but once you get into larger instance sizes you have a better idea of the performance characteristics and demands of your server. After hitting the drawing board and crunching some numbers, we’ve put together two new flavour types: High Memory and High Compute. These flavours are available on instance sizes greater than 4GB, and you can start using them today from the instance launch screen.

Cost and Value Modelling

We spent a lot of time putting these new flavours together to offer the best value for everyone, no matter your needs. We will be updating our pricing page shortly to include the new flavour types. One of the biggest challenges in introducing these new options was presenting them in an un-intimidating, easy-to-understand manner. In the spirit of transparency, and to make sure you get the best value for your computing needs, we took the time to model the value proposition for three different types of demand: General Purpose Computing, High Memory, and High Compute. Graphing the relationship between cost and perceived value for each type of customer shows how valuable these new flavour types are.
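
One way to picture this kind of modelling is a weighted resource score per dollar. The specs, weights, and prices below are made-up placeholders purely to illustrate the idea; they are not our actual flavour definitions or pricing:

```python
def value_per_dollar(specs, weights, price):
    """Weighted resource score per dollar spent. specs/weights are dicts
    keyed by 'cpu' (cores), 'ram' (GB), and 'disk' (GB)."""
    return sum(weights[k] * specs[k] for k in specs) / price

# Hypothetical 8GB-class flavours compared at the same hypothetical price:
gp = {"cpu": 4, "ram": 8,  "disk": 160}   # balanced General Purpose
hm = {"cpu": 4, "ram": 16, "disk": 60}    # High Memory: double RAM, 60GB disk

balanced     = {"cpu": 1.0, "ram": 1.0, "disk": 0.1}   # values everything equally
memory_heavy = {"cpu": 0.5, "ram": 2.0, "disk": 0.05}  # values RAM above all

# A balanced user gets more value from GP; a memory-heavy user from HM:
print(value_per_dollar(gp, balanced, 100) > value_per_dollar(hm, balanced, 100))          # True
print(value_per_dollar(hm, memory_heavy, 100) > value_per_dollar(gp, memory_heavy, 100))  # True
```

The point of the sketch is simply that the same hardware ranks differently once you weight resources by what your workload actually needs, which is exactly what the graphs below show.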

General Purpose

The General Purpose (GP) flavour type is for a user who requires a balanced, powerful server: a user who values CPU, Memory, and Disk space equally. The High Memory and High Compute flavour types are less attractive in the general-purpose case because they don’t provide equal increases across the board. You can see that the GP flavours really stand out.


High Memory

The High Memory (HM) flavour type is for a user who has more demand for memory than for disk or CPU. Compared to the General Purpose version of the same flavour, HM instances have 60GB of disk space and double the amount of memory. For memory-heavy applications, the GP flavour type makes less sense, because you’re provisioned more CPU and disk than you require. Viewing the same graph as a user who values memory above all else, you can quickly see how attractive the High Memory flavour is for this user.


High Compute

The High Compute (HC) flavour type is meant for users with heavy CPU requirements rather than memory or disk. Compared to the General Purpose version of the same flavour, HC instances have 60GB of disk space and twice the number of CPU cores. Now view the same graph weighted for a user who values CPU above all else: again, you can see how attractive the High Compute flavour is compared to the General Purpose flavour of the next size up.


Summary


These new flavour types allow unparalleled cost savings and customization, letting you build exactly the cloud that you need without wasting money on resources you don’t, all metered by the minute and running on our blazing-fast hardware. If you are already using a General Purpose flavour and would like to switch to a High Memory or High Compute flavour, simply save a snapshot of your current instance and create a new instance with the new flavour, restoring from that snapshot (assuming it fits on disk).

We would love to hear your feedback on how you’re making use of the new flavour types to build your infrastructure, and you can get started immediately in your Dashboard. You will see the new flavours split up by type, with the full detailed breakdown in the sidebar.

How to setup simple Virtual Desktop Infrastructure (VDI) with Cloud A & Chromebooks

These days many organizations are looking to leverage VDI solutions to maximize the value of the cloud. Here are a few of the key benefits of this approach:


  • Dramatic cost savings: Because the endpoint is essentially a very thin client, resembling the dummy terminals of years gone by, you can go with cost-effective, low-horsepower devices. Entry-level Chromebooks are under $200 and work great for this purpose: they are durable, replaceable, lightweight, and have no moving parts (SSD instead of HD). They can also run for over eight hours on a single charge.
  • Increased Security: You can’t really install software on a Chromebook, so there is no risk of malware such as viruses, trojans, or key-loggers. If you use third-party vendors, contractors, or consultants, you can give them locked-down Windows environments and still allow them to work immersed in your environment the way you require.
  • No More Desktop Support: This approach allows you to have complete central management of all your desktops and as a result, control and monitor everything installed and used on all the images.
  • Corporate Data never leaves the Network: Mobile workers benefit as sensitive data is stored on the server in the data center and not the device. If the device is lost or stolen, the information isn’t on the device so it’s not at risk.
  • OS migrations: Imagine how simple it will be the next time you need to roll out the latest Windows OS. Not only do you never need physical access to the end users’ hardware again, there is also no need to upgrade hardware to meet memory or disk space requirements. With VDI, you just push out a Windows image and you are done.
  • Scaling up and down on Demand: You can create a series of VDI snapshots to meet all of your company’s needs. If your company is seasonal, you can have extra images to handle the increased employee traffic when you need them. When you don’t need them, you just turn them off.
  • Snapshot Technology: With VDI, you have the ability to roll back desktops to different states. This is a great feature, and it allows you to give a lot of flexibility to your end users.
  • It’s Environmentally Friendly: A thin client VDI session will use less electricity than a desktop computer. Using VDI is a way to reduce your carbon footprint on our planet and save your company money in power costs.
  • It’s BYOD on Steroids: With VDI, you never have to worry about hardware again, whether it’s a Chromebook, a notebook, an old PC, a MacBook, or a Linux box. As long as the device can connect to the Internet, your users connect right through to the corporate Windows environment, networks, and data they need.

Getting Started: 

Depending on the size and complexity of your organization, there are an infinite number of ways to design your VDI network.  For this post we are going to use a very simple architecture for illustration purposes.  For more complex environments you may want to consider the value that products like Citrix or the services of an expert cloud solution specialist can bring to the table.

Here you can see, via the Cloud A network topology view, that we’ve set up two groups of servers on separate networks, each with its own router.


The Default network is the one we have set up for our VDI instances (we have 3, but you can have as many as you like). If you want to know how to set up this type of infrastructure in more detail, please have a look at the blog post we did on that here.

Once you have set up your cloud infrastructure, you can connect to it via some form of remote desktop solution. To do so, you will want to ensure your instances are configured with external IP addresses and that your security profile has been updated to allow RDP connections, so that you can reach your private VDI cloud.

Configure the Chromebook

At this point, connecting your Chromebook to your VDI cloud is as simple as installing a Chrome extension, such as the one you can find here:


When you launch the app, you will see a login prompt with all the same fields as the native RDP client:


Congratulations! You are now using Windows on a Chromebook via your own private VDI cloud.
