We have been seeing increasing demand for our object storage product, Bulk Storage, over the past few months, which is exciting for us because we think object storage is a phenomenal solution for many specific applications. We wanted to give our followers an overview of object storage technology and highlight some effective use cases. The biggest hurdle we see to adoption of Bulk Storage is understanding how exactly the technology can be used. As a result, a big portion of our job at Cloud-A is to play matchmaker and point users toward existing third-party software solutions that integrate with object storage.
As some of our followers might know, the Cloud-A team spent last week in Vancouver for the OpenStack Summit. It was our first summit, and it was nice to have it in our home country. The summit gave us the opportunity to interact first-hand with our community, hear keynotes about the direction of the technology, and gather input from industry leaders and vendors about new and emerging technology in the OpenStack ecosystem. All in all, there was some very familiar messaging about how OpenStack has grown past infancy and is enterprise-ready – something we at Cloud-A have never doubted. To be perfectly honest, we don't feel much of what was announced was groundbreaking, but we still wanted to highlight some of the themes and news from last week's OpenStack Summit in terms of future value for our customers.
The Cloud-A team attended the OpenStack Summit track on Cloud Economics and Strategy with Jeff Dickey and Mark Williams of Redapt, Dan Spurling of Getty Images, Sebastian Stadil of Scalr and Sheng Liang of Rancher. The panel discussed key economic and strategic lessons learned through their experience managing both public and private cloud infrastructure. Pricing trends among public cloud vendors and the labour costs of private clouds were discussed and compared.
In the end, the panel agreed unanimously that in most cases, public cloud has a more attractive ROI when compared to deploying a private cloud. With that said, the group identified a few scenarios when private cloud may be more attractive.
When Performance Is Mission Critical
Stadil, of Scalr, mentioned NASA’s cloud workflows, processing petabytes of data coming down from space every day. The bandwidth required to handle this data is not available from public clouds and therefore NASA requires private cloud infrastructure.
Laws and Regulations
Germany’s strict data residency and privacy laws were discussed, as was the gaming (casino) industry. Both scenarios require absolute control over data, which leaves private cloud as the only option in some cases.
Wanting to Own It
The panel noted that some clients simply want to own their data and the underlying cloud platform. That preference is hard to argue with, and it isn’t financially driven.
Control as a Decision Making Factor?
Control was a factor brought up by a member of the audience, who stated that some organizations would not want mission-critical workflows in the public cloud because they would have to rely on a third party to keep their business running. The panelists fundamentally disagreed, with Mark Williams stating that “Outages are a reality, whether public or private cloud.” He went on to say that in most cases outages are caused by humans, a factor in both public and private clouds. Dan Spurling backed this up by saying that when it comes to trusting his internal cloud team vs. a public cloud team, he’ll trust the public cloud team every time, purely for the experience and scale they have.
I personally agree with the panel on this topic, as we are seeing this become less and less of an issue with clients at Cloud-A. It is our reputation for superior uptime and hands-on support that allows organizations to trust us with hosting the applications that run their business.
This was a very valuable track session, with many of the “cloud debate” questions that we hear about every day discussed in real time, by people with real experience. In the end, cases were made for both private and public cloud. If cost is the only thing considered, public cloud wins almost every time, but cost is not always the only thing to consider when deciding on your organization’s cloud strategy.
As Chief Innovation Officer, Jeff oversees Cloud Strategy & Architecture at Redapt. He works with IT executives across the country as they solve current and future business problems with solutions that will stand the test of time. Jeff has worked in IT and operations management consulting for over 15 years, helping companies across all verticals save millions by investing in technology and IT practices that will meet the needs of both…
Sheng is a co-founder and CEO of Rancher Labs. Prior to starting Rancher, Sheng was CTO of the Cloud Platforms group at Citrix Systems after their acquisition of Cloud.com, where he was co-founder and CEO. Sheng has more than 15 years of experience building innovative technology. He was a co-founder at Teros, which was acquired by Citrix in 2005, and led large engineering teams at SEVEN Networks and Openwave Systems. Sheng started his…
Proven executive with over 16 years of experience leading technology groups and cultivating world-class performance with an enterprise outlook. Extensive track record building and developing teams of high-talent leaders and high-output individuals located both on and off-shore. Holds an MBA from the University of Washington.
Sebastian Stadil has been a cloud developer since 2004, starting with web services for e-commerce and then for computational resources. He founded the Silicon Valley Cloud Computing Group, a user group of over 8,000 members that meets monthly to present the latest developments in the industry. As if that weren’t enough, Sebastian founded Scalr, an end-to-end solution to manage application infrastructure deployed across private and public cloud…
Possessing deep technical knowledge of the public and private cloud space, Mark is responsible for optimizing Redapt’s private and hybrid cloud solutions as Chief Technology Officer. Mark continually balances the ecosystem of technology, products, services, and automation with customer requirements for scale, reliability, and performance. Prior to joining Redapt, Mark was VP of Cloud Infrastructure Services and the first operations…
UPDATE: Check out the Cloud Economics talk for yourself!
The Cloud-A team attended the OpenStack Summit 12:05PM Track on “Neutron L2 and L3 agents: How They Work and How Kilo Improves Them” by Carl Baldwin of HP and Rossella Sblendido of SUSE. We wanted to arm ourselves with knowledge to help us make informed decisions about our networking upgrade paths and also about the additional value that upgrading to OpenStack Kilo will bring to our customers. The track provided an overview of the L2 and L3 deficiencies before Kilo and the enhancements and performance gains that will be possible in future releases of OpenStack.
Rossella Sblendido discussed the changes to the L2 agent in OpenStack Kilo. In a nutshell, the L2 agent in Kilo will do a better job of listening for changes in Open vSwitch, the virtual switch used by OpenStack Neutron. This will allow for a smarter Neutron, giving it the ability to respond to network changes faster and with fewer disruptions.
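To illustrate why listening for changes beats periodically rescanning, here is a simplified sketch in Python. The class and method names are our own invention for this post, not Neutron's actual code: a polling agent must diff the full port list every cycle, while an event-driven agent is told about exactly the ports that changed.

```python
# Illustrative sketch (not Neutron's real implementation): polling vs.
# event-driven handling of virtual switch port changes.

class PollingAgent:
    """Rescans the whole switch every interval and diffs against its view."""

    def sync(self, switch_ports, known_ports):
        # Cost is proportional to the total number of ports,
        # even when nothing has changed.
        added = switch_ports - known_ports
        removed = known_ports - switch_ports
        return added, removed


class EventDrivenAgent:
    """Is notified by the switch monitor the moment a port changes."""

    def __init__(self):
        self.pending = []

    def on_port_event(self, event, port):
        # Only changed ports ever reach the agent.
        self.pending.append((event, port))

    def process(self):
        # Handle just the accumulated changes, then clear the queue.
        events, self.pending = self.pending, []
        return events
```

The event-driven agent's work scales with the number of changes rather than the number of ports, which is where the faster, less disruptive behavior described above comes from.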
Carl Baldwin, of HP, outlined the changes to the L3 agent for Kilo – and this is where we at Cloud-A are excited! The OpenStack Neutron team has completely rewritten the L3 agent for Kilo. The problem with the L3 agent in the past was that it was one massive agent that had to handle all three OpenStack router types – legacy routing, HA routing and distributed virtual routing (DVR). This meant the agent was clunky and slow because it did all of the work for all three router types. In Kilo, the team has written one core, generic routing class, with subclasses for each of the specific routing types. Baldwin compared the former L3 agent to a handyman, or “jack of all trades,” who had to know a little about everything and was stretched thin, and the new L3 agent to a “general contractor” who knows which sub-agent to “hire” for the right job. This new architecture should allow for higher performance and reliability in Neutron.
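The “general contractor” pattern Baldwin described can be sketched roughly as follows. This is our own simplified Python illustration – the names are invented for this post and are not Neutron's actual classes: a generic base class carries the shared routing logic, each router type overrides only what differs, and a small factory “hires” the right subclass for the job.

```python
# Illustrative sketch of a generic router base class with per-type
# subclasses, as described in the Kilo L3 agent talk. Names are ours,
# not Neutron's real API.

class RouterBase:
    """Core, generic routing logic shared by every router type."""

    def __init__(self, router_id):
        self.router_id = router_id

    def kind(self):
        # Each specific router type identifies itself.
        raise NotImplementedError


class LegacyRouter(RouterBase):
    def kind(self):
        return "legacy"


class HARouter(RouterBase):
    def kind(self):
        return "ha"


class DVRRouter(RouterBase):
    def kind(self):
        return "dvr"


ROUTER_CLASSES = {"legacy": LegacyRouter, "ha": HARouter, "dvr": DVRRouter}


def create_router(router_id, router_type):
    """The 'general contractor': pick the right subclass for the job."""
    return ROUTER_CLASSES[router_type](router_id)
```

Because each subclass only carries the logic its own router type needs, no single code path has to do the work of all three – which is the performance and reliability win described above.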
We at Cloud-A are excited about these changes. We are always evaluating new ideas and concepts in our upgrade paths to ensure that we bring the most value possible to our customers. Reliability and performance are key in any networking scenario. We will continue to evaluate these changes in OpenStack Kilo and take them into consideration in our upgrade paths.
Stay tuned for more inside information from the team attending the conference as we will be posting our thoughts from Vancouver all week long!
The Cloud-A team is making its way to Vancouver this week for the OpenStack Summit. We will be setting up shop in a downtown office space, meeting with existing and prospective customers, technology vendors and integration partners. The team is extremely excited to see new and familiar faces.
The whole goal of this trip is to learn how we can better the Cloud-A experience for our users. The OpenStack Summit will allow us to interact with fellow OpenStack operators and developers and gain perspective on how organizations all over the world are running their private and public clouds. We will be meeting with hardware vendors to educate ourselves on the latest and greatest technology available. As active members of and contributors to the OpenStack community, the Summit gives us the chance to voice our opinions on the technical direction of the project.
While we have several meetings set up already, we are looking forward to booking more time with customers and partners to get feedback and learn how we can continue to innovate and bring the best public cloud experience to Canada that we can.
Cloud-A team members will be in Vancouver from Friday, May 14 until Friday, May 22. Please email email@example.com to set up a meeting.
Please also look out for Cloud-A representatives at several of the OpenStack Summit evening events.
David Linthicum recently wrote an article for InfoWorld, “Fix your applications before migrating them to the cloud,” in which he discussed the importance of refactoring the architecture of your organization’s applications when moving them to the cloud, so they can take advantage of the features of the underlying cloud platform. Linthicum says:
“Although most enterprises are reluctant to spend the money to redesign and rebuild applications, the fact is you’ll spend the money anyway: If you do not use your public cloud resources effectively, you’ll pay more to operate the applications. That accumulated cost is usually much higher than the cost of refactoring an application in the first place.”
It’s true that “Bad applications moved to the cloud will be bad applications in the cloud.” However, in our view, the opportunity is greater than solving that challenge. Read more
Although some would say the cloud is redundant to the point that you don’t need a Disaster Recovery Plan (DRP) for it, many of our clients are more comfortable because we offer them simple and easy ways to protect themselves in this regard. This approach is relatively unique in that we ensure our clients’ data is always easily accessible to them, a result of our commitment to designs that avoid client data lock-in.
Almost all of our customers at Cloud-A need to manage backup, archival and disaster recovery data in a variety of different ways. This speaks to the diverse nature of our customers, who span a wide array of industries including healthcare, financial services, higher education and the government sector. Each customer has their own internal requirements for data protection, in addition to the requirements of their clients and relevant governing bodies. Some customers combine file-level backup solutions with periodic snapshots so that they have point-in-time instances of their systems to recover to. Here are some of the more popular backup solutions we’ve seen deployed on Cloud-A so far: