In the end, the panel agreed unanimously that in most cases public cloud offers a more attractive ROI than deploying a private cloud. That said, the group identified a few scenarios where private cloud may be more attractive.
Stadil, of Scalr, mentioned NASA’s cloud workflows, processing petabytes of data coming down from space every day. The bandwidth required to handle this data is not available from public clouds and therefore NASA requires private cloud infrastructure.
Germany’s strict data residency and privacy laws were discussed, as was the gaming (casino) industry. Both scenarios demand absolute control over data, which in some cases leaves private cloud as the only option.
The panel also noted that some clients simply want to own their data and the underlying cloud platform. That preference is hard to argue with, and it isn’t financially driven.
Control was a factor brought up by a member of the audience, who stated that some organizations would not want mission-critical workflows in the public cloud because they would have to rely on a third party to keep their business running. The panelists fundamentally disagreed, with Mark Williams stating that “Outages are a reality, whether public or private cloud.” He went on to say that most outages are caused by humans, a factor in public and private clouds alike. Dan Spurling backed this up, saying that when it comes to trusting his internal cloud team versus a public cloud team, he’ll trust the public cloud team every time, purely for their experience and scale.
I personally agree with the panel on this topic, as we are seeing this become less and less of an issue with clients at Cloud-A. It is our reputation for superior uptime and hands-on support that allows organizations to trust us with hosting the applications that run their business.
This was a very valuable track session, with many of the “cloud debate” questions we hear every day discussed in real time by people with real experience. In the end, cases were made for both private and public cloud. If cost is the only consideration, public cloud wins almost every time, but cost is not always the only factor when deciding on your organization’s cloud strategy.
Chief Innovation Officer, Redapt
As Chief Innovation Officer, Jeff oversees Cloud Strategy & Architecture at Redapt, working with IT executives across the country as they solve current and future business problems with solutions that will stand the test of time. Jeff has worked in IT and operations management consulting for over 15 years, helping companies across all verticals save millions by investing in technology and IT practices that will meet the needs of both…
Co-Founder and CEO, Rancher Labs
Sheng is a co-founder and CEO of Rancher Labs. Prior to starting Rancher, Sheng was CTO of the Cloud Platforms group at Citrix Systems after their acquisition of Cloud.com, where he was co-founder and CEO. Sheng has more than 15 years of experience building innovative technology. He was a co-founder at Teros, which was acquired by Citrix in 2005, and led large engineering teams at SEVEN Networks and Openwave Systems. Sheng started his…
VP – Tech Services, Getty Images
Proven executive with over 16 years of experience leading technology groups and cultivating world-class performance with an enterprise outlook. Extensive track record building and developing teams of high-talent leaders and high-output individuals located both on and off-shore. Holds an MBA from the University of Washington.
Founder, Scalr
Sebastian Stadil has been a cloud developer since 2004, starting with web services for e-commerce and then for computational resources. He founded the Silicon Valley Cloud Computing Group, a user group of over 8,000 members that meets monthly to present the latest developments in the industry. As if that weren’t enough, Sebastian founded Scalr, an end-to-end solution to manage application infrastructure deployed across private and public cloud…
Chief Technology Officer, Redapt
Possessing deep technical knowledge of the public and private cloud space, Mark is responsible for optimizing Redapt’s private and hybrid cloud solutions as Chief Technology Officer. Mark continually balances the ecosystem of technology, products, services, and automation with customer requirements for scale, reliability, and performance. Prior to joining Redapt, Mark was VP of Cloud Infrastructure Services and the first operations…
Rossella Sblendido discussed the changes to the L2 agent in OpenStack Kilo. In a nutshell, the L2 agents in Kilo will do a better job of listening for changes in Open vSwitch, the virtual switch used by OpenStack Neutron. This allows for a smarter Neutron that can respond to network changes faster and with fewer disruptions.
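To picture the difference, here is a minimal sketch of polling versus event-driven change detection. The class names are purely illustrative, not Neutron’s actual code: the real Kilo agent reacts to switch events rather than rescanning state on a timer, which is the shift this toy example captures.

```python
class PollingAgent:
    """Old style: rescan all ports every cycle, even when nothing changed."""

    def __init__(self, switch_ports):
        self.switch_ports = switch_ports  # live view of the switch's ports
        self.known_ports = set()

    def poll(self):
        current = set(self.switch_ports)      # full scan each cycle
        added = current - self.known_ports
        removed = self.known_ports - current
        self.known_ports = current
        return added, removed


class EventDrivenAgent:
    """New style: handle only the ports that actually changed."""

    def __init__(self):
        self.known_ports = set()

    def on_port_added(self, port):            # called once per change event
        self.known_ports.add(port)

    def on_port_removed(self, port):
        self.known_ports.discard(port)
```

The polling agent pays the cost of a full scan on every cycle and only notices changes at the next poll interval; the event-driven agent does work proportional to the number of changes and sees them immediately, which is where the faster, less disruptive behavior comes from.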
Carl Baldwin, of HP, outlined the changes to the L3 agent for Kilo – and this is where we at Cloud-A are excited! The OpenStack Neutron team has completely rewritten the L3 agent for Kilo. The problem with the old L3 agent was that it was one massive agent handling all three OpenStack router types – legacy routing, HA routing and distributed virtual routing (DVR) – which made it clunky and slow, because it did all of the work for all three router types itself. In Kilo, the team has written one core, generic routing class with subclasses for each specific routing type. Baldwin compared the former L3 agent to a handyman, or “jack of all trades,” who had to know a little about everything and was stretched thin, and the new L3 agent to a “general contractor” who knows which sub-agent to “hire” for the right job. This new architecture should allow for higher performance and reliability in Neutron.
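The “general contractor” pattern Baldwin describes can be sketched as a base class with per-type subclasses and a small factory that picks the right one. These names and behaviors are our own illustration of the idea, not Neutron’s actual classes:

```python
class RouterBase:
    """Generic routing logic shared by all router types."""

    def __init__(self, router_id):
        self.router_id = router_id

    def process(self):
        raise NotImplementedError  # each router type fills this in


class LegacyRouter(RouterBase):
    def process(self):
        return f"{self.router_id}: centralized routing on a network node"


class HARouter(RouterBase):
    def process(self):
        return f"{self.router_id}: active/standby failover across network nodes"


class DVRRouter(RouterBase):
    def process(self):
        return f"{self.router_id}: routing distributed out to compute nodes"


ROUTER_CLASSES = {"legacy": LegacyRouter, "ha": HARouter, "dvr": DVRRouter}


def create_router(router_id, router_type):
    """The 'general contractor': hire the right subclass for the job."""
    return ROUTER_CLASSES[router_type](router_id)
```

The payoff is that each router type carries only its own logic, so a bug fix or optimization in, say, DVR handling no longer risks the legacy and HA code paths – the isolation the monolithic agent lacked.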
We are excited about these changes at Cloud-A. We are always evaluating new ideas and concepts for our upgrade paths to ensure we bring the most value possible to our customers, and reliability and performance are key in any networking scenario. We will continue to follow these changes in OpenStack Kilo and take them into consideration as we plan upgrades.
Stay tuned for more inside information from the team attending the conference as we will be posting our thoughts from Vancouver all week long!
Here are links to the talks:
Carl Baldwin OpenStack Summit Schedule
Rossella Sblendido OpenStack Summit Schedule
The whole goal of this trip is to learn how we can improve the Cloud-A experience for our users. The OpenStack Summit will allow us to interact with fellow OpenStack operators and developers to gain perspective on how organizations all over the world are running their private or public clouds. We will be meeting with hardware vendors to educate ourselves on the latest and greatest technology available. As active members of and contributors to the OpenStack community, the Summit will give us the ability to have our say on the technical direction of the project.
While we have several meetings set up already, we are looking forward to booking more time with customers and partners to get feedback and learn how we can continue to innovate and bring the best public cloud experience to Canada that we can.
Cloud-A team members will be in Vancouver from Friday, May 14 until Friday, May 22. Please email email@example.com to set up a meeting.
Please also look out for Cloud-A representatives at several of the OpenStack Summit evening events.
David Linthicum recently wrote an article for InfoWorld called: Fix
“Although most enterprises are reluctant to spend the money to redesign and rebuild applications, the fact is you’ll spend the money anyway: If you do not use your public cloud resources effectively, you’ll pay more to operate the applications. That accumulated cost is usually much higher than the cost of refactoring an application in the first place.”
It’s true that “Bad applications moved to the cloud will be bad applications in the cloud.” In our view, however, the opportunity is greater than solving that challenge alone.
What you will need before you get started:
Almost all of our customers at Cloud-A need to manage backup, archival, and disaster recovery data in a variety of different ways. This speaks to the diverse nature of our customers, who span a wide array of industries including healthcare, financial services, higher education, and the government sector. Each customer has its own internal requirements for data protection, in addition to the requirements of its clients and relevant governing bodies. Some customers combine file-level backup solutions with periodic snapshots so that they have point-in-time instances of their systems to recover to. Here are some of the more popular backup solutions we’ve seen deployed on Cloud-A so far: