
CloudBolt Blog

7 Takeaways From the Red Hat Summit

Posted by Justin Nemmers

6/19/13 8:27 AM

Part of the CloudBolt team at Red Hat Summit 2013. Sales Director Milan Hemrajani took the picture.

A few sales folks and I have returned from a successful Red Hat Summit in Boston, MA. With over 4,000 attendees, we were able to leverage an excellent booth position to talk to many hundreds of people. One of the things that I love about my role here at CloudBolt is that I am constantly learning. I particularly enjoy speaking with customers about the types of problems they run across in their IT environments, and I take every chance I can to learn more about their IT challenges. Some of what we heard reflects common themes we hear a lot here at CloudBolt, but a few items were surprising, as some organizations are still in earlier stages of their modernization efforts than I would have expected.

  1. Not everyone has heavily virtualized their enterprise.
    Sure, there are some environments where virtualization doesn’t make a lot of sense, such as parallelized but tightly CPU-bound workloads, or HPC environments. But what surprised me was the number of organizations I spoke with that made little or very limited use of virtualization in the data center. It’s not that they didn’t see the value of it; more often than not, they still made use of Solaris on SPARC, or had old-school management that had not yet accepted that running production workloads on virtualized servers is long-established practice. For these folks and others, I’d like to introduce a topic I call “Cloud by Consolidation” (in a later blog post).
     
  2. Best-of-Breed is back.
    Organizations are tired of being forced to use a particular technology just because it came with another product, or because it comes from a preferred vendor. Forcing an ill-fitting product on a problem often results in longer implementation times, which consume more team resources than simply implementing the right technology for the problem at hand. Your mechanic will confirm that the right tool makes any job easier. It’s no different with enterprise software.
       
  3. Customers are demanding reduced vendor lock-in.
    IT organizations have a broad range of technologies in their data centers. They need a cloud manager capable of effectively managing not just what they have installed today, but what they want to install tomorrow. For example, a customer might have VMware vCenter today, but is actively looking at moving more capacity to AWS. Alternatively, they have one data center automation tool and are looking to move to another (see my next point, #4). Another scenario is not having to wait for a disruptive technology to be better supported before implementing and testing it in your own environment, all while managing it with existing technology. Good examples:
    • The gap between CloudForms (formerly ManageIQ) and its ability to manage OpenStack implementations
    • Nicira software-defined networking and the ability to manage it with vCloud Automation Center (vCAC, formerly DynamicOps)
    Either way, customers are tired of waiting as a result of vendor lock-in.
     
  4. Customers are increasingly implementing multiple Data Center Automation (DCA) tools. 
    This is interesting in the sense that it used to be that an IT organization would purchase a single DCA technology and implement it enterprise-wide. I was surprised by the number of customers that were actively pursuing a multi-DCA strategy in their environments. Our booth visitors reported that they primarily used HP Server Automation and, to a lesser extent, BMC BladeLogic. Puppet and Chef were popular tools that organizations are implementing in growth or new environments, like new public cloud environments. Either way, these customers see definite value in using CloudBolt C2 to present DCA-specific capabilities to end users, significantly increasing the power of user self-service IT while decreasing complexity in the environment.
     
  5. Lots of people are talking about OpenStack. Few are using it.
    For every 10 customers that said they were looking at OpenStack, 10 said they were not yet using it. There has certainly been an impressive level of buzz around OpenStack, but we haven’t seen a significant number of customers that have actually installed it and are attempting to use it in their environments. I think Red Hat’s formal entry into this space will help, because they have a proven track record of taming a seemingly untamable mix of rapidly-changing open source projects into something that’s supportable in the enterprise. This does not, however, mean that customers will be making wholesale moves from their existing (and largely VMware-based) virtualization platforms to OpenStack. Furthermore, there is still significant market confusion about what Red Hat is selling. Is it RHEV? Is it OpenStack? Do I need both? These are all questions I heard more than once from customers in Boston.
     
  6. Open and Open Source aren’t the same thing.
    I spent too many years at Red Hat not to know that this is the case, but I feel it’s extremely important to mention here. Many customers told us that they wanted open technologies, but in these cases, open meant tools and technologies that were flexible enough to interoperate with a lot of other technologies and reduce overall vendor lock-in. Sure, an Open Source development model can be a plus, but these customers were most interested in their tech working, working well, and working quickly.
     
  7. Most IT Orgs want Chargeback, but few businesses are willing to accept it.
    Thus far, the only groups I’ve chatted with who actually use some chargeback mechanism are service providers with external customers. Pretty much every other IT organization seems to face significant pressure against chargeback from the businesses they support. Showback pricing helps counter this resistance, and over time should help more IT organizations win the battle over chargeback. IT organizations should be leaping at the chance to collect and report per-group or per-project costs: that is a critical piece of information that businesses need to make effective decisions. Business-Driven IT has been a necessary step in the evolution of IT for a long, long time. IT needs to get with the program and make visible to the business the types of information the business needs to make effective decisions. And on the flip side, the business needs to get with the program and accept that its teams and projects will be held responsible for their IT consumption. (A minimal sketch of what a showback rollup might look like follows this list.)
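To make showback concrete, here is a minimal, hypothetical sketch of the kind of per-group rollup a cloud manager could produce. The VM records, rates, and field names are all invented for illustration; this is not CloudBolt C2’s actual data model or API.

```python
from collections import defaultdict

# Hypothetical inventory records; in practice these would come from a
# cloud manager's database, not a hard-coded list.
VMS = [
    {"name": "web-01", "group": "Marketing", "cpus": 2, "mem_gb": 4},
    {"name": "web-02", "group": "Marketing", "cpus": 2, "mem_gb": 4},
    {"name": "etl-01", "group": "Finance",   "cpus": 8, "mem_gb": 32},
]

# Assumed monthly rates per resource unit (made up for this example).
RATE_PER_CPU = 25.00   # dollars per vCPU per month
RATE_PER_GB = 5.00     # dollars per GB of RAM per month

def monthly_cost(vm):
    """Price one VM using the flat, assumed rate card above."""
    return vm["cpus"] * RATE_PER_CPU + vm["mem_gb"] * RATE_PER_GB

def showback_report(vms):
    """Aggregate monthly cost per group -- showback, not chargeback:
    the report is informational and no money actually changes hands."""
    totals = defaultdict(float)
    for vm in vms:
        totals[vm["group"]] += monthly_cost(vm)
    return dict(totals)

if __name__ == "__main__":
    for group, cost in sorted(showback_report(VMS).items()):
        print(f"{group:12s} ${cost:10.2f}/month")
```

Even a rollup this simple gives the business the per-group visibility that showback is meant to provide, without the political fight that actual chargeback invites.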

So how do you get started recognizing the value of integrating IT with the business? Start here.

We’re looking forward to exhibiting at the next Red Hat Summit, which is slated to be held in San Francisco’s Moscone North and South exhibition halls. And if you thought we made a big splash at this year’s Summit… just wait until you see what we have in the works!


Topics: Virtualization, Cloud, Enterprise, Red Hat, Challenges, Vendors

Have Your Cloud Vendors Spent Time in the Data Center?

Posted by Justin Nemmers

5/30/13 5:34 PM


As part of a project kickoff meeting yesterday, I walked through a massive data center in Northern Virginia. It’s the same facility that houses huge portions of Amazon Web Services’ us-east region, alongside nearly every other major ‘who’s who’ of the Internet age.

Several of the people who accompanied me on this tour marveled at both the expansive scale and the seemingly strict order of the cages, racks, and hallways. Seeing it all through the eyes of folks who had not been in a data center before got me thinking about a simple question: “Has your vendor spent time in a data center?”

I pose this question both literally and figuratively. For enterprises, the data center is more than just a location: it encompasses business logic, processes, software, licenses, infrastructure, personnel, technology, and data. Saying someone has been “in the data center” imparts a certain gravity, implying capability, responsibility, and knowledge. For a technology, being in the data center means it’s likely a critical component of the business, and that it has most likely met numerous standards for not just functionality, but also reliability and security.

When it comes to IT environments, then, there are really two categories: the technologies, people, and businesses that have experience working in a data center, and everyone and everything else. Nowhere is this notion more important, and more true, than in enterprise IT.

Innovation happens in the data center because of the unique problems encountered with IT at scale. If a vendor is not familiar with the types of issues organizations face at data center scale, its products will likely exhibit numerous limitations in capability and process alike. More insidious still, such a vendor may not understand how IT organizations interact with and manage the data center environment in the first place.

Actual results may vary, but I’d venture that many solutions born outside the data center tend to cost more to implement and have thornier integration issues than promised. The likely reason: they force IT organizations and the business to wholly change their approach, rather than presenting a technology that fuses well with existing processes and lets those organizations choose when and how to evolve.

IT organizations need solutions born in and for the data center. Looking to a team that has significant experience building, managing, selling to, and supporting the data center environment can be a significant benefit. Thankfully, CloudBolt is one such company, with substantial data center experience, and C2 was designed and built with the data center in mind. This has several effects:

  • For one, we’ll understand your actual problem, not the problem we, as a vendor, want you to have.
  • Two, we’ll be disinclined to wedge our product in places where it doesn’t fit well, because we know what it takes to support a product in the enterprise, and we definitely don’t want to support an ill-fitting product in a data center.
  • Lastly, we’ll allow you to both implement the new tool, and continue to keep up business-as-usual. No sweeping, massive change required up front.

Collectively, our team has spent over 40 person-years in the data center. It shows in how we interact with customers, and it definitely shows in our product. 

Why not give our whitepaper a once-over, and then take C2 for a test drive?


Topics: Enterprise, Challenges, Vendors, Data Center

The C2 Cloud Manager Value Play: IT in a Business Context

Posted by Justin Nemmers

5/13/13 12:53 PM


The march toward simplicity in technology and data centers is one that grows more difficult with every technical innovation that occurs. For years, CIOs and IT managers have maintained that standardization on a select provider’s toolset will help simplify their IT enterprise. “Standardize!  Reduced fragmentation will set you free,” the typical IT vendors will shout. However, reality is just not that simple. I’ve made some other cases for why the mentality of strict standardization isn’t necessarily all it’s cracked up to be, but I’m going to take a different approach this time. 

One lament I hear pretty frequently when talking to customers’ non-IT leadership and management is that their IT organizations just don’t understand how the business actually needs not just to consume IT, but also to track and measure various metrics from the IT organization in ways that make sense to the business.

Let’s look at this a bit more practically for a moment. I used this analogy with a CFO last week, and it resonated well in describing the real issue that the non-IT leadership types have with IT as a whole.

In a large pharmaceutical company, there is a fleet of company-owned cars. A recall is issued for one year of a particular model because of poor paint quality. Upon learning that, the fleet manager can not only tell you exactly how many of those cars she has in her fleet, but also exactly who each car is assigned to, which ones are green, and where each one is garaged. The fleet manager is able to present information about her part of the business in a way that makes sense to management. How is it that IT does not operate with the same level of intelligence?

Now let’s apply the same thought process to IT. The CFO wants to know what percentage of the IT budget is being used by a particular project. Enter the IT organization. The real numbers behind the CFO’s request are daunting. The IT organization is juggling thousands of VMs, different licensing models and costs for software, different hardware, multiple data center locations, and a convoluted org chart, to name just a few complications. Different environments have different cost structures, which adds complexity to reporting: you must understand not just what a VM is running, but where it is running.

And that’s a relatively uncomplicated example. What happens when you start to add applications, software licenses, configuration management tools (HP SA! Puppet! Chef! SaltStack!), multiple data centers, differing virtualization technologies (VMware! Xen! KVM!), multiple versions of the same technology, multiple project teams accessing shared resources, multiple Amazon Web Services public cloud accounts, and so on?
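To see why where a VM runs matters as much as what it runs, here is a small, hypothetical sketch of the environment-dependent rate lookup involved. The environments, rates, and record layout are invented for illustration and are not taken from any real product.

```python
# Assumed per-hour rates by environment; a real cost model would also
# cover licenses, storage tiers, networking, and support contracts.
RATES = {
    "on-prem-vmware": {"cpu_hour": 0.030, "gb_hour": 0.004},
    "aws-us-east":    {"cpu_hour": 0.045, "gb_hour": 0.006},
}

# Hypothetical usage records: (project, environment, cpus, mem_gb, hours)
USAGE = [
    ("drug-trial-x", "on-prem-vmware", 4, 16, 720),
    ("drug-trial-x", "aws-us-east",    8, 32, 200),
    ("erp-upgrade",  "on-prem-vmware", 2,  8, 720),
]

def project_costs(usage, rates):
    """Roll usage up to per-project cost, pricing each record by the
    environment it ran in -- the same VM shape costs different amounts
    in different places."""
    totals = {}
    for project, env, cpus, mem_gb, hours in usage:
        rate = rates[env]
        cost = (cpus * rate["cpu_hour"] + mem_gb * rate["gb_hour"]) * hours
        totals[project] = totals.get(project, 0.0) + cost
    return totals

# Answers the CFO's question: what share of spend is each project?
print(project_costs(USAGE, RATES))
```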

This seemingly simple request reveals the main frustration that non-IT leadership faces nearly every time they ask one. At the core of the problem is that IT processes and technologies were not built to provide this transparency. Technologies such as virtualization, cloud, and networking were designed and implemented to provide high availability and meet an SLA; they were not designed to offer reporting transparency or cost accountability. The end result: the business cannot understand IT, and vice versa.

The good news is that the capabilities needed to resolve this imbalance exist today. When implemented in an environment, CloudBolt enables IT managers to answer the questions their non-IT leadership is asking: CloudBolt enables IT in a Business Context. CloudBolt C2 solves more than just the problems that CIOs, CTOs, and IT directors and managers have. For the first time, C2 enables non-technical leadership to view IT in a way that’s analogous to how they look at any other portion of their business, which is good for both the business and IT.

It’s time for IT in a Business Context. It’s time for Business-Driven IT.

Take a look at our Benefits Overview, and see how we can make a difference today.


Topics: IT Challenges, Enterprise, Business Challenges, IT Organization, Vendors

Cloud Managers Will Change IT Forever

Posted by John Menkart

2/20/13 10:37 AM

In numerous conversations with customers and analysts, it has become clear that there is an industry consensus: Cloud Managers are as game-changing for IT as server and network virtualization themselves.  Among those looking longer term at the promise of Cloud Computing (public, private, and hybrid), it is clear that the Cloud Manager will become the keystone of value.  Many believe that Cloud Managers will initiate the next major wave of change in IT environments.  How?  Well, let’s look to the past to predict the future.

Proprietary Everything

Back in the early 80’s, general-purpose computers were first spreading across the business environment. These systems were fully-proprietary mainframes and minicomputers. The hardware (CPU, memory, storage, etc.), operating systems, and even any available software all came from the specific computer manufacturer (vendors included DEC, Prime, Harris, IBM, HP, and DG, amongst others).  Businesses couldn’t even acquire compilers for their systems from a third party; they were only available from the system’s manufacturer.

Commodity OS leads to Commodity Hardware

[Figure: Agility and maturity of IT environments, step 1]

Broad interest in, and adoption of, Unix started a sea change in the IT world.  As more hardware vendors supported Unix, it became easier to migrate from one vendor’s system to another.  Additionally, vendors began building their systems on commodity x86-compatible microprocessors, as opposed to building proprietary CPU architectures optimized around their proprietary OSes.

Architecture-compatible hardware not only accelerated the move to commodity OS (Unix, Linux and Windows), but in turn, increased pressure on vendors to fully commoditize server hardware.  The resulting commoditization of hardware systems steeply drove down prices.  To this day, server hardware largely remains a commodity.

Virtualization Commoditizes Servers

 

[Figure: Agility and maturity of IT environments, step 2]

Despite less expensive commodity operating systems and commodity hardware, modernizing enterprise IT organizations were still spending large sums on new server hardware to accommodate the rapidly growing demands of new applications.  In large part, IT organizations had a problem taking full advantage of the hardware resources they were spending on; server utilization became a real issue.  Procurement of servers still took a considerable amount of time due to organizational processes.  Every new server required a significant amount of effort to purchase, rack and stack, and eventually deploy.  Power and cooling requirements became a significant concern.  The integration of storage, networking, and software deployment and maintenance still introduced considerable delays into workflows reliant on new hardware systems.

Server virtualization arrived commercially in the late 1990’s and started getting considerable traction in the mid 2000’s.  Virtualization of the underlying physical hardware answered the thorny utilization issue by enabling multiple individual server workloads with low individual utilization to be consolidated on a single physical server.  Virtualization also provided a limited solution to the procurement problem, and helped with the power and cooling issues posed by rampant hardware server growth. Networking, storage, and application management remained disjointed, however, and typically still took as long to implement as before the advent of virtualization, becoming a major impediment to flexibility in enterprise IT shops.

Now we find ourselves in 2013.  Most enterprise IT shops have implemented some level of virtualization. All of the SaaS and Cloud-based service providers have standardized on virtualization. Virtual servers can be created rapidly and at no perceived cost other than associated licenses, so VM Servers are essentially a commodity, although the market share for the underlying (enabling) technology is clearly in VMware’s favor at this point.

The problem with these commodity VM servers is that making them fully available for use still hinges on integrating them with other parts of the IT environment that are far from commodity and complex to configure.  Each VM’s dependency on networking, automation tools, storage, etc. hinders the IT group’s speed and flexibility in configuring these resources and providing rapid access to them for the business.

Network Virtualization arrives

A huge pain point in flexibly deploying applications and workloads is that networking technology is still largely based on the physical configuration of network hardware devices across the enterprise. The typical enterprise network is both complex and fragile, a condition that does not encourage rapid change in the network layer to accommodate business or mission application requirements. An inflexible network that is available is always preferred to a network that failed because of the unintended consequences of a configuration change.

In much the same way as server virtualization abstracted the server from the underlying hardware, network virtualization completely abstracts the logical network from the physical network.  Using network virtualization, it is now possible to free the network configuration from the physical devices, enabling rapid deployment of new virtual networks and more efficient management of existing ones.  Rapid adoption of network virtualization technology in the future is all but guaranteed.
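As a purely illustrative sketch (no vendor’s actual API; all names are invented), the decoupling can be pictured as a logical network defined entirely as data, which a network virtualization layer then realizes on whatever physical substrate is present:

```python
from dataclasses import dataclass, field

@dataclass
class LogicalNetwork:
    """A logical network described independently of any physical switch
    or router; the virtualization layer maps it onto real hardware."""
    name: str
    cidr: str
    segments: list = field(default_factory=list)

# No physical device names, ports, or VLAN IDs appear in the definition,
# so the same logical network can be realized anywhere.
three_tier = LogicalNetwork(
    name="app-prod",
    cidr="10.20.0.0/16",
    segments=["web", "app", "db"],
)

def realize(network, substrate):
    """Stand-in for the virtualization layer: bind the logical model to
    a named physical substrate (details are entirely hypothetical)."""
    print(f"Realizing {network.name} ({network.cidr}) on {substrate}: "
          f"segments={network.segments}")

realize(three_tier, "datacenter-east")
realize(three_tier, "datacenter-west")  # same logical net, new location
```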

Commoditizing all IT resources and compute

 

[Figure: Agility and maturity of IT environments, step 3]

With both network and server virtualization, we are closer than ever to the real benefit of 'Cloud Computing': the promise of fully commoditized IT resources and compute.  To get there, however, we need to coordinate and abstract the management and control of the modern enterprise’s internal IT resources, along with the compute resources it consumes from external public cloud providers.

To enable rapid and flexible coordination of IT resources, the management of those enterprise application resources must be abstracted from the underlying tools.  The specific technologies involved (server virtualization, network virtualization, automation, storage, public cloud providers, etc.) can then be viewed as commodities, and can be exchanged or deprecated without negatively affecting the business capabilities of enterprise IT. This abstraction also allows the IT organization to flexibly adopt new and emerging technologies, adding functionality and capability without exposing the business to the often sharp edges of leading-edge technology.
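Here is a minimal sketch of what that abstraction might look like in code. The interface and provider names are invented for illustration and are not CloudBolt C2’s actual design.

```python
from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    """The commodity interface: any resource technology that can honor
    it is interchangeable from the business's point of view."""
    @abstractmethod
    def provision(self, name: str, cpus: int, mem_gb: int) -> str: ...

class VCenterProvider(ComputeProvider):
    def provision(self, name, cpus, mem_gb):
        # Would call vCenter APIs here; elided in this sketch.
        return f"vcenter://{name}"

class AwsProvider(ComputeProvider):
    def provision(self, name, cpus, mem_gb):
        # Would call EC2 APIs here; elided in this sketch.
        return f"aws://{name}"

def deploy(provider: ComputeProvider, name: str) -> str:
    """Business-facing request; swapping providers changes nothing here."""
    return provider.provision(name, cpus=2, mem_gb=8)

# The underlying technology is a commodity: exchange it at will.
print(deploy(VCenterProvider(), "erp-web-01"))
print(deploy(AwsProvider(), "erp-web-01"))
```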

The necessary resource abstraction and control is the domain not just of the virtualization manager, but of the Cloud Manager. In short, the Cloud Manager commoditizes compute by commoditizing IT resources across the enterprise and beyond.

With such an important role it is no wonder that every vendor wants to pitch a solution in this space. The orientation or bias of the various vendors’ approaches in developing a Cloud Manager for enterprise IT will play a critical role in the ultimate success of the products and customers that implement them.


Topics: Network Virtualization, IT Challenges, Virtualization, Cloud Manager, John, Enterprise, IT Organization, Agility, Compute, Hardware

The Language Behind CloudBolt C2 - a Powerful Combination

Posted by Bernard Sanders

2/4/13 8:23 AM

A couple of us were speaking to an industry analyst the other day who was asking about our technology stack when he remarked:

“The selection of Python and Django three years ago was either truly visionary or borderline crazy, but it’s exactly what you would choose if you were to start today.” – Bernd Harzog, The Virtualization Practice

I’d like to take credit for the visionary part of that statement, but the truth is that Python has been a solid, logical choice for enterprise development for longer than it usually gets credit for.  Python has been used for years at the core of intensive production systems by everyone from Google to NASA, and it has performed admirably under the pressure.

Some might say that the choice of a back-end technology should be irrelevant to consumers of a product, but the fact is that a language, though unseen by end users, makes a huge impact on their experience.  A language should inspire a development team to deliver functionality quickly and reliably and allow engineers to focus simultaneously on the dual goals of robust architecture and an excellent end user experience.  In a similar way to how dogs and their owners tend to start looking like one another after years together, programmers’ thought patterns and behaviors are influenced by the attributes of the language and framework they use every day.  For example:

  • C programmers think overwhelmingly in computer science terms, at the expense of user experience.
  • .NET teams tend to think excessively graphically, with the result that their architectures are not as interoperable, integration-ready, and scalable as they should be.
  • Perl encourages developers to think of the most obfuscated way of accomplishing a task, rather than the most transparent.


Just as dogs and their owners begin to look alike, developers begin to think in their "native" language (photos courtesy of Cesar)

In contrast to other options, Python influences programmers to constantly consider extensibility, simplicity of design, ease of installation/management, and the principle of least surprise, all through the example that it sets.  It does this while enabling more rapid and responsive development than any other language I have used. 
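As one illustration of the extensibility habit described above, here is a small, hypothetical Python sketch (not CloudBolt C2 code) of the decorator-based plugin registry pattern the language makes natural:

```python
# A tiny plugin registry: new capabilities are added by registering a
# function, with no changes to the dispatch code below.
PROVISION_HOOKS = {}

def hook(name):
    """Decorator that registers a provisioning hook under a name."""
    def register(func):
        PROVISION_HOOKS[name] = func
        return func
    return register

@hook("dns")
def add_dns_record(server):
    print(f"registering {server} in DNS")

@hook("monitoring")
def enroll_monitoring(server):
    print(f"enrolling {server} in monitoring")

def provision(server, steps):
    """Run the requested hooks in order; unknown step names fail loudly,
    in keeping with the principle of least surprise."""
    for step in steps:
        PROVISION_HOOKS[step](server)

provision("web-01", ["dns", "monitoring"])
```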

Though CloudBolt C2 was introduced later than some other cloud management systems, we have seen it leapfrog other solutions and gain acknowledgement as being easier to install, more flexible and scalable, and sporting a cleaner and simpler user interface than other products in the space.  There are several factors that enabled us to surpass other solutions, but at the core of these is the Python programming language.


Topics: Innovation, Feature, Enterprise, Development, Bernard