CloudBolt Blog

7 Takeaways From the Red Hat Summit

Posted by Justin Nemmers

6/19/13 8:27 AM

Part of the CloudBolt team at Red Hat Summit 2013. Sales Director Milan Hemrajani took the picture.

A few sales folks and I have returned from a successful Red Hat Summit in Boston, MA. With over 4,000 attendees, we were able to leverage an excellent booth position to talk to many hundreds of people. One of the things that I love about my role here at CloudBolt is that I am constantly learning. I particularly enjoy speaking with customers about the types of problems they run across in their IT environments, and I take every chance I can to learn more about their IT challenges. Some of what we heard echoed common themes we hear a lot here at CloudBolt, but a few things were a bit surprising, as some organizations are at earlier stages of their modernization efforts than I would have expected.

  1. Not everyone has heavily virtualized his or her enterprise.
    Sure, there are some environments where virtualization doesn’t make a lot of sense—such as parallelized but tightly CPU-bound workloads, or HPC environments. But what surprised me was the number of organizations I spoke with that made little or very limited use of virtualization in the data center. It’s not that they didn’t see the value of it; more often than not, they still made use of Solaris on SPARC, or had old-school management that had not yet bought into the idea that running production workloads on virtualized servers is long-accepted common practice. For these folks and others, I’ll introduce a topic I like to call “Cloud by Consolidation” in a later blog post.
     
  2. Best-of-Breed is back.
    Organizations are tired of being forced to use a particular technology just because it came with another product, or because it comes from a preferred vendor. Too often, an IT organization is pressed into using a sub-optimal technology simply because it was bundled with another suite of products. Forcing an ill-fitting product onto a problem often results in longer implementation times, which consume more team resources than simply implementing the right technology for the problem at hand. Your mechanic will confirm that the right tool makes any job easier. It’s no different with enterprise software.
     
  3. Customers are demanding reduced vendor lock-in.
    IT organizations have a broad range of technologies in their data centers. They need a cloud manager that can effectively manage not just what they have installed today, but what they want to install tomorrow. For example, a customer might have VMware vCenter today but be actively looking at moving more capacity to AWS. Alternatively, they have one data center automation tool and are looking to move to another (see point #4 below). Another scenario is not having to wait for a disruptive technology to be better supported before implementing and testing it in your own environment, all while managing it with existing technology. Good examples:
    • The gap between CloudForms (formerly ManageIQ) and its ability to manage OpenStack implementations
    • Nicira software-defined networking and the ability to manage it with vCloud Automation Center (vCAC, formerly DynamicOps)
    Either way, customers are tired of waiting as a result of vendor lock-in.
     
  4. Customers are increasingly implementing multiple Data Center Automation (DCA) tools. 
    This is interesting in the sense that an IT organization used to purchase a single DCA technology and implement it enterprise-wide, so I was surprised by the number of customers actively pursuing a multiple-DCA strategy in their environments. Our booth visitors reported that they primarily used HP Server Automation, and to a lesser extent BMC BladeLogic. Puppet and Chef were popular tools that organizations are implementing in growth or new environments—like new public cloud environments. Either way, these customers see definitive value in using CloudBolt C2 to present DCA-specific capabilities to end users, significantly increasing the power of user self-service IT while decreasing complexity in the environment.
     
  5. Lots of people are talking about OpenStack. Few are using it.
    For every ten customers who said they were looking at OpenStack, ten said they were not yet using it. There’s certainly been an impressive level of buzz around OpenStack, but we haven’t seen a significant number of customers that have actually installed it and are attempting to use it in their environments. I think Red Hat’s formal entry into this space will help, because they have a proven track record of taming seemingly untamable mixes of rapidly changing open source projects into something that’s supportable in the enterprise. That does not, however, mean that customers will make wholesale moves from their existing (and largely VMware-based) virtualization platforms to OpenStack. Furthermore, there is still significant market confusion about what Red Hat is selling. Is it RHEV? Is it OpenStack? Do I need both? These are all questions I heard more than once from customers in Boston.
     
  6. Open and Open Source aren’t the same thing.
    I spent enough years at Red Hat to know that this is the case, and I feel it’s extremely important to mention it here. Many customers told us that they wanted open technologies—but in these cases, “open” meant tools and technologies flexible enough to interoperate with many other technologies and reduce overall vendor lock-in. Sure, an Open Source development model can be a plus, but the customers were most interested in their tech working, working well, and working quickly.
     
  7. Most IT Orgs want Chargeback, but few businesses are willing to accept it.
    Thus far, the only groups I’ve chatted with who actually use some chargeback mechanism are service providers with external customers. Pretty much every other IT organization seems to face significant pressure against chargeback from the businesses they support. Showback pricing helps counter this resistance, and over time it should help more IT organizations win the battle over chargeback (a minimal sketch of showback follows this list). IT organizations should be leaping at the chance to collect and report per-group or per-project costs; it’s a critical piece of information that businesses need to make effective decisions. Business-driven IT has been a necessary step in the evolution of IT for a long, long time. IT needs to get with the program and make visible to the business the information it needs to make effective decisions. On the flip side, the business needs to accept that its teams and projects will be held responsible for their IT consumption.
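
To make showback concrete, here’s a minimal sketch of a per-group cost roll-up. The usage records, field names, and rates are invented for illustration; none of this is CloudBolt C2’s actual data model.

```python
# Minimal showback sketch: roll up per-group VM consumption into a monthly
# cost report. All records, field names, and rates below are illustrative
# assumptions, not CloudBolt C2's actual data model.
from collections import defaultdict

RATES = {"cpu_hours": 0.05, "gb_ram_hours": 0.01, "gb_storage_months": 0.10}

usage = [
    {"group": "web-dev", "cpu_hours": 1440, "gb_ram_hours": 5760, "gb_storage_months": 200},
    {"group": "qa",      "cpu_hours": 720,  "gb_ram_hours": 2880, "gb_storage_months": 120},
    {"group": "web-dev", "cpu_hours": 360,  "gb_ram_hours": 1440, "gb_storage_months": 50},
]

totals = defaultdict(float)
for record in usage:
    for metric, rate in RATES.items():
        totals[record["group"]] += record[metric] * rate

for group, cost in sorted(totals.items()):
    print(f"{group}: ${cost:,.2f} this month")  # the showback line item for that group
```

Even a report this simple gives the business a per-group consumption signal without the friction of actual chargeback.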

So how do you get started recognizing the value of integrating IT with the business? Start here.

We’re looking forward to exhibiting at the next Red Hat Summit, slated to be held in San Francisco’s Moscone North and South exhibition center. And if you thought we made a big splash at this year’s summit… just wait until you see what we have in the works!


Topics: Virtualization, Cloud, Enterprise, Red Hat, Challenges, Vendors

What is Plain Old Virtualization, Anyway? Not Cloud, That's What.

Posted by Justin Nemmers

5/6/13 3:57 PM


I speak with a lot of customers. For the most part, many understand that virtualization is not actually “Cloud”, but rather an underpinning technology that makes cloud (at least in terms of IaaS and PaaS) possible. There are many things that a heavily virtualized environment needs in order to become cloud, but one thing is for certain: “Plain Old Virtualization” needs to learn a lot of new tricks in order to effectively solve the issues facing today’s IT organizations.

Many of those same organizations find themselves constantly underwater when it comes to the expectations from the business they’re tasked with supporting. The business wants X, the IT organization has X-Y resources. Cloud is an important tool that will help narrow this gap, but IT organizations need the right tools to make it happen.

Virtualization Alone is no Longer Sufficient

At plain old virtualization’s core is the virtualization manager. Whether it’s vCenter, XenServer, or some other tool, plain old virtualization lacks the extensibility needed to get organizations to cloud. Even virtualization managers that have added capabilities like a self-service portal or metered usage accounting are fundamentally just hypervisor managers, and they typically focus only on their own virtualization technology.

Plain old virtualization doesn’t understand your business, either. It is devoid of any notion of user or group resource ownership, and lacks the flexibility needed to layer the organizational structure into the IT environment. Instead of presenting various IT consumption options, plain old virtualization tells an organization how it must consume IT. In other words, IT administrators have to get involved, chargeback isn’t possible, and the technology has little if any understanding of organizational or business structure.
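
For contrast, here’s a toy sketch of the kind of group ownership and quota model a cloud manager layers on top. The names and fields are hypothetical, invented for illustration rather than taken from any product’s schema.

```python
# Toy sketch: business groups own environments and carry quotas, so a
# provisioning request can be checked against organizational structure.
# All names and fields here are hypothetical.
from dataclasses import dataclass

@dataclass
class Quota:
    max_vms: int   # most VMs the group may own at once
    max_cpus: int  # total vCPUs the group may consume

@dataclass
class Group:
    name: str
    environments: list  # e.g. ["vcenter-dev", "aws-east"]
    quota: Quota

    def can_provision(self, vms_in_use: int, cpus_in_use: int, cpus_requested: int) -> bool:
        return (vms_in_use < self.quota.max_vms
                and cpus_in_use + cpus_requested <= self.quota.max_cpus)

dev = Group("web-dev", ["vcenter-dev", "aws-east"], Quota(max_vms=10, max_cpus=40))
print(dev.can_provision(vms_in_use=8, cpus_in_use=38, cpus_requested=4))  # False: CPU quota hit
```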

Plain old virtualization is a solved problem. The value proposition for virtualization is well understood, and accepted in nearly every cross-section of IT. Virtualization managers have matured to enable additional features like high availability, clustering, and live migration, which have allowed IT organizations to remove some unneeded complexity from their stacks.

Failings of Plain Old Virtualization Managers

Many vendors that offer perfectly good plain old virtualization managers are in a process of metamorphosis. They’re adjusting their products, acquiring other technologies, and generally updating and tweaking their virtualization managers so they can claim to “enable cloud”. Whether the new capabilities are added as layered products that are components of a (much) larger solution suite, or merely folded into an ever-expanding virtualization manager, the result is a virtualization manager that tries to be more than it is. The customer ultimately pays the price for that added complexity, and often experiences increased vendor lock-in.

One of the many promises of cloud is that it frees IT organizations to make the most appropriate technology decisions for the business. This is where plain old virtualization that is trying to be cloud really gets an IT organization in trouble. Often, the capabilities presented by these solutions are not sufficient to solve actual IT issues, and the effort to migrate away from those choices is deemed too costly for IT organizations to effectively achieve without significant re-engineering or technology replacement.

CloudBolt Effectively Enables Cloud From Your Virtualization

The good news is that IT organizations don’t need to do entire reboots of existing tech in order to enable cloud in their environments. CloudBolt C2 works in conjunction with existing virtualization managers, allowing IT organizations to present resources to consumers in ways that make sense to both business and consumer alike. C2 does not require organizations to replace their existing virtualization managers; instead, it provides better management and a fully functional and interactive self-service portal so IT consumers can request servers and resources natively.

Flexibility in the management layer is critical, and it’s one place where plain old virtualization tools fall down regularly. C2 is a tool that effectively maps how your IT is consumed to how your business is organized. Avoid inflexible tools that offer your IT organization little choice now and even less in the future.


Topics: IT Challenges, Management, Virtualization, IT Organization

Why Manual VM Provisioning Workflows Don't Work Anymore

Posted by Justin Nemmers

3/25/13 10:40 AM

Let’s look through a fictional situation that likely hits a little close to home.

An enterprise IT shop receives a developer request for a new server resource that is needed for testing. Unfortunately, the request doesn’t include all of the information needed to provision the server, so the IT administrator goes back to the developer to discuss how many CPUs and how much RAM and storage are needed. That email back-and-forth takes a day. Once that conversation is complete, the IT admin creates a ticket and begins the largely manual workflow of provisioning a server. First, the ticket is assigned to the storage team to create the required storage unit. The ticket is addressed in two days, then passed on to the network team to ensure that the proper VLANs are created and accessible, and to assign IP addresses. The network team has been pretty busy, though, so their average turnaround is greater than four days. Then the ticket is handed back to the virtualization team, where the instance is provisioned, but not until two days later. Think it’s ready to hand off to the user yet?  Not yet.

An assembly-line model cannot deploy VMs as rapidly as needed. Automation is required.

The team that manages the virtual environment and creates the VMs is not responsible for installing software. The ticket is forwarded along to the software team, who, three days later, manually installs the needed software on that system, and verifies operation. The virtual server is still not ready to hand off to the developer, though!

You see, there’s also a security and compliance team, so the ticket gets handed off to those folks, who, a few days later, run a bunch of scans and compliance tests. Now that the virtual resource is in its final configuration, it’s got to be ready, right?  Nope. It gets handed off to the configuration management team, which must thoroughly scan the system in order to create a configuration instance in the Configuration Management Database (CMDB). Finally, the instance is ready to be delivered to the developer that requested it.

The tally (one day of requirements back-and-forth, two days for storage, four for networking, two for the VM itself, three for software, plus a few more for compliance scans and CMDB registration) comes to just shy of three full business weeks. What has the developer been doing in the meantime?  Probably not working to his or her full capacity.

Circumventing IT Completely with Shadow IT

Or maybe that developer got tired of waiting, and after two days went around the entire IT team and ordered an instance from AWS that took five minutes to provision. The developer was so excited about getting a resource that quickly that they bragged to fellow developers, who in turn started using AWS.

Negative Effects on IT and the Business

Either way, this scenario plays out repeatedly, and I’m amazed at how frequently it plays out just like this. The result might initially appear to be just some shadow IT, or maybe some VM sprawl from unused deployed instances; the potential damage to both the IT organization and the business, however, is far greater.

First, users frequently circumventing the IT organization looks bad. These actions call into question the IT organization’s ability to effectively serve the business, and thus strike at the very heart of the IT group’s relevance.

Furthermore, the IT consumers are the business. Ensuring that users have access to resources in near-real time should be a goal of every IT org, but IT teams and processes cannot be adjusted and transformed as quickly as demand changes. This means the IT org cannot respond with enough agility to continually satisfy business needs, which in turn means more money is spent to provide less benefit or, even worse, the business misses out on key opportunities.

IT shops need to move beyond simple virtualization and virtualization management. Why? Improved virtualization management does not solve all of the problems presented in the scenario above while (and this is key) also providing for continued growth. Tools that only manage virtualization solve only part of the problem, because they cannot unify the provisioning process around software (by going beyond plain template libraries with tools like HPSA, Puppet, or Chef) and other external mechanisms (like a CMDB). To fully modernize and adapt existing processes and teams to a cloud/service-oriented business model, every aspect of the provisioning process must be automated, as in the sketch below. It’s the only way an IT organization can hope to stay responsive enough, and avoid being locked into one particular solution, such as a single-vendor approach to virtualization. A well-designed and well-implemented Cloud Manager gives an IT org the freedom to choose the best underlying technology for the job, without regard for how it will be presented to the end user.
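
As a sketch of what that unification might look like, here is hypothetical orchestration logic that collapses the multi-team ticket relay into one automated flow. Every function is an illustrative stub; none of this is CloudBolt’s or any vendor’s actual API.

```python
# Hypothetical end-to-end provisioning flow: each step that took a separate
# team days in the manual workflow becomes a single automated call.
# All functions are illustrative stubs, not any vendor's actual API.

def allocate_storage(size_gb):        return {"disk_gb": size_gb}
def configure_network(vlan):          return {"vlan": vlan, "ip": "10.0.0.17"}
def create_vm(cpus, ram_gb, disk, nic):
    return {"cpus": cpus, "ram_gb": ram_gb, **disk, **nic}
def install_software(vm, packages):   vm["packages"] = packages
def run_compliance_scans(vm):         vm["compliant"] = True
def register_in_cmdb(vm):             vm["cmdb_id"] = "CI-0042"

def provision_server(request):
    disk = allocate_storage(request["storage_gb"])                 # storage team's step
    nic = configure_network(request["vlan"])                       # network team's step
    vm = create_vm(request["cpus"], request["ram_gb"], disk, nic)  # virtualization step
    install_software(vm, request["packages"])                      # software team's step
    run_compliance_scans(vm)                                       # security/compliance step
    register_in_cmdb(vm)                                           # CMDB registration step
    return vm

print(provision_server({"storage_gb": 100, "vlan": 220, "cpus": 2,
                        "ram_gb": 8, "packages": ["jdk", "tomcat"]}))
```

The point is not the stubs themselves but the shape: one request flows through every step in minutes, with no ticket hand-offs in between.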

Either way you look at it, IT organizations need a solution that allows them to utilize as many of their existing assets as possible while still providing the governance, security, and serviceability needed to ensure the company’s data and services are well secured and properly supported.

The Solution

Thankfully, there’s just such a Cloud Manager. CloudBolt C2 is built by a team with decades of combined experience in the systems management space, and was created from the beginning to solve this exact problem. Because we started from the first line of code to solve this entire problem, we call ourselves the next-generation cloud manager, but our customers call it a game changer. Give it a download and effortless install today, and we’ll show you that CloudBolt C2 means business.


Topics: Customer, IT Challenges, Management, Virtualization, Cloud Manager, Shadow IT, Agility

Next-Generation IT and Greenfield Cloud Infrastructure

Posted by Justin Nemmers

3/12/13 3:06 PM

The problem is consistent. Consistently difficult, that is. As an IT manager, how does one implement new technology in an otherwise running and static environment?  New technology decisions are not just difficult, but the range of questions that arise from thinking about implementation plans can seem daunting.

Whether you’re talking about switching hardware vendors, or implementing something relatively new like network virtualization, how it’s implemented in your environment will often be more critical to the project’s success than the validity of the technology itself. 

Greenfield IT is great IT.

Ideally, every environment would be brand new.  How many times have you asked yourself “Wouldn’t it be great if I could just scrap my current infrastructure and start over?”  Fundamentally, greenfield implementations like this are a good route to go for a number of reasons:

  • They allow you to select the best-of-breed and most effective technology to solve the problem at hand
  • You get the valuable opportunity to think about how the technology stack will scale in the future
  • They allow for rapid change while the environment is being built
  • Because there are few barriers, you have the opportunity to investigate other new and upcoming technologies, and you will have time to experiment 

A Cloud Manager provides significant value here.  Using one to unify the management of a lab environment allows the rapid integration of new technologies—technologies that your IT teams need to learn and gain experience with before implementing them in the production environment.  Using a Cloud Manager eases the introduction of these technologies, and unifies the management interface to make administration more predictable. Together, these tools help mold processes and the IT organization into a more agile group.

In my mind, one of the core issues here is that too few IT teams are able to think outside the box when it comes to implementing new tech. If a greenfield implementation is easier than shoehorning new tech into your existing stack, why not give it a shot? Starting with a small base of gear and intelligently growing the installation over time is a great way to migrate capacity. I have an entire blog post coming soon on how to migrate via attrition.  In the meantime, go ahead and identify a few pieces of hardware, install your preferred virtualization tool, download CloudBolt C2, and start piecing together your future architecture.  Once C2 is installed, you’ll be able to quickly layer in additional technologies like Data Center Automation, Network Virtualization, and even other virtualization or Public Cloud resources.

Happy integrating!


Topics: Virtualization, New Technology, Cloud Manager, Challenges, Implementation, Vendors, Development, Hardware

Cloud Managers Will Change IT Forever

Posted by John Menkart

2/20/13 10:37 AM

In numerous conversations with customers and analysts, it has become clear that the industry consensus is that Cloud Managers are as game-changing for IT as server and network virtualization themselves.  Among those looking longer term at the promise of Cloud Computing (Public, Private, and Hybrid), it is clear that the Cloud Manager will become the keystone of value.  Many believe that Cloud Managers will initiate the next major wave of change in IT environments.  How?  Well, let’s look to the past to predict the future.

Proprietary Everything

Back in the early ’80s, general-purpose computers were first spreading across the business environment. These systems were fully proprietary mainframes and minicomputers. The hardware (CPU, memory, storage, etc.), operating system, and any available software all came from the specific computer manufacturer (vendors included DEC, Prime, Harris, IBM, HP, and DG, amongst others).  Businesses couldn’t even acquire compilers for their systems from a third party; they were only available from the system’s manufacturer.

Commodity OS leads to Commodity Hardware

Agility and maturity of IT environments step 1

The advent of broad interest in and adoption of Unix started a sea change in the IT world.  As more hardware vendors supported Unix, it became easier to migrate from one vendor’s system to another.  Additionally, vendors began building their systems on commodity x86-compatible microprocessors as opposed to building proprietary CPU architectures optimized around a proprietary OS.

Architecture-compatible hardware not only accelerated the move to commodity OS (Unix, Linux and Windows), but in turn, increased pressure on vendors to fully commoditize server hardware.  The resulting commoditization of hardware systems steeply drove down prices.  To this day, server hardware largely remains a commodity.

Virtualization Commoditizes Servers

 

Agility and maturity of IT environments step 2

Despite less expensive commodity operating systems and commodity hardware, modernizing enterprise IT organizations were still spending large sums on new server hardware to accommodate the rapidly growing demands of new applications.  In large part, IT organizations had a problem taking full advantage of the hardware resources they were spending on, and server utilization became a real issue.  Procurement of servers still took a considerable amount of time due to organizational processes.  Every new server required a significant amount of effort to purchase, rack and stack, and eventually deploy.  Power and cooling requirements became a significant concern.  The integration of storage, networking, and software deployment and maintenance still introduced considerable delays into workflows reliant on new hardware systems.

Server virtualization arrived commercially in the late 1990s and started getting considerable traction in the mid-2000s.  Virtualizing the underlying physical hardware answered the thorny utilization issue by enabling multiple server workloads with low individual utilization to be consolidated on a single physical server.  Virtualization also provided a limited solution to the procurement problem, and helped with the power and cooling issues posed by rampant hardware server growth. Networking, storage, and application management remained disjointed, however, and typically still took as long to implement as before the advent of virtualization, becoming a major impediment to flexibility in enterprise IT shops.

Now we find ourselves in 2013.  Most enterprise IT shops have implemented some level of virtualization, and the SaaS and Cloud-based service providers have standardized on it. Virtual servers can be created rapidly and at no perceived cost other than associated licenses, so VM servers are essentially a commodity, although the market share for the underlying (enabling) technology clearly favors VMware at this point.

The problem with these commodity VM servers is that making them fully available for use still hinges on integrating them with other parts of the IT environment that are far from commodity and are complex to configure.  The VM’s dependencies on network, automation tools, storage, and the like hinder the IT group’s speed and flexibility in configuring and providing rapid access to these resources for the business.

Network Virtualization arrives

A huge pain point in flexibly deploying applications and workloads is that networking technology is still largely based on the physical configuration of network hardware devices across the enterprise. The typical enterprise network is both complex and fragile, a condition that does not encourage rapid change in the network layer to accommodate business or mission application requirements. An inflexible network that stays available is always preferred to a network that failed because of the unintended consequences of a configuration change.

In much the same way that server virtualization abstracted the server from the underlying hardware, network virtualization completely abstracts the logical network from the physical network.  Using network virtualization, it is now possible to free the network configuration from the physical devices, enabling rapid deployment of new virtual networks and more efficient management of existing ones.  Rapid adoption of network virtualization technology in the future is all but guaranteed.

Commoditizing all IT resources and compute

 

Agility and maturity of IT environments step 3

With both network and server virtualization, we are closer than ever to the real benefit of 'Cloud Computing': the promise of fully commoditized IT resources and compute.  To get there, however, we need to coordinate and abstract the management and control of the modern enterprise’s internal IT resources, along with the compute resources being consumed from external public cloud providers.

To enable rapid and flexible coordination of IT resources, the management of those enterprise application resources must be abstracted from the underlying tools.  The specific technologies involved (server virtualization, network virtualization, automation, storage, public cloud providers, etc.) are then viewed as commodities, and can be exchanged or deprecated without negatively affecting the business capabilities of enterprise IT. Additionally, this abstraction allows the IT organization to flexibly adopt new and emerging technologies to add functionality and capability without exposing the business to the often-sharp edges of leading-edge technology.

The necessary resource abstraction and control is the domain not just of the virtualization manager, but really of the Cloud Manager. In short, the Cloud Manager commoditizes compute by commoditizing IT resources across the enterprise and beyond.

With such an important role, it is no wonder that every vendor wants to pitch a solution in this space. The orientation or bias of each vendor’s approach to developing a Cloud Manager for enterprise IT will play a critical role in the ultimate success of the products, and of the customers that implement them.


Topics: Network Virtualization, IT Challenges, Virtualization, Cloud Manager, John, Enterprise, IT Organization, Agility, Compute, Hardware

CloudBolt Releases C2 v3.6.0

Posted by Justin Nemmers

2/14/13 3:31 PM

We're happy to announce the release of CloudBolt C2 v3.6.0!

Building on the ground-breaking Network Virtualization capabilities we released in v3.5.0, we've added the ability to manage network virtualization-provided layer 3 networking (i.e. routing) directly from C2.

C2 also now supports KVM-QEMU, further expanding the supported virtualization platforms that it centrally manages.

We've also added many more visual cues throughout the user interface. You'll now see appropriate vendor icons for items including resource handlers like VMware vSphere and vCenter, AWS, and QEMU, as well as Operating Systems, Configuration Management systems (Puppet, Chef, HP Server Automation), and Network Virtualization (Nicira by VMware).

Do you have a large number of users, but don't want to connect C2 to LDAP or Active Directory? Not a problem anymore: C2 can now import users from a CSV file.
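
For illustration, such a file might look like the sample below; the exact columns C2 expects are documented in the product, so treat this layout as a guess:

```
username,first_name,last_name,email
jdoe,Jane,Doe,jdoe@example.com
bsmith,Bob,Smith,bsmith@example.com
```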

From the beginning, CloudBolt has enabled plain old virtualization environments to provide resources in a Cloud-ified manner: virtualization becomes Infrastructure as a Service and Platform as a Service.  Starting with v3.6.0, we enable users to request multiple servers from multiple environments.  Previously, they could request multiple servers, but only from one environment at a time.

We've also invested a bunch of time in performance tuning the UI, including adding capabilities to filter the server list by the OS family a server belongs to.

C2 can also now query a Configuration Management system to determine which virtual machines in your environment are also managed by a supported CM system so that C2 can enable more fine-grained application and life-cycle management of those VMs.
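
Conceptually, that amounts to intersecting the CM system’s node inventory with the VM inventory. Here is a rough sketch of the idea; the REST endpoint and field names are hypothetical, since each CM system (Puppet, Chef, HP Server Automation) exposes its own API:

```python
# Rough sketch: flag VMs that also appear in a CM system's node inventory.
# The /nodes endpoint and "name" field are hypothetical placeholders.
import requests

def managed_vms(vm_names, cm_api_url):
    nodes = requests.get(f"{cm_api_url}/nodes", timeout=10).json()
    cm_hosts = {node["name"] for node in nodes}
    return [vm for vm in vm_names if vm in cm_hosts]

# e.g. managed_vms(["web01", "db01"], "https://cm.example.com/api")
```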

Ready to upgrade?  Hit up our support portal (login required) for details.  Want to kick the tires?  Request a download now!



Topics: Nicira, Network Virtualization, Feature, Management, Virtualization, VMware, Cloud Manager, Upgrade, Release Notes

VM Sprawl’s Effect on the Processor/Performance Curve Is Significant

Posted by Justin Nemmers

2/8/13 1:24 PM

Over at Information Week, Jim Ditmore discusses how advances in CPU power efficiency will eventually save businesses significant data center costs.

It’s certainly a compelling case, but there’s an assumption being made—that VM count will not grow at the same pace as the gains in hardware efficiency.

Many customers I speak with are certainly excited about the prospect of more efficient data centers, both in terms of CPU performance and power efficiency.  One common problem they’re butting up against, however, is VM sprawl. Unused or under-utilized VMs in an environment have a significant impact on the overall efficiency gains an IT organization can expect to see.  If VM count increases at the same rate as the processor/efficiency curve, the net result is the status quo: the amount of hardware required to sustain the load keeps growing just as it does today.
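
A quick back-of-the-envelope illustration (all numbers invented): doubling hardware efficiency halves the server count only if the VM count holds still.

```python
# Toy illustration: efficiency gains are cancelled when VM count grows as fast.
import math

def servers_needed(vm_count, vms_per_server):
    return math.ceil(vm_count / vms_per_server)

print(servers_needed(400, 20))  # today: 20 servers
print(servers_needed(400, 40))  # 2x denser hardware, same VMs: 10 servers
print(servers_needed(800, 40))  # 2x denser hardware, 2x sprawl: 20 servers again
```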

To his credit, Jim comes close to calling this point out: 

“You'll have to employ best practices in capacity and performance management to get the most from your server and storage pools, but the long-term payoff is big. If you don't leverage these technologies and approaches, your future is the red and purple lines on the chart: ever-rising compute and data center costs over the coming years.”

Efficiency doesn't matter when VM Sprawl consumes additional capacity provided by more powerful and efficient CPUs.

But that’s still assuming that IT organizations are well prepared to effectively solve the issue of VM sprawl.  For many of the customers I work with, that’s a pretty big assumption.  IT organizations are well aware of the impact of sprawl, but have few tools to combat it in a reliable and consistent manner.  Additionally, the sustained effort required to maintain a neat-and-tidy virtualization environment (at least regarding sprawl) is often great, placing more pressure on an IT organization that’s likely already seen as lacking agility and responsiveness to the business.

The default solution to this struggle, which of course is rife with issues, is well known and relatively easy: throw more hardware at the problem.  Or push workloads to the public cloud.  Either way, it’s a Band-Aid at best, and does nothing to contain costs into the future.

The only way for IT organizations to benefit on an ongoing basis from the processor performance/efficiency curve is to effectively control sprawl in the virtual environment.

And how does one do that?  With a Cloud Manager like CloudBolt C2.

 


Topics: Public Cloud, IT Challenges, Virtualization, Cloud Management, Agility, Hardware

Better Leverage your IT Operations with a Cloud Management Platform

Posted by Justin Nemmers

12/17/12 2:10 PM

 

Call it what you'd like: Cloud Management Platform, Cloud Manager, or even Virtualization Manager. One fact remains: technology that unifies the management and provisioning of your IT environment is here to stay.  If you're not already looking at one to help reduce I&O expenses, you should be.

At a high level, Cloud Management Platforms such as CloudBolt Software's Command and Control (C2) provide a layer of management on top of existing virtual environments.  We drive greater value from existing virtualization and data center automation/configuration management tools by unifying management and visibility.  A world-class Cloud Management Platform provides visibility into and control of your IT environment, mapped to your business, via a single pane of glass.  That sort of cohesive internal management makes IT more agile and can lower costs.

Of course, we can't stop there.  In a recent Gartner poll, 47% of IT groups planned to have some type of hybrid cloud deployment by 2015, so any reasonably capable Cloud Management Platform must also provide the same level of management and oversight over public cloud instances and deployments.

Together, these capabilities allow IT line-of-business owners to rapidly transform their organization into something more agile: part service provider and part cloud broker.

This, according to Gartner, is the likely path of successful IT organizations that are under near-constant assault from public cloud providers such as Amazon Web Services and Rackspace.  No area of business IT is safe from this attack.  Public cloud vendors in the US and other countries are even now racing to stand up government-only environments that meet the copious requirements, often amounting to pages of standards, needed to host many government applications.


Transformation must happen.

To me, the conclusion for any business leader in IT seems crystal clear: transform or be transformed.  This is where effective cloud management comes into play.  By all reasonable accounts, you're either already virtualizing, or have virtualized most workloads capable of being hosted in this model.  The task does not end there.  You must transform your IT organization to be more agile: more able to respond to the needs of the greater business in less time and at lower cost than the big public providers can.  The only path there is effective cloud management.

It's a little more complicated than just providing a self-service portal to your users.  There are still significant amounts of policy that need to be updated and adapted to a cloud-centric world, be it private, public, or hybrid.  You need to continue to provide end-to-end life cycle management, and you must be able to deliver the applications that end users need, in a manner that is controlled and well understood.

Our recommendation?  Layer in a cloud management technology that works with what you have in place today.  This way, you can adopt new technology and adjust your processes gradually, as it makes sense, versus the infrequently successful and high-cost approach of large infrastructure deployment and vendor lock-in.  Gradually molding your business processes into a cloud-centric model produces a higher success rate: you gain immediate benefit (getting resources to users more rapidly) while also planning how best to further exploit your newfound flexibility.  This approach lets you tackle common questions as they arise:

  • Move dev/test to public cloud resources?  
  • Implement resource quotas? Modify approval processes?
  • Make more public cloud resources available as needed?
  • Implement charge or show-back accounting for resource consumption?   

Have a look at our Cloud Management Platform.  We think you'll find it not only has industry-leading capabilities, but is also easier to use and integrate into your existing processes than anything else out there, at a CapEx and OpEx that cannot be beat.  But don't take our word for it: download it for free and get started today.


Topics: Corporate, IT Challenges, Consumability, Virtualization, People

Next step: Cloud Management to Commoditize the Compute

Posted by Justin Nemmers

11/27/12 10:02 AM

Proprietary hardware and processor architectures have been outpaced and eventually replaced by commoditized Intel and AMD x86 and x86_64 hardware platforms. Along the same lines, Linux and other open-source operating systems fully commoditized the operating system, ensuring that the underlying architecture really isn’t too important anymore. The next logical step was to further commoditize the operating system and hardware platform together, which is what server virtualization does. Server virtualization makes the hardware the ultimate commodity. Often, the management layers of the best virtualization platforms are intelligent enough to appropriately handle differing processor specifications and memory configurations on the hypervisors.


The Intel-based x86 chip revolutionized IT.

A fully server-virtualized environment, however, is still reliant on several things:

  1. The administrators are still required to know and understand where and what is being deployed virtually. 
  2. When a user makes a request for a resource, it’s got to be put somewhere, and that underlying virtualization technology is something that has to be dealt with, understood, and eventually manipulated in a manner to deploy the requested resource. 
  3. The idea of IaaS and PaaS disrupts this a fair amount, but there’s still a choice: an implicit understanding that your requested compute resource is dependent on a single underlying technology, be it EC2, Google Compute Engine, VMware, RHEV, Xen, Hyper-V, or anything else.

The next step in organizational IT maturity has to be the full commoditization of that compute layer. Just as organizations can now procure commodity servers and storage from a variety of vendors and abstract that hardware choice using virtualization, so must the actual virtualization technologies be abstracted from the end user. In the end, this makes sense. Users don’t need to know or care where their compute is coming from. They just want access to the resources and services they’ve requested, when they request them. Just as administrators can choose amongst various hardware providers without affecting users, they should be able to choose amongst differing physical locations, virtualization technologies, and even cloud providers.

This is where CloudBolt steps in. CloudBolt C2 commoditizes the compute layer, making compute resources available to users regardless of which virtualization or cloud technology is present, where the resources are located, and, increasingly, without concern for the underlying hardware architecture.

A commoditized compute layer is interesting, but when coupled with end-user self-service, an IT organization has the ability to introduce a tremendous amount of IT agility into the organization.

Give us a call and let’s chat about where your organization is along the path of providing automated self-service infrastructure, applications, and services to your users. We’ll show you how CloudBolt C2 can revolutionize how you look at compute and manage your resources, wherever they are located.

 


Topics: IT Challenges, Virtualization, IaaS, Compute