
CloudBolt Blog

Next-Generation IT and Greenfield Cloud Infrastructure

Posted by Justin Nemmers

3/12/13 3:06 PM

The problem is consistent. Consistently difficult, that is. As an IT manager, how do you implement new technology in an otherwise running, static environment? New technology decisions are not just difficult; the range of questions that arise from thinking through implementation plans can seem daunting.

Whether you’re talking about switching hardware vendors, or implementing something relatively new like network virtualization, how it’s implemented in your environment will often be more critical to the project’s success than the validity of the technology itself. 

Greenfield IT is Great IT

Ideally, every environment would be brand new.  How many times have you asked yourself “Wouldn’t it be great if I could just scrap my current infrastructure and start over?”  Fundamentally, greenfield implementations like this are a good route to go for a number of reasons:

  • They allow you to select the best-of-breed and most effective technology to solve the problem at hand
  • You get the valuable opportunity to think about how the technology stack will scale in the future
  • They allow for rapid change while the environment is being built
  • Because there are few barriers, you have the opportunity to investigate other new and upcoming technologies, and you will have time to experiment 

A Cloud Manager provides significant value here. Using one to unify the management of a lab environment allows the rapid integration of new technologies that your IT teams need to learn and gain experience with before implementing them in production. A Cloud Manager eases the introduction of these technologies and unifies the management interface, making administration more predictable. Together, these tools help mold processes and the IT organization into a more agile group.

In my mind, one of the core issues is that too few IT teams are able to think outside the box when implementing new tech. If a greenfield implementation is easier than shoehorning new tech into your existing stack, why not give it a shot? Starting with a small base of gear and intelligently growing the installation over time is a great way to migrate capacity. I have an entirely different blog post on migrating via attrition coming soon. In the meantime, go ahead and identify a few pieces of hardware, install your preferred virtualization tool, download CloudBolt C2, and start piecing together your future architecture. Once C2 is installed, you'll be able to quickly and seamlessly layer in additional technologies like data center automation, network virtualization, and even other virtualization or public cloud resources.

Happy integrating!


Topics: Virtualization, New Technology, Cloud Manager, Challenges, Implementation, Vendors, Development, Hardware

Cloud Managers Will Change IT Forever

Posted by John Menkart

2/20/13 10:37 AM

In numerous conversations with customers and analysts, it has become clear that the industry consensus is that Cloud Managers are as game-changing for IT as server and network virtualization themselves. Among those looking longer term at the promise of Cloud Computing (Public, Private, and Hybrid), it is clear that the Cloud Manager will become the keystone of value. Many see Cloud Managers as the initiator of the next major wave of change in IT environments. How? Let's look to the past to predict the future.

Proprietary Everything

Back in the early 80s, general-purpose computers were first spreading across the business environment. These systems took the form of fully proprietary mainframes and minicomputers. The hardware (CPU, memory, storage, etc.), operating system, and any available software all came from the specific computer manufacturer (vendors included DEC, Prime, Harris, IBM, HP, and DG, amongst others). Businesses couldn't even acquire compilers for their systems from a third party; they were only available from the system's manufacturer.

Commodity OS leads to Commodity Hardware

Agility and maturity of IT environments step 1

Broad interest in and adoption of Unix started a sea change in the IT world. As more hardware vendors supported Unix, it became easier to migrate from one vendor's system to another. Additionally, vendors began building their systems on commodity x86-compatible microprocessors rather than proprietary CPU architectures optimized around their proprietary OS.

Architecture-compatible hardware not only accelerated the move to commodity OS (Unix, Linux and Windows), but in turn, increased pressure on vendors to fully commoditize server hardware.  The resulting commoditization of hardware systems steeply drove down prices.  To this day, server hardware largely remains a commodity.

Virtualization Commoditizes Servers

 

Agility and maturity of IT environments step 2

Despite less expensive commodity operating systems and commodity hardware, modernizing enterprise IT organizations were still spending large sums on new server hardware to accommodate the rapidly growing demands of new applications. In large part, IT organizations had a problem taking full advantage of the hardware resources they were spending on. Server utilization became a real issue. Procurement of servers still took a considerable amount of time due to organizational processes. Every new server required significant effort to purchase, rack and stack, and eventually deploy. Power and cooling requirements became a significant concern. The integration of storage, networking, and software deployment and maintenance still introduced considerable delays into workflows reliant on new hardware systems.

Server virtualization arrived commercially in the late 1990s and started gaining considerable traction in the mid 2000s. Virtualization of the underlying physical hardware answered the thorny utilization issue by enabling multiple server workloads with low individual utilization to be consolidated onto a single physical server. Virtualization also provided a partial solution to the procurement problem, and helped with the power and cooling issues posed by rampant hardware server growth. Networking, storage, and application management, however, remained disjointed, and typically still took as long to implement as before the advent of virtualization, making them a major impediment to flexibility in enterprise IT shops.
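The consolidation arithmetic behind that utilization win is easy to sketch. A minimal illustration in Python, using invented utilization figures rather than numbers from any real environment:

```python
import math

def hosts_needed(workloads, avg_util, target_host_util):
    """Physical hosts required to consolidate `workloads` one-app servers,
    each averaging `avg_util` utilization, onto virtualized hosts driven
    to `target_host_util` utilization (both as fractions of one host)."""
    return math.ceil(workloads * avg_util / target_host_util)

# 100 physical servers idling at 8% average utilization can be
# consolidated onto hosts run at a comfortable 60% utilization:
print(hosts_needed(100, 0.08, 0.60))  # 14 hosts instead of 100
```

The same formula explains why utilization was "a real issue" in the first place: at single-digit utilization, most of every server purchased simply sat idle.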

Now we find ourselves in 2013. Most enterprise IT shops have implemented some level of virtualization, and SaaS and Cloud-based service providers have standardized on it. Virtual servers can be created rapidly and at no perceived cost other than associated licenses, so VM servers are essentially a commodity, although market share for the underlying (enabling) technology clearly favors VMware at this point.

The problem with these commodity VM servers is that making them fully available for use still hinges on integrating them with other parts of the IT environment that are far from commodity and complex to configure. A VM's dependencies on network, automation tools, storage, and the like hinder the IT group's speed and flexibility in configuring these resources and providing rapid access to them for the business.

Network Virtualization arrives

A huge pain point in flexibly deploying applications and workloads is that networking technology is still largely based on the physical configuration of network hardware devices across the enterprise. The typical enterprise network is both complex and fragile, a condition that does not encourage rapid change in the network layer to accommodate business or mission application requirements. An inflexible network that is available is always preferred to a network that has failed because of the unintended consequences of a configuration change.

In much the same way that server virtualization abstracted the server from the underlying hardware, network virtualization completely abstracts the logical network from the physical network. Using network virtualization, it is now possible to free the network configuration from the physical devices, enabling rapid deployment of new virtual networks and more efficient management of existing ones. Rapid adoption of network virtualization technology in the coming years is all but guaranteed.

Commoditizing all IT resources and compute

 

Agility and maturity of IT environments step 3

With both network and server virtualization, we are closer than ever to the real benefit of 'Cloud Computing': the promise of fully commoditized IT resources and compute. To get there, however, we need to coordinate and abstract the management and control of both the modern enterprise's internal IT resources and the compute resources it consumes from external public cloud providers.

To enable rapid and flexible coordination of IT resources, the management of those enterprise application resources must be abstracted from the underlying tools. The specific technologies involved (server virtualization, network virtualization, automation, storage, public cloud providers, etc.) can then be viewed as commodities, and can be exchanged or deprecated without negatively affecting the business capabilities of enterprise IT. This abstraction also allows the IT organization to flexibly adopt new and emerging technologies, adding functionality and capability without exposing the business to the sharp edges of leading-edge technology.
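That abstraction can be sketched as a thin provider interface: consumers code against a stable contract, so the technology behind it can be swapped freely. All class and method names below are hypothetical illustrations, not CloudBolt C2's actual API:

```python
from abc import ABC, abstractmethod

class ResourceProvider(ABC):
    """Stable contract the business codes against; implementations
    wrap whatever commodity technology sits underneath."""
    @abstractmethod
    def provision_server(self, name: str, cpus: int, memory_gb: int) -> str:
        """Create a server and return an opaque identifier."""

class HypervisorProvider(ResourceProvider):
    def provision_server(self, name, cpus, memory_gb):
        return f"vm://{name}"     # would call the virtualization platform's API

class PublicCloudProvider(ResourceProvider):
    def provision_server(self, name, cpus, memory_gb):
        return f"cloud://{name}"  # would call the public cloud provider's API

def deploy_app(provider: ResourceProvider) -> str:
    # Business logic never names a specific underlying technology.
    return provider.provision_server("app01", cpus=2, memory_gb=8)

print(deploy_app(HypervisorProvider()))   # vm://app01
print(deploy_app(PublicCloudProvider()))  # cloud://app01
```

Swapping `HypervisorProvider` for `PublicCloudProvider` changes nothing in `deploy_app`, which is the sense in which the underlying technologies become exchangeable commodities.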

This necessary resource abstraction and control is the domain not just of the virtualization manager, but of the Cloud Manager. In short, the Cloud Manager commoditizes compute by commoditizing IT resources across the enterprise and beyond.

With such an important role it is no wonder that every vendor wants to pitch a solution in this space. The orientation or bias of the various vendors’ approaches in developing a Cloud Manager for enterprise IT will play a critical role in the ultimate success of the products and customers that implement them.


Topics: Network Virtualization, IT Challenges, Virtualization, Cloud Manager, John, Enterprise, IT Organization, Agility, Compute, Hardware

VM Sprawl’s Effect on the Processor/Performance Curve Is Significant

Posted by Justin Nemmers

2/8/13 1:24 PM

Over at Information Week, Jim Ditmore discusses how advances in CPU power efficiency will eventually save businesses significant data center costs.

It’s certainly a compelling case, but there’s an assumption being made—that VM count will not grow at the same pace as the gains in hardware efficiency.

Many customers I speak with are certainly excited about the prospects of more efficient data centers, both in terms of CPU performance and power efficiency. One common problem they're butting up against, however, is VM sprawl. Unused or under-utilized VMs significantly erode the overall efficiency gains an IT organization can expect to see. If VM count increases at the same rate as the processor/efficiency curve, the net result will be what we see today: the amount of hardware required to sustain the existing load will keep increasing.
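That assumption can be made concrete with a toy projection. The growth rates below are invented for illustration, not taken from the article or its chart:

```python
import math

def servers_over_time(vms, vms_per_server, vm_growth, capacity_growth, years):
    """Servers required each year as the VM count and per-server capacity
    both compound. round() guards ceil() against float noise."""
    counts = []
    for _ in range(years):
        counts.append(math.ceil(round(vms / vms_per_server, 9)))
        vms *= 1 + vm_growth                    # sprawl grows the VM count
        vms_per_server *= 1 + capacity_growth   # CPU gains grow capacity
    return counts

# If sprawl matches the efficiency curve (20% each), the fleet never shrinks:
print(servers_over_time(1000, 20, 0.20, 0.20, 5))  # [50, 50, 50, 50, 50]
# If sprawl outpaces it (30% vs. 20%), hardware counts keep rising:
print(servers_over_time(1000, 20, 0.30, 0.20, 5))  # [50, 55, 59, 64, 69]
```

When sprawl merely matches the efficiency curve, the fleet never shrinks; when it outpaces the curve, hardware spend keeps climbing regardless of CPU gains.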

To his credit, Jim comes close to calling this point out: 

“You'll have to employ best practices in capacity and performance management to get the most from your server and storage pools, but the long-term payoff is big. If you don't leverage these technologies and approaches, your future is the red and purple lines on the chart: ever-rising compute and data center costs over the coming years.”

Efficiency doesn't matter when VM Sprawl consumes additional capacity provided by more powerful and efficient CPUs.

But that still assumes IT organizations are well prepared to effectively solve the issue of VM sprawl. For many of the customers I work with, that's a pretty big assumption. IT organizations are well aware of the impact of sprawl, but have few tools to combat it reliably and consistently. Additionally, the sustained effort required to keep a virtualization environment neat and tidy (at least regarding sprawl) is often considerable, placing further pressure on an IT organization that's likely already seen as lacking agility and responsiveness to the business.

The default solution to this struggle, which of course is rife with issues, is well known and relatively easy: throw more hardware at the problem, or push workloads to the public cloud. Either way, it's a Band-Aid at best, and does nothing to contain costs into the future.

The only way for IT organizations to benefit on an ongoing basis from the processor performance/efficiency curve is to effectively control sprawl in the virtual environment.

And how does one do that?  With a Cloud Manager like CloudBolt C2.

 


Topics: Public Cloud, IT Challenges, Virtualization, Cloud Management, Agility, Hardware