
CloudBolt Blog

You are not expected to understand this

Posted by Ephraim Baron

8/12/15 10:30 AM

I love the history of technology.  My favorite place in Silicon Valley is the Computer History Museum.  It’s a living timeline of computing technology, where each of us can find the point when we first joined the party.

It’s great to learn about technology pioneers – the geek elite.  Years ago I took a course on computer operating systems.  We were studying the evolution of UNIX, and we’d gotten to Lions’ Commentary on UNIX 6th Edition, circa 1977.  (As an aside, the entire UNIX operating system at that time was less than 10,000 lines of code.  By 2011 the Linux kernel alone required 15 million lines and 37,000 files.)  As we studied the process scheduler section, we came to one of the great “nerdifacts” of computer programming, line 2238, a comment which reads:

* You are not expected to understand this.

Daunting technology

That one line perfectly expresses my joys and frustrations with computing.  The joy comes from the confirmation that computers can do amazingly clever things.  The frustration is from the dismissive way I’m reminded of my inferiority.  And I think that sums up how most people feel about technology.

“Your call is important to us. Please continue to hold.”

In the corporate world, end users have a love-hate relationship with their IT departments.  It’s true that they help us to do our jobs.  But rather than giving us what we need, when we need it, our IT folks seem to always be telling us why our requests cannot be fulfilled.  Throughout my career I’ve been on both sides of this conversation.  Early on, I was the requester/supplicant who’d make my pleas to IT for services or support, only to be told to go away and come back on a day that didn’t end in ‘y’.  


Later, I was the IT administrator, then manager.  In those roles I was the person saying ‘no’ – far more often than I wanted.  It wasn’t because I got perverse pleasure out of disappointing people.  That was just the way my function was structured, measured, and delivered.

Almost without exception, the two metrics that drove my every action in IT operations were cost and uptime.  Responsiveness and customer satisfaction were not within my charter.  Simply put, I got no attaboys for doing things quickly.  While this certainly annoyed my customers, they knew and I knew that they had no alternatives.

The Age of Outsourcing

Things began to change in the late 1980’s and early 1990’s (yeah, I go back a ways) when large companies decided to try throwing money at their IT problems to make them go away.  So began the age of IT outsourcing, when companies tried desperately to disown in-house computer operations.  Such services were “outside of our core competency”, they reasoned, and so were better performed by seasoned professionals from large companies with three-letter names like IBM, EDS, and CSC.


Fast-forward 25 years and we find the IT outsourcing (ITO) market in decline.  There are many reasons for this.  The most common are:

  • Actual savings are often far less than projected
  • Long-term contracts limit flexibility, particularly in a field that changes as constantly as IT
  • There is an inherent asymmetry of goals between service provider and service consumer
  • Considerable effort is required to manage and monitor contracts and SLA compliance
  • New technologies like cloud computing offer viable alternatives

Just as video killed the radio star, cloud computing is a fresher, sexier alternative to ITO for enterprises searching for the all-important “competitive advantage”.

Power to the People!

Cloud computing isn’t just new wine in old bottles; it’s a fundamental change in the way computing resources are made available and consumed.  Cloud computing focuses on user needs (the ‘what’) rather than underlying technology (the ‘how’).

The National Institute of Standards and Technology (NIST) defines five essential characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.  Think about what ‘on-demand self-service’ means.  For the end user, it means getting what we need, when we need it.  For business, it means costs that align with usage, for services that make sense.  And for IT, it means being able to say ‘yes’ for a change.

For too long, we have been held captive by technology.  Cloud computing promises to free us from technology middlemen.  It enables us to consume services that we value.

At its core, cloud computing is technology made understandable.

CloudBolt is a cloud management platform that enables self-service IT.  It allows IT organizations to define ready-to-use systems and environments, and to put them in the hands of their users.  Isn’t that a welcome change?


Topics: Customer, Cloud, Services, Agility, IT Self Service, Self Service IT

Three ways to prevent IT complexity from hindering cloud computing

Posted by Justin Nemmers

8/26/14 3:49 PM

Is IT environment complexity standing in the way of your ability to make better use of cloud computing technologies?  You’re not alone.

My daily conversations with prospects frequently have an undertone: “we’ve got a complexity problem,” they’re saying. Often, these IT organizations are not merely looking for software to help bridge this gap, but for ways to strategically alter the direction of IT at their business, ideally in ways that help them reduce complexity, unwinding a bit of the tangle they created in order to solve problems for which no single-package solution existed at the time.

Successful IT organizations also tend to be ones that implement simpler solutions.

Cloud computing infrastructure technologies themselves are not necessarily simple, but the ways that IT organizations interface with them often are very well understood and defined. IT organizations want to move away from existing methods of end user access, and toward a more seamless, integrated (i.e. cloud-like) look and feel to their IT enterprise. Ironically, the very complexity that organizations want to solve with cloud-backed technologies becomes a relatively large chasm that must be crossed in order to be successful. The only real answer to this problem is a game-changing approach to how solutions are designed, implemented, and procured.

There are three ways IT organizations can help bridge the complexity chasm in their environments:

Reduce risk with simple solutions

IT risk is incurred when a project requires a significant investment of time and/or money and has a chance of failing to meet the original business need. The more expensive and time-consuming a project is, the greater the risk should it ultimately fail. For this reason, reducing the time and cost required to implement a solution can significantly reduce its risk. Put another way, simple solutions that can be rapidly vetted, installed, configured, and put to use by the business reduce risk by saving time. In short, don’t be afraid to fail fast.

Avoid typical enterprise software buying cycles

With the swipe of a credit card, IT consumers compete with their IT organizations by accessing a multitude of resources. Shadow IT is certainly costly, but decision makers should take note not just of the technologies their users are purchasing, but also of how they’re purchasing them. Look for products that provide the needed capability, but that also let you break out of the traditional cycle of negotiating a huge contract and pricing structure, only to repeat the exercise a year later. These buying cycles are at odds with the ease of access expected with cloud.

Select technologies that ease troubleshooting

Effective troubleshooting is a challenging skill to master, yet complex solutions require it to be the most developed part of an administrator’s skillset. Why is it, then, that many enterprise technologies pile on complexity in ways that force organizations to rely even more on their staff’s troubleshooting skills? Selecting tools that can short-circuit long workflow dependency chains helps IT teams unwind some of the complexity inherent in solving challenging IT needs. For instance, an orchestration event constructed in a hub-and-spoke model is far easier to diagnose than a branched linear process, because there is a common point of reference that can indicate exactly what failed, where, and why.
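
To make the contrast concrete, here is a minimal sketch in Python of the hub-and-spoke idea. The names are hypothetical, not CloudBolt’s or any other product’s API; the point is that every spoke reports back to one hub, so a failure surfaces in a single place with the step name and the reason, whereas a branched linear process buries it somewhere mid-chain.

```python
# Hypothetical hub-and-spoke orchestrator sketch (not any product's API).
from typing import Callable, Dict

def run_hub(steps: Dict[str, Callable[[dict], None]], context: dict) -> None:
    for name, step in steps.items():
        try:
            step(context)                  # the spoke does its work
            print(f"[hub] {name}: ok")     # the hub records the outcome
        except Exception as exc:
            # One common point of reference: what failed, where, and why.
            print(f"[hub] {name} FAILED: {exc}")
            raise

steps = {
    "allocate_storage":  lambda ctx: ctx.update(lun="lun-42"),
    "configure_network": lambda ctx: ctx.update(ip="10.0.0.7"),
    "provision_vm":      lambda ctx: ctx.update(vm="dev-test-01"),
}
run_hub(steps, context={})
```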

In summary, there are many possible solutions to nearly every technical problem, but those that are needlessly complex, even if they solve your initial problem, are likely to create a pile of new problems of their own. Conversely, technical solutions that are simple tend to show value quickly, enabling the IT team to field a significant quick-win technology for grumpy end users.

Reducing overall complexity in the IT environment removes barriers to new technology adoption, including cloud, and is a critical success requirement on the journey to becoming a more agile IT enterprise.

Need a cloud manager, but scared of the complexity presented by other solutions? Look no further than CloudBolt. Request a download today, and you'll join our happy customers in saying "CloudBolt's power is in its simplicity."

Topics: IT Challenges, Agility, IT Self Service

Why Manual VM Provisioning Workflows Don't Work Anymore

Posted by Justin Nemmers

3/25/13 10:40 AM

Let’s walk through a fictional situation that likely hits a little close to home.

An enterprise IT shop receives a developer request for a new server resource that is needed for testing. Unfortunately, the request doesn’t include all of the information needed to provision the server, so the IT administrator goes back to the developer for a discussion about the number of CPUs and the amount of RAM and storage needed. That email back-and-forth takes a day. Once that conversation is complete, the IT admin creates a ticket and begins the largely manual workflow of provisioning a server. First, the ticket is assigned to the storage team to create the required storage unit. The ticket is addressed in two days, and then passed on to the network team to ensure that the proper VLANs are created and accessible, and to assign IP addresses. The network team has been pretty busy, though, so their average turnaround is greater than four days. Then the ticket is handed back to the virtualization team, where the instance is provisioned, but not until two days later. Think it’s ready to hand off to the user yet?  Not yet.

An assembly-line model cannot deploy VMs as rapidly as needed. Automation is required.

The team that manages the virtual environment and creates the VMs is not responsible for installing software. The ticket is forwarded along to the software team, who, three days later, manually installs the needed software on that system, and verifies operation. The virtual server is still not ready to hand off to the developer, though!

You see, there’s also a security and compliance team, so the ticket gets handed off to those folks, who a few days later run a bunch of scans and compliance tests. Now that the virtual resource is in its final configuration, it’s got to be ready, right?  Nope. It gets handed off to the configuration management team, who must thoroughly scan the system in order to create a configuration instance in the Configuration Management Database (CMDB). Only then is the instance finally ready to be delivered to the developer who requested it.

Add up the handoffs (a day of requirements back-and-forth, two days for storage, four or more for networking, two for the VM itself, three for software, and a few more for security scans and CMDB registration) and the tally is just shy of three full business weeks. What has the developer been doing in the meantime?  Probably not working to his or her full capacity.

Circumventing IT Completely with Shadow IT

Or maybe that developer got tired of waiting, and after two days went around the entire IT team and ordered an instance from AWS that took five minutes to provision. The developer was so excited about getting a resource that quickly that they bragged to fellow developers, who in turn started using AWS.

Negative Effects on IT and the Business

Either way, this is a scenario that plays out repeatedly, and I’m amazed at how often it unfolds just like this. The result might initially appear to be just some shadow IT, or maybe some VM sprawl from unused deployed instances; however, the potential damage to both the IT organization and the business is far greater.

First, users frequently circumventing the IT organization looks bad. Such actions call into question the IT organization’s ability to effectively serve the business, and thus strike at the very heart of the IT group’s relevance.

Furthermore, the IT consumers are the business. Ensuring that users have access to resources in near-real time should be a goal of every IT org, but adjusting and transforming IT teams and processes doesn’t happen as quickly as demand changes. This means the IT org cannot respond with enough agility to continually satisfy business needs, which in turn potentially means more money is spent to provide less benefit, or even worse, the business misses out on key opportunities.

IT shops need to move beyond simple virtualization and virtualization management. Why? Improved virtualization management does not solve all of the problems presented in the scenario above while (and this is key) also providing for continued growth. Tools that only manage virtualization solve only part of the problem, because they are unable to properly unify the provisioning process around software (by going beyond plain template libraries with tools like HPSA, Puppet, or Chef) and other external mechanisms (like a CMDB). In order to fully modernize and adapt existing processes and teams to a cloud/service-oriented business model, all aspects of the provisioning process must be automated, as sketched below. It’s the only way an IT organization can hope to stay responsive enough, and to avoid being locked into one particular solution, such as a single-vendor approach to virtualization. A well-designed and well-implemented Cloud Manager gives an IT org the freedom to choose the best underlying technology for the job, without regard for how it will be presented to the end user.
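
Here is a minimal sketch of what end-to-end provisioning automation means in practice. The function names are hypothetical, not any specific product’s API; each stub stands in for a step that a separate team handled by ticket in the scenario above.

```python
# Hypothetical unified provisioning pipeline (illustrative only).
def allocate_storage(req):
    return {"lun": f"lun-{req['disk_gb']}g"}          # storage team's step

def attach_network(req):
    return {"vlan": req["vlan"], "ip": "10.0.0.7"}    # network team's step

def create_vm(req, disk, net):                        # virtualization team's step
    return {"name": req["name"], "cpus": req["cpus"],
            "ram_gb": req["ram_gb"], "disk": disk, "net": net}

def install_software(vm, packages):                   # software team's step
    vm["software"] = list(packages)

def scan_and_register(vm):                            # security + CMDB steps
    vm["compliant"] = True
    vm["in_cmdb"] = True

def provision(req):
    disk = allocate_storage(req)
    net = attach_network(req)
    vm = create_vm(req, disk, net)
    install_software(vm, req["software"])
    scan_and_register(vm)
    return vm  # minutes of wall-clock time, not three business weeks

print(provision({"name": "dev-test-01", "cpus": 2, "ram_gb": 8,
                 "disk_gb": 100, "vlan": 20, "software": ["jdk", "tomcat"]}))
```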

Either way you look at it, IT organizations need a solution that allows them to utilize as many of their existing assets as possible while still providing the governance, security, and serviceability needed to ensure the company’s data and services are well secured and properly supported.

The Solution

Thankfully, there’s just such a Cloud Manager. CloudBolt C2 is built by a team with decades of combined experience in the systems management space, and was created from the beginning to solve this exact problem. Because we set out from the first line of code to solve the entire problem, we call ourselves the next-generation cloud manager, but our customers call it a game changer. Give it a download and an effortless install today, and we’ll show you that CloudBolt C2 means business.


Topics: Customer, IT Challenges, Management, Virtualization, Cloud Manager, Shadow IT, Agility

Cloud Managers Will Change IT Forever

Posted by John Menkart

2/20/13 10:37 AM

In numerous conversations with customers and analysts, it has become clear that the industry consensus is that Cloud Managers are as game-changing for IT as server and network virtualization themselves.  Among those looking longer term at the promise of Cloud Computing (public, private, and hybrid), it is clear that the Cloud Manager will become the keystone of value.  Many believe Cloud Managers will initiate the next major wave of change in IT environments.  How?  Well, let’s look to the past to predict the future.

Proprietary Everything

Back in the early 80’s, general-purpose computers were first spreading across the business environment.  These systems were fully proprietary mainframes and minicomputers.  The hardware (CPU, memory, storage, etc.), operating system, and even any available software all came from the specific computer manufacturer (vendors included DEC, Prime, Harris, IBM, HP, and DG, amongst others).  Businesses couldn’t even acquire compilers for their systems from a third party.  They were only available from the system’s manufacturer.

Commodity OS leads to Commodity Hardware

Agility and maturity of IT environments step 1

Broad interest in and adoption of Unix started a sea change in the IT world.  As more hardware vendors supported Unix, it became easier to migrate from one vendor’s system to another.  Additionally, vendors began building their systems on commodity x86-compatible microprocessors rather than proprietary CPU architectures optimized around their proprietary OS.

Architecture-compatible hardware not only accelerated the move to commodity OS (Unix, Linux and Windows), but in turn, increased pressure on vendors to fully commoditize server hardware.  The resulting commoditization of hardware systems steeply drove down prices.  To this day, server hardware largely remains a commodity.

Virtualization Commoditizes Servers


Agility and maturity of IT environments step 2

Despite less expensive commodity operating systems and commodity hardware, modernizing enterprise IT organizations were still spending large sums on new server hardware to accommodate the rapidly growing demand of new applications.  In large part, IT organizations had a problem taking full advantage of the hardware resources they were spending on.  Server utilization became a real issue.  Procurement of servers still took a considerable amount of time due to organizational processes.  Every new server required a significant amount of effort to purchase, rack and stack, and eventually deploy.  Power and cooling requirements became a significant concern.  The integration of storage, networking, and software deployment and maintenance still introduced considerable delays into workflows reliant on new hardware systems.

Server virtualization arrived commercially in the late 1990’s and started getting considerable traction in the mid 2000’s.  Virtualization of the underlying physical hardware answered the thorny utilization issue by enabling multiple individual server workloads with low individual utilization to be consolidated on a single physical server.  To put rough numbers on it, ten servers each averaging under ten percent utilization can in principle be consolidated onto a single host running near eighty percent, cutting the hardware footprint by an order of magnitude.  Virtualization also provided a partial solution to the procurement problem, and helped with the power and cooling issues posed by rampant hardware server growth.  Networking, storage, and application management remained disjointed, and typically still took as long to implement as before the advent of virtualization, becoming a major impediment to flexibility in enterprise IT shops.

Now we find ourselves in 2013.  Most enterprise IT shops have implemented some level of virtualization. All of the SaaS and Cloud-based service providers have standardized on virtualization. Virtual servers can be created rapidly and at no perceived cost other than associated licenses, so VM Servers are essentially a commodity, although the market share for the underlying (enabling) technology is clearly in VMware’s favor at this point.

The problem with these commodity VM servers is that making them fully available for use still hinges on integrating them with other parts of the IT environment that are far from commodity and are complex to configure.  The VM’s dependencies on network, automation tools, storage, and the like hinder the speed and flexibility with which the IT group can configure and provide rapid access to these resources for the business.

Network Virtualization arrives

A huge pain point in flexibly deploying applications and workloads is that networking technology is still largely based on the physical configuration of network hardware devices across the enterprise.  The typical enterprise network is both complex and fragile, a condition that does not encourage rapid change in the network layer to accommodate business or mission application requirements.  An inflexible network that stays available is always preferred to a flexible one that failed because of the unintended consequences of a configuration change.

In much the same way that server virtualization abstracted the server from the underlying hardware, network virtualization completely abstracts the logical network from the physical network.  Using network virtualization, it is now possible to free the network configuration from the physical devices, enabling rapid deployment of new virtual networks and more efficient management of existing ones.  Rapid adoption of network virtualization technology in the future is all but guaranteed.

Commoditizing all IT resources and compute


Agility and maturity of IT environments step 3

With both network and server virtualization, we are closer than ever to the real benefit of 'Cloud Computing': the promise of fully commoditized IT resources and compute.  To get there, however, we need to coordinate and abstract the management and control of the modern enterprise’s internal IT resources as well as the compute resources being consumed from external public cloud providers.

To enable rapid and flexible coordination of IT resources, the management of those enterprise application resources must be abstracted from the underlying tools.  The specific technologies involved (server virtualization, network virtualization, automation, storage, public cloud providers, etc.) are then viewed as commodities, and can be exchanged or deprecated without negatively affecting the business capabilities of enterprise IT.  Additionally, this abstraction allows the IT organization to flexibly adopt new and emerging technologies to add functionality and capability without exposing the business to the often sharp edges of leading-edge technology.

The necessary resource abstraction and control is the domain not just of the virtualization manager, but of the Cloud Manager. In short, the Cloud Manager commoditizes compute by commoditizing IT resources across the enterprise and beyond.
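
A minimal sketch of that abstraction argument, with hypothetical class names of my own: the Cloud Manager programs against one interface, so the underlying technology becomes a commodity that can be exchanged without changing what the business consumes.

```python
# Illustrative provider abstraction (hypothetical names, not a real API).
from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    @abstractmethod
    def create_server(self, cpus: int, ram_gb: int) -> str: ...

class VirtualizationProvider(ComputeProvider):
    def create_server(self, cpus, ram_gb):
        return f"internal-vm({cpus} vCPU, {ram_gb} GB)"

class PublicCloudProvider(ComputeProvider):
    def create_server(self, cpus, ram_gb):
        return f"cloud-instance({cpus} vCPU, {ram_gb} GB)"

def fulfill_request(provider: ComputeProvider) -> str:
    # Governance and the user-facing request live here, above the tools.
    return provider.create_server(cpus=2, ram_gb=8)

print(fulfill_request(VirtualizationProvider()))  # today's choice
print(fulfill_request(PublicCloudProvider()))     # tomorrow's, with no rework above
```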

With such an important role it is no wonder that every vendor wants to pitch a solution in this space. The orientation or bias of the various vendors’ approaches in developing a Cloud Manager for enterprise IT will play a critical role in the ultimate success of the products and customers that implement them.


Topics: Network Virtualization, IT Challenges, Virtualization, Cloud Manager, John, Enterprise, IT Organization, Agility, Compute, Hardware

CIOs Must Learn From Shadow IT

Posted by Justin Nemmers

2/12/13 2:18 PM

Michael Grant over at CloudScaling.com penned a pretty interesting article asking what CIOs can learn from Shadow IT.  Dave Linthicum came to a similar conclusion in a blog post back in August. I think that the most interesting part of the article is the claim that “Shadow IT is less a threat, but more of a positive force for changing the way IT is delivered in the enterprise.”

I’m not sure that many CIOs would agree with that statement, if for no other reason than the risk to the business, be it real or perceived.  CIOs certainly see what can be done, but the implementation of a public cloud-like model often stands in stark contrast to how their IT organization was built to operate.  It’s not just a drastic change to the technology model, but also a groundbreaking adjustment to how the team operates on a day-to-day basis.  For that reason alone, it’s not as easy as just deciding to alter the resource provisioning and request model.  Real tools are needed, and few can effectively offer the needed capabilities without actually replacing existing technologies.

CIOs need to learn how to benefit from the decisions made by shadow IT.

As Michael correctly claims, strategic CIOs absolutely get that the opportunity in cloud computing lies not just with the technology, but also with the technology’s ability to help the IT organization transform and become more responsive to the business.  Organizations can even create new revenue opportunities.  Selecting the correct tools to enable this transformation is the key.

CIOs need to select tools that allow them to most fully leverage existing capabilities and expertise.  There is little value in migrating away from proven technologies that organizations have already spent significant sums procuring and implementing.  Of course, I’m talking about a Cloud Manager here.  A good one needs to integrate with, not replace, existing technology.  Once a Cloud Manager is deployed, CIOs will have the flexibility needed to make additional technology selections.  Want to implement OpenStack?  Want to pull in entire application stacks and present them as PaaS?  How about leveraging an updated configuration management/data center automation tool?  No sweat.  The right Cloud Manager helps CIOs get there.

What I will agree with, however, is the notion that CIOs can learn from the delivery models of public cloud-based compute in order to alter their way of doing business.  In fact, they have to.  IT organizations are under an amazing amount of pressure to perform.  Even in organizations that have effectively curtailed the usage of shadow IT, the IT organization just looks bad when it takes three weeks (or more!) to deploy a server for use by someone in the business.

Linthicum sums it up pretty well: 

“I do not advocate that IT give up control and allow business units to adopt any old technology they want. However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." 

Cloud has the ability to drastically alter that model.  IT organizations can’t simply get out of the way and let their users do whatever they want, but if they don’t learn from those cues, they’ll need to find other employment.

With an effective Cloud Manager, such as CloudBolt C2, CIOs can present the entirety of their virtualization resources as private cloud, and enable public cloud resource consumption as well, all while ensuring that IT management has control over governance, and total visibility into the cost impacts of various deployments.  For CIOs that wish to remain relevant, it’s a must.


Topics: Customer, Public Cloud, Shadow IT, Agility, CIO

VM Sprawl’s Effect on the Processor/Performance Curve Is Significant

Posted by Justin Nemmers

2/8/13 1:24 PM

Over at Information Week, Jim Ditmore discusses how advances in CPU power efficiency will eventually save businesses significant data center costs.

It’s certainly a compelling case, but there’s an assumption being made—that VM count will not grow at the same pace as the gains in hardware efficiency.

Many customers I speak with are certainly excited about the prospects of more efficient data centers, both in terms of CPU performance and power efficiency.  One common problem they’re butting up against, however, is VM sprawl. Unused or under-utilized VMs in an environment have a significant impact on the overall efficiency gains an IT organization can expect to see.  If VM count increases at the same rate as the processor/efficiency curve, the net result will be the same as it is now: the amount of hardware required to sustain the existing load will continue to increase.
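
A back-of-the-envelope sketch of that assumption in Python, using illustrative numbers of my own rather than anything from the article: when sprawl grows VM count faster than hardware efficiency improves, the host count rises anyway.

```python
# Illustrative numbers only: sprawl outpacing per-host efficiency gains.
import math

def hosts_needed(vm_count, vms_per_host):
    return math.ceil(vm_count / vms_per_host)

vms, per_host = 1000, 20              # starting point: 50 hosts
for year in range(1, 6):
    per_host = int(per_host * 1.25)   # hardware ~25% more efficient each year
    vms = int(vms * 1.35)             # but sprawl grows VM count ~35%
    print(f"year {year}: {vms} VMs -> {hosts_needed(vms, per_host)} hosts")
# Host count climbs from 50 toward ~78, despite steadily better CPUs.
```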

To his credit, Jim comes close to calling this point out: 

“You'll have to employ best practices in capacity and performance management to get the most from your server and storage pools, but the long-term payoff is big. If you don't leverage these technologies and approaches, your future is the red and purple lines on the chart: ever-rising compute and data center costs over the coming years.”

Efficiency doesn't matter when VM sprawl consumes the additional capacity provided by more powerful and efficient CPUs.

But that still assumes IT organizations are well prepared to effectively solve the issue of VM sprawl.  For many of the customers I work with, that’s a pretty big assumption.  IT organizations are well aware of the impact of sprawl, but have few tools to combat it in a reliable and consistent manner.  Additionally, the sustained effort required to maintain a neat-and-tidy virtualization environment (at least regarding sprawl) is considerable, placing further pressure on an IT organization that’s likely already seen as lacking agility and responsiveness to the business.

The default solution to this struggle is well known and relatively easy, and of course rife with issues: throw more hardware at the problem.  Or push workloads to the public cloud.  Either way, it’s a Band-Aid at best, and does nothing to contain costs into the future.

The only way for IT organizations to benefit on an ongoing basis from the processor performance/efficiency curve is to effectively control sprawl in the virtual environment.

And how does one do that?  With a Cloud Manager like CloudBolt C2.


Topics: Public Cloud, IT Challenges, Virtualization, Cloud Management, Agility, Hardware

Gartner Research &amp; Linthicum: CIOs Need to Deploy Cloud Management

Posted by Justin Nemmers

1/24/13 12:51 PM

In a recent posting, David Linthicum discusses a Gartner survey reporting that CIOs are saying, more now than ever, that cloud is a top priority.  He goes on to say these same CIOs are at risk if they’re not moving their organizations toward cloud computing.  As he says, “No surprise there.”

CIOs - Either figure out a way to leverage cloud technology, or get into real estate

I think that there are several thoughts worth digging into a little more:

  • The average CIO’s IT organization is under a full-frontal assault from public cloud technologies, which show users that a highly agile IT organization is not only possible, but happening right now.
  • Even if a business is not investigating or actively using public cloud, internal users still understand how quickly they should be able to get new resources delivered.
  • For groups that already have public cloud deployments, or a mix of public cloud and internal deployments, the IT organization’s ability to rapidly deliver new resources is key.  Over time, those organizations will look more like broker/providers as they gain significant agility from structural changes, and will be able to support both public and private deployments based on what’s best for the requested workload.
  • Any realistic cloud deployment plan has to include updates to processes and procedures, i.e., you have to modify the organizational structure to be successful.

It’s great that CIOs are (again) making a verbal commitment to investigate and implement cloud technologies, but as Linthicum says, “I suspect some CIOs did not respond to the Gartner survey honestly and will continue to kick plans to develop a cloud strategy further down the road.”

So how do you even get started?  My recommendation to CIOs: start by identifying some low-hanging fruit.  Deploy a Cloud Management Platform that enables cloud services such as IaaS and PaaS using your existing technology pool.  Then pick a particularly savvy part of your user base, and move them into an IaaS/PaaS model using a modicum of surplus resources and this new technology.  As you work out the kinks, expand the project to cover more groups and workloads.  It’s a winning model, and I believe many of these groups will find it considerably easier to implement or expand cloud implementations and projects.

In the end, Dave’s got it right…  Cloud might seem difficult, but I’m guessing that the real estate market is tougher.


Topics: IT Organization, People, Agility, CIO