
CloudBolt Blog

Game of Clouds

Posted by Colin Thorp

6/22/15 2:37 PM

There is a war among the clouds: public and private providers are fighting to see who will reign supreme. Public vs. Private, Public vs. Public, Private vs. Private. It is a chess match to see which vendor(s) will capture the largest share of the market. This non-stop battle has made it more challenging for the customers, the normal people, to make the best choice.

Large companies have ruled the land for years: VMware, Red Hat, and others have their stakes in the ground. In recent years AWS, GCE, OpenStack, and Azure have established control and are now eating away at the edges of the market, corrupting the business models of traditional vendors. In a market that redefines itself every year, how do you make the right choice? First you have to look at the state of the clouds:

Public

The public clouds are simple and easy to use. Public cloud’s manifest destiny is built on our most basic needs: gain control, lower costs, increase speed, and deliver simplicity. However, public providers don’t want you to know how much you will owe them until after the fact, when the bill arrives. You don’t get locked in, but clearly the goal is to be the stickiest product you use as their toolsets grow. A public cloud may not provide the on-premise security you need, which creates a need for a private cloud and the ability to move between the clouds.

Private

The private/internal cloud providers will reel you in and continually attack your budgets with their ever-expanding sets of services, layering in so many different tools from their “suites” that you are never quite sure which tools you are using, which tools you have bought, or which you are being charged for. Be wary: these vendors lock customers in at the root level of your infrastructure, to the point where you’ll have no choice but to renew, renew, and renew. There are so many varying levels of integration between these tools that the result becomes complex and hard to manage, forcing you to buy professional services. More professional services means more money, and the vicious cycle continues.

So who is the right choice?

The answer is a hybrid approach. For various reasons, whether cost, security, ease of use, or vendor lock-in, you will come to use a variety of these tools, and they will continue to challenge one another. They will have tunnel vision with one goal in mind: how can we lock our customers in with a vendor-specific set of products? This makes none of them fit to rule the throne, so whose turn is it?

It is time for “choice” to be your weapon. Rise above the clouds to be the broker, the king of clouds, and give yourself the choice. Enough is enough with vendors ruling you! Take control of the clouds and manage them. Claim your place on the throne by putting the power of the clouds in the hands of your people so they can manage their own IT resources without getting caught up in the fog of vendor war.

So you want to sit on the throne?

To lay claim to the throne is to be the “broker of clouds,” above the fray. End users must be happy; if they aren’t, you will know and hear about it. Users are OK with paying an IT team to be their broker as long as resources are delivered quickly and correctly. Users care that the job is done, not how you do it. Private and public clouds have become a commodity, and it is time to make the delivery of that commodity readily available. Waiting hours, days, or weeks to get commodity resources is no longer sufficient.

When you look at what is preventing private and public clouds from being readily available, you see the following issues: complexity of multiple UIs, slow provisioning, IT overwhelmed with tickets, inability to track costs between clouds, and VM sprawl. IT is spending so much time servicing complexity that they can’t service their users. Solution? Simplicity.

Simplicity is the Vaccine for Complexity

Kings and queens can’t do it on their own; they need an ally. A tool that reigns above the clouds: a Cloud Delivery Platform that provides the nimbleness, flexibility, and agility you need. Give users a simple, intuitive interface that eliminates multiple UIs and provides a single portal spanning the entirety of your realm of clouds. If you are truly going to lead your users and your public and private clouds, then for every resource you need to know: who owns it, what it is doing, where it is, when it expires, why it exists, and how much it costs. CloudBolt is a vendor-agnostic tool that is worthy of the title “Hand of the King/Queen.”

Conclusion

If you’ve read this far, you must be at least somewhat interested. Reach out, schedule a demo, and see how a cloud delivery platform like CloudBolt will put you on the path to the throne and bring the convenience of the clouds to all of your users.


Topics: Public Cloud, IT Challenges, Cloud Management, Private Cloud, Self Service IT, Hybrid Cloud

Three ways to prevent IT complexity from hindering cloud computing

Posted by Justin Nemmers

8/26/14 3:49 PM

Is IT environment complexity standing in the way of your ability to make better use of cloud computing technologies?  You’re not alone.

My daily conversations with prospects frequently have an undertone: “we’ve got a complexity problem,” they’re saying. Often, these IT organizations are not merely looking for software to help bridge this gap; they are looking for ways to strategically alter the direction of IT at their business. Ideally, they want to do so in ways that reduce complexity, unwinding a bit of the tangle they created to solve problems for which no single-package solution existed at the time.

IT Complexity makes implementing cloud more challenging
Successful IT organizations also tend to be ones that implement simpler solutions.

Cloud computing infrastructure technologies themselves are not necessarily simple, but the ways that IT organizations interface with them often are very well understood and defined. IT organizations want to move away from existing methods of end user access, and toward a more seamless, integrated (i.e. cloud-like) look and feel to their IT enterprise. Ironically, the very complexity that organizations want to solve with cloud-backed technologies becomes a relatively large chasm that must be crossed in order to be successful. The only real answer to this problem is a game-changing approach to how solutions are designed, implemented, and procured.

There are three ways IT organizations can help bridge the complexity chasm in their environments:

Reduce risk with simple solutions

IT risk is incurred when a project requires a significant investment of time and/or money and has a chance of failing to meet the original business need. The more expensive and time-consuming a project is, the higher the risk should it ultimately fail. For this reason, reducing the time and cost required to implement a solution can significantly reduce the risk of that solution. Restated, simple solutions that can be rapidly vetted, installed, configured, and put to use by the business reduce risk by saving time. Restated again: don’t be afraid to fail fast.

Avoid typical enterprise software buying cycles

With the swipe of a credit card, IT consumers compete with their IT organizations by accessing a multitude of resources. Shadow IT is certainly costly, but decision makers should take note not just of the technologies their users are purchasing, but also of how they’re purchasing them. Look for products that provide the needed capability but also allow you to break out of the traditional cycle of negotiating a huge contract and pricing agreement, only to repeat it in a year. These buying cycles are at odds with the ease of access expected with cloud.

Select technologies that ease troubleshooting

Effective troubleshooting is a challenging skill to master, yet complex solutions absolutely require this to be the most developed of an administrator’s skillset. Why is it, then, that many enterprise technologies pile on the complexity in ways that force organizations to rely even more on their staff’s troubleshooting skills? Selecting tools that are able to short-circuit long workflow dependency chains will help IT teams unwind some of the complexity inherent to solving challenging IT needs. For instance, an orchestration event constructed in a hub and spoke model is far easier to diagnose than a branched linear process, as there’s a common point of reference that can indicate exactly what, where, and why a process failed.
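To make the hub-and-spoke point concrete, here is a minimal sketch in Python (with invented step names, not any real orchestration product’s API) of a hub that invokes each spoke directly and records every outcome in one place, so a failure identifies exactly one step:

# Minimal sketch of hub-and-spoke orchestration: the hub invokes each spoke
# directly and records every outcome centrally, so a failure points to
# exactly one step. All step names here are illustrative placeholders.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("orchestrator")


def allocate_storage(request):   # spoke 1 (placeholder implementation)
    return {"lun": "datastore-01"}

def configure_network(request):  # spoke 2
    return {"vlan": 120, "ip": "10.0.0.15"}

def provision_vm(request):       # spoke 3
    return {"vm_id": "vm-4242"}


def orchestrate(request):
    """Hub: runs each spoke in turn and keeps a single record of results."""
    results = {}
    for step in (allocate_storage, configure_network, provision_vm):
        try:
            results[step.__name__] = step(request)
            log.info("step %s succeeded", step.__name__)
        except Exception as exc:
            # One common point of reference: which step failed, and why.
            log.error("step %s failed: %s", step.__name__, exc)
            raise
    return results


if __name__ == "__main__":
    orchestrate({"cpu": 2, "ram_gb": 8})

In a branched, linear chain, by contrast, the failure record lives wherever the chain happened to break, and reconstructing the what, where, and why means walking every link.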

In summary, there are frequently many possible solutions to nearly every technical problem, but those that are needlessly complex, though they may solve your initial problem, are more likely to create a pile of new problems of their own. Conversely, simple technical solutions tend to show value quickly, enabling the IT team to field a significant quick-win technology for grumpy end users.

Reducing overall complexity in the IT environment removes barriers to new technology adoption, including cloud, and is a critical success requirement on the journey to becoming a more agile IT enterprise.

Need a cloud manager, but scared of the complexity presented by other solutions? Look no further than CloudBolt. Request a download today, and you'll join our happy customers in saying "CloudBolt's power is in its simplicity."

Topics: IT Challenges, Agility, IT Self Service

Why C2 is Important When Adopting OpenStack

Posted by Justin Nemmers

5/14/14 9:49 PM

“If I’m moving to OpenStack, why do I need a Cloud Manager like CloudBolt C2?”

As organizations look to extend their footprints beyond the traditional virtualization infrastructure providers (read: VMware), we hear questions like this both more frequently, and with more fervor. It’s a good question. At face value, many people see projects and products like OpenStack, and just assume that they compete directly with CloudBolt C2, but actually, when used together, the two products each provide distinct benefits that are absolutely game changing.


Despite the influx of added code and interest in Horizon, OpenStack’s default dashboard still represents a rather significant and complex barrier to full OpenStack adoption in the enterprise. In my conversations with many large organizations that are implementing OpenStack, it has become apparent that nearly every single one is either writing its own non-Horizon-based front-end interface on top of OpenStack or purchasing a commercially available front end (i.e., CloudBolt C2). Those organizations that are developing their own UIs are effectively signing up to maintain that code and project in-house for the life of their OpenStack environment.

Why C2?

We can look deeper into one example: updating a UI option on an instance order form. In Horizon, it requires advanced knowledge of Django and Python and creates upgrade problems down the road. (Random aside: want more info on UIs and how difficult it is to make a good one? Read more here.) In C2, updating the order process takes a non-developer just a few clicks. Add to that C2’s built-in rates, quotas, ongoing server/application management, and software license management, and the potential value-add in the build vs. buy decision becomes quite real.
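For a sense of what the Horizon side of that comparison involves, the snippet below is an illustrative, generic Django form of the kind a Horizon-style dashboard uses for an instance order screen. The field names and choices are hypothetical, and this is not Horizon’s actual source; the point is simply that adding even one dropdown option means editing and redeploying Python code.

# Illustrative only: a generic Django form of the sort a Horizon-style
# dashboard uses for an instance order screen. Field names and choices are
# hypothetical; even adding one dropdown option means a code change.

from django import forms

ENVIRONMENT_CHOICES = [
    ("dev", "Development"),
    ("qa", "QA"),
    ("prod", "Production"),   # adding an option = code change + upgrade risk
]

class LaunchInstanceForm(forms.Form):
    name = forms.CharField(max_length=64)
    flavor = forms.ChoiceField(choices=[("m1.small", "m1.small"),
                                        ("m1.medium", "m1.medium")])
    environment = forms.ChoiceField(choices=ENVIRONMENT_CHOICES)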

Beyond the configurability of the interface itself, there is the question of choice, and existing complexity. Chances are your IT environment contains a significant number of technologies—some of which will integrate well with OpenStack, and others that will not. And then, it apparently does matter which vendor’s OpenStack you decide to purchase, given Red Hat’s ominous announcement at the OpenStack Summit about their impending support policy changes.

Despite this concerning policy shift, OpenStack vendors will continue expanding support for proprietary tools and platforms, but they are unlikely to solve the equation for every technology present in typical IT organizations’ legacy environments. In the end, OpenStack, from any vendor, will force a choice: roll your own capability, or replace what you’ve got with something more OpenStack-friendly. Using C2 can ease this transition by managing everything in the environment: OpenStack, legacy systems, public cloud providers, configuration management systems, and so on. End users will not know where their servers and applications are actually being deployed, and IT again owns the decision of the best underlying environment for the workload.

Given these points, the difficulty of implementing and supporting your existing infrastructure and environments means that the only realistic scenario when adopting OpenStack is to run two environments in parallel: your existing environment, which continues to use existing integrations and technologies, and the new OpenStack-based one, which will largely be a re-implementation and re-basing of both technology and process. The IT organization can then begin the task of migrating workloads from the legacy environment to OpenStack.

When run alongside existing IT, new environments absolutely benefit from unified visibility, reporting, quotas, access, and management. This is another reason why C2 is still important in enterprises that are moving to OpenStack. Few organizations that are investing in OpenStack immediately replace their existing technology. Their environments are a mix of legacy and modern, and they need to find ways to manage those stacks effectively. Rapidly growing businesses also frequently need to ingest infrastructure and technology from acquired companies.

OpenStack is gaining significant momentum in IT, and for good reason. IT organizations looking for ways to further commoditize their technology stacks see OpenStack as a great way to build and maintain a standards-based private cloud environment, and they’re largely right. C2 is a critical component in easing the adoption of not just OpenStack, but also other disruptive technologies.

Ready to get started? Schedule a Demo


Topics: News, IT Challenges, OpenStack

The People Side of Cloud Computing

Posted by Justin Nemmers

3/26/14 2:55 PM

 (Originally posted in the In-Q-Tel Quarterly)

The cloud-enabled enterprise fundamentally changes how personnel interact with IT. Users are more effective and efficient when they are granted on-demand access to resources, but these changes also alter the technical skill sets that IT organizations need to effectively support, maintain, and advance their offerings to end users. These changes are not always immediately obvious. Automation may be the linchpin of cloud computing, but the IT staff’s ability to effectively implement and manage a cloud-enabled enterprise is critical to the IT organization’s success and relevance. Compounding the difficulties, the existing legacy IT systems rarely just “go away” overnight, and many workloads, such as large databases, either don’t cleanly map to cloud-provided infrastructure or would be cost-prohibitive when deployed there. The co-existence of legacy infrastructure, traditional IT operations, and cloud-enabled ecosystems creates a complicated dance that seasoned IT leadership and technical implementers alike must learn to navigate effectively.


In the past five or so years, as enterprise IT organizations have considered adopting cloud technologies, I’ve seen dozens of IT organizations fall into the trap of believing that increased automation will enable them to reduce staff. In my experience, however, staff reductions rarely happen. IT organizations that approach cloud-enabled IT as a mechanism to reduce staffing are often surprised to find that these changes do not actually reduce complexity in the environment, but instead merely shift complexity from the operations team to the applications team. For instance, deploying an existing application to Amazon Web Services (AWS) will not make it highly available. Instead of IT administrators using on-premises software tools with reliable access and high-speed, low-latency network and storage interconnects, these administrators must now master concepts such as regions, availability zones, and the use of elastic load balancers. Applications also often need to be modified or completely redesigned to increase fault tolerance. The result is that deployments are still relatively complex, but they often require different skill sets than a traditional IT administrator is likely to have.

A dramatic shift in complexity is one of the reasons why retraining is important for existing IT organizations. Governance is another common area that gains significant capability as a result of cloud-enabled infrastructure. Automation ensures that every provisioned resource successfully completes each and every lifecycle management step, 100% of the time. This will be new to both IT operations and end users. I’ve also frequently seen components of the IT governance mechanism break down entirely due to end-user revolt, largely because particularly onerous processes could previously be skipped by administrators as they manually provisioned resources.
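As a rough illustration of that governance claim (the step names below are invented, not a CloudBolt API), an automated provisioning path can be structured so that no lifecycle step is skippable:

# Sketch of the governance point above: when provisioning is automated, every
# lifecycle step runs for every resource, every time, rather than being
# skipped the way a busy administrator might skip one manually.
# Step names are invented placeholders.

REQUIRED_STEPS = ["register_in_cmdb", "apply_security_baseline",
                  "tag_owner_and_expiration", "enable_monitoring"]

def provision(resource, steps=REQUIRED_STEPS):
    completed = []
    for name in steps:
        # In a real platform each name would dispatch to an integration;
        # here we simply record that the step ran.
        completed.append(name)
    assert completed == steps, "a governance step was skipped"
    return {"resource": resource, "lifecycle_steps": completed}

print(provision("web-server-07"))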

Cloud-based compute resources will dramatically change the computing landscape in nearly any organization. For example, one IT director I’ve dealt with worked to automate his entire provisioning and lifecycle management process, which freed up close to three FTEs’ (full-time equivalents) worth of team time. Automating their processes and offering end users on-demand access to resources helped their internal customers, but it also generated substantial time savings for that team. The IT director also recognized what many miss: cloud offerings may shift complexity in the stack, but ultimately all of those fancy cloud instances are really just Windows and Linux systems, and they still require traditional care and feeding from IT. Tasks such as Active Directory administration, patch management, vulnerability assessment, and configuration management don’t go away.

Another common lesson I have witnessed is that shifting complexity brings dependence on new skills in the availability and monitoring realms. Lacking access to physical hardware, storage, and network infrastructure does not remove them as potential problem areas. As a result, I have seen organizations realize too slowly that applications need to be more tolerant of failures than they were under previous operating models. Making applications more resilient requires different skills that traditional IT teams need to learn and ingrain in order to grow effectively into a cloud-enabled world. Additionally, when developers and quality assurance teams have real-time access to needed resources, they also tend to speed up their releases, placing increased demand on the parts of the workforce responsible for tasks such as release engineering, release planning, and possibly even marketing.

I’ve encountered few customers that have environments well suited for a complete migration to the public cloud. While a modern-day IT organization needs to prepare for the inevitability of running workloads in the public or community clouds, they must also prepare for the continued offering of private cloud services and legacy infrastructures. Analyst firms such as Gartner suggest that the appropriate path forward for IT orgs is to become a broker/provider of services. The subtext of that statement is that IT teams must remain in full control over who can deploy what, and where. IT organizations must control which apps can be deployed to a cloud, and which clouds are acceptable based on security, cost, capability, etc. Future IT teams should be presenting users with a choice of applications or services based on that user’s role, and the IT team gets to worry about the most appropriate deployment environment. When this future materializes, these are all new skills IT departments will need to master. Today, analyzing cloud deployment choices and recommending the approaches that should be made available are areas that typically fall outside the skillsets of many IT administrators. Unfortunately, these are precisely the skills that are needed, but I’ve witnessed many IT organizations overlook them. 

The Way Ahead

While IT staff can save significant time when the entirety of provisioning and lifecycle management is automated, there are still many needs elsewhere in the IT organization. The successful approaches I’ve seen IT organizations use all involve refocusing staff on value-added tasks. When IT administrators are able to spend time on interesting problems rather than performing near-constant, routine provisioning and maintenance, they are more involved and fulfilled, and they frequently produce innovative solutions that save organizations money. Changing skill sets and requirements will also likely affect existing contracts for organizations with heavily outsourced staffing.

Governance is another important area where changes in the status quo can lead to additional benefits. For example, manually provisioned and managed environments that also have manual centralized governance processes and procedures typically have significant variance in what is actually deployed vs. what the process says should have been deployed: i.e. processes are rarely followed as closely as necessary. No matter how good the management systems, without automation and assignment, problems like Virtual Machine “sprawl” quickly become rampant. I’ve also seen scenarios where end users revolt because they were finally subjected to policies that had been in place for a while, but were routinely skipped by administrators manually provisioning systems. Implementing automation means being prepared to retool some of the more onerous policies as needed, but even with retooled processes, automated provisioning and management provides for a higher assurance level than is possible with manual processes.

Automation in IT environments is nothing new. However, today’s IT organizations can no longer rely solely on the traditional, operational way of doing things. Effective leadership of IT staff is critical to the organization’s ability to successfully transition from a traditional provider of in-house IT to an agile broker/provider of resources and services. Understanding that the cloud impacts much more than just technology is a great place to start. This doesn’t mean that organizations currently implementing cloud-enabling solutions need to jam on the brakes; just realize that the cloud is not a magic cure-all for staffing issues. Organizations need to evaluate the potential impact of shifting complexity to other teams and generally plan for disruption. Just as you would with any large-scale enterprise technology implementation, ensuring that IT staff has the appropriate skills to implement and maintain the desired end state will go a long way toward ensuring your success.


 

Justin Nemmers is the Executive Vice President of Marketing at CloudBolt Software, Inc. CloudBolt’s flagship product, CloudBolt C2, is a unified IT management platform that provides self-service IT and automated management/provisioning of on-premises and cloud-based IT resources. Prior to joining the team at CloudBolt, Nemmers held technical and sales-focused leadership roles at Salsa Labs and Red Hat, where he ran government services. Nemmers resides in Raleigh, NC with his wife and daughter.


Topics: IT Challenges, Cloud, People

The Conflict Between IT and Business: 5 Steps to a Solution.

Posted by Justin Nemmers

9/26/13 4:57 PM

IT exists to serve the business. The business is made up of users, with requirements, who need IT resources in a timely manner. Back when I was an admin with the government, we used to joke among the IT staff that our lives would be much easier without users. The funny thing is, I’ve found this to be a pretty common sentiment across IT organizations.

IT Conflicts With Business

The conflict between IT organizations and the businesses they are tasked with supporting has existed since non-technical business people started using IT. The IT enterprise is a fundamentally complicated environment that takes specific skills to craft and maintain. The language of enterprise IT is radically different from that of the business. IT speaks about servers, resources, software licenses, infrastructure, technology, and capacity. The business uses language like budgets, margins, time-to-market, cost accountability, end-user experience, responsiveness, compliance, advantage, reporting, and agility.

Because of these conflicting concerns, business often doesn’t appreciate the complexity of a seemingly simple request such as:

  • “I need more resources”, or
  • “What was the cost for a user’s project?” Or,
  • “We need X capability in our product.”

IT and business not speaking the same language results in obtuse responses:

  • “How many resources, and of what type, to be used for what purpose, and where?”
  • “Give us specifics about what the project was, and we’ll try to get the information.”
  • “What application do you want us to use?”

This back-and-forth doesn’t produce results as needed, and it is part of the reason IT and business frequently struggle when communicating about requirements, and why the business and the IT administrators, the ones tasked with keeping the enterprise IT environment moving in the right direction, end up with negative views of each other.

Current Tools are Not the Answer

By necessity, organizations have adopted various tools and technologies intended to help narrow this communication gap. IT teams employ all sorts of overlapping, complicated tools in their attempts to generate answers to business leadership’s difficult questions.

IT organizations’ attempts to answer these questions include single-purpose tools like chargeback managers, various IT and business intelligence tools, configuration managers, software license management tools, a CMDB, and so on. The problem is that none of these tools talk with one another, and many have overlapping capabilities. For instance, to answer a question about how many copies of a license are in use, where should an IT administrator look? The configuration manager might not tell you that a system has been decommissioned. The license manager might not capture multiple copies of the same license. Neither tool does a good job of assigning ownership.
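A small, hypothetical example makes the gap obvious. Suppose the configuration manager still lists a decommissioned host and the license manager carries a duplicate record; answering “how many copies are in use” requires reconciling both sources, which is exactly the cross-referencing work that otherwise lands in a spreadsheet:

# Hypothetical data illustrating the gap described above: the configuration
# manager still lists a decommissioned host, and the license manager has a
# duplicate entry. Reconciling the two is the work a unified manager absorbs.

config_mgr_hosts = {"app01": "active", "app02": "decommissioned", "db01": "active"}
license_mgr_records = [
    {"host": "app01", "product": "AcmeDB"},
    {"host": "app01", "product": "AcmeDB"},   # duplicate entry
    {"host": "app02", "product": "AcmeDB"},   # host no longer exists
]

def licenses_in_use(hosts, records):
    """Count one license per active host that actually holds the product."""
    active = {h for h, state in hosts.items() if state == "active"}
    return len({r["host"] for r in records if r["host"] in active})

print(licenses_in_use(config_mgr_hosts, license_mgr_records))  # -> 1, not 3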

So, to answer difficult business questions, and even with complicated and feature-rich tools, IT organizations are inevitably left with Excel spreadsheets, trying to track interrupt-driven requests with an error-prone and largely manual process. It’s unsustainable!

A Robust Cloud Manager Can Help Answer the Questions

Reconciling these issues does not have to be complicated or difficult, though. Using a complete cloud manager to gather real-time information from underlying tools such as virtualization and configuration management can eliminate the spreadsheet jockeying that otherwise has to happen, and it essentially eliminates the time needed to gather the data. Next, because a next-generation cloud manager abstracts the underlying technology, IT organizations are able to layer in additional tools to help complete the picture for end users and IT alike.

Even for organizations with relatively mature IT operations and processes, collecting relevant data can be notoriously difficult, and even in the best environments, end users are rarely treated to transparency in metrics like consumption, cost, and utilization. If you are a program manager, it would be nice to have real-time access to that information in order to chart your own team’s progress.

Actionable Data is the Answer

Consider your personal finances. Without a tool like Quicken or Mint (or any of the other similar tools), keeping track of every little inflow or outflow of cash would be a nightmare. Between iTunes purchases, Netflix subscriptions, cable television, Internet, car payments, restaurant bills, groceries, cell phones, insurance, and bar tabs, quickly answering questions with actionable data becomes difficult:

  • How much did I spend on entertainment last month?
  • What is my average spending on utilities over the past six months?
  • Which vehicle is costing me the most for gas?

The biggest difference between how IT and Business communicate is what data each views as actionable. Closing that gap with a tool that allows IT to provide Business with the information they need to make effective decisions will lessen the conflict and ease tensions between the two parties in any organization. 

The Steps to Helping IT Talk to Business

Given that current tools do a poor job of providing real business answers, how does an IT organization begin to implement the right tools and processes to effectively provide the needed information? 

1) Identify the information gaps.
What business questions does IT lack any real data on? These needs can range from information about deployed licenses, location and configuration of systems, or software supporting a given application, to what groups are using which resources. The types of information gaps present will dictate capability requirements of selected technology (or technologies). 

2) Embrace automation and IT self-service.
In the past, the idea of giving users access to self-service IT struck fear into the hearts of IT Administrators. Why? Being in constant control of their environments is part of the job description and letting users actually touch systems can radically affect system quality and uptime. When self-service IT is coupled with automation, and the automation platform can ensure the appropriate policies and procedures are followed, IT Administrators can rest assured that the Self-Service IT process is fully governed, and thus, they’re still in full control and quality is protected.

3) Make sound technology decisions.
When choosing technologies to fill the information gaps, look outside of your core vendors.

Going with the same vendor suite that provided your virtualization system might seem like a good idea, but promoting vendor lock-in at this level can be very costly for an IT organization, and limits choice and capability both initially and downstream.

Choosing a Cloud Manager that will play well with your existing and varied environments is also critical. IT Administrators must have the ability to make continued use of underlying management tools if needed. Discovery of virtual and cloud resources are critical: a Cloud Manager needs to overlay its tracking and measurements on top of existing environments.

Heterogeneity will be unavoidable, but heterogeneity itself is not an issue with the right Cloud Manager.

4) Ditch the Spreadsheets.
Fact: IT Administrators hate using spreadsheets to track critical aspects of the environments they manage. They’ll be relieved to know that there’s something else to track these metrics, and in real time at that!

5) Create and Schedule Reports
Using the requirements from step 1, use the Cloud Manager to create and automate reports that pull information from the various needed technology classes. For instance, reporting on a specific project’s IT cost would consolidate information on that team’s usage from your virtualization, configuration management, license management, and public cloud tools. And, of course, make sure that the Cloud Manager does the math for you.
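As a hedged sketch of that kind of report (the sources, rates, and figures below are made up; a real cloud manager would query its integrations rather than static dictionaries), consolidating a project’s monthly cost might look like this:

# Hedged sketch of step 5: pull a project's usage from several sources and
# let the tooling do the arithmetic. Sources and rates are illustrative only.

usage_by_source = {
    "virtualization":  {"vm_hours": 1200, "rate_per_hour": 0.04},
    "public_cloud":    {"vm_hours": 300,  "rate_per_hour": 0.09},
    "license_manager": {"licenses": 5,    "rate_per_license": 120.0},
}

def project_monthly_cost(usage):
    total = 0.0
    for source, u in usage.items():
        total += u.get("vm_hours", 0) * u.get("rate_per_hour", 0)
        total += u.get("licenses", 0) * u.get("rate_per_license", 0)
    return round(total, 2)

print(project_monthly_cost(usage_by_source))  # -> 675.0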

IT teams that work to build understanding about the types of questions Business wants answered will find more success. Select the right technology, and focus on delivering the types of actionable information the Business needs. Tweak it, refine it, and remember that it’ll change as the needs of the business shift. With the above points, however, you’ll be on the path to success.

Learn more about how CloudBolt C2 helps solve this problem.

 


Topics: IT Challenges, Business Challenges

Cloud Brokers: Don’t Buy One, Use a Cloud Manager to Be One

Posted by John Menkart

9/18/13 2:25 PM

“Private clouds will become hybrid, and enterprise IT organizations will move beyond the role of hosting and managing IT capability to becoming brokers of IT sourcing - delivered in many ways,” wrote Thomas Bittman, Gartner VP and Distinguished Analyst, ahead of the upcoming Gartner webinar “Hybrid Clouds and Hybrid IT: The Next Frontier” on 03 October 2013.

The IT world is abuzz with the term “Cloud Broker.” Seemingly every vendor wants your enterprise to buy “their Cloud Broker.” That they are so anxious to sell you a Cloud Broker is proof that they don’t fully understand the meaning of the term.

Become or Purchase a Cloud Broker/Provider

Today’s enterprise IT organizations are struggling to remain in control of internal and external IT resources being consumed by their business. These IT Organizations face a triple challenge in that they must:

  1. For security and accountability reasons, gain control of IT resources being provisioned and consumed by the Lines of Business, regardless of whether those resources are delivered from an internal community cloud or public clouds like AWS, Verizon/Terremark, or Rackspace.
  2. Be more oriented toward the Lines of Business in the enterprise. Hand waving in response to direct questions like ‘What is the cost associated with IT support for our engineering group?’ or ‘How much are we spending monthly on that customer service application for finance?’ is no longer acceptable. IT organizations have to deliver real answers.
  3. Be orders of magnitude faster and more responsive in providing access to internal IT resources. The speed and agility required to keep Lines of Business happy with their IT groups is well beyond the capabilities of most IT shops, and requires a level of IT automation found in a minority of organizations today.

Addressing all of these challenges requires that IT organizations implement a solution that manages IT resources in a unified way, regardless of whether the resources are deployed internally or externally in one or more public clouds. The managed resources need to be controlled and reported on in a business context-sensitive way. Finally, the solution needs to allow resources to be provisioned rapidly (and in a self-service manner) and effectively retired in an accountable and orderly fashion, regardless of location or the type of environment in which they reside.

When an IT organization addresses these challenges and functions in this manner, the IT organization itself has become both a Cloud Broker and a provider for its customers. Merely purchasing a Cloud Broker ignores the significant role IT organizations must play in the governance of their environments; enterprise IT risks irrelevance if it buys a Cloud Broker rather than becoming a broker/provider.

 


Topics: Public Cloud, IT Challenges, Private Cloud, John

The Cloud Management User Interface Last Mile

Posted by Justin Nemmers

8/19/13 8:26 AM

Cloud manager user interfaces are difficult to create.  Not only does every vendor out there have their own ideas, but the sheer number of UI toolkits available means that even if two vendors have similar ideas, the end results will look totally different based on the underlying technology.


This issue is only exacerbated when you look at the various management tools in use in the typical IT management environment.  An IT admin will often interface with half a dozen systems on a daily basis, each with its own UI, its own way of doing things, and its own workflows that must be separately understood.  It’s complicated.  And it just makes supporting a persnickety pile of users that much more difficult.

The user interface last mile is the point at which a cloud manager’s user interface effectively abstracts the various underlying technologies it manages.  The broader the supported underlying technologies are, the better the UI needs to be at presenting those capabilities in a sane, predictable manner.  Connecting with numerous different types of underlying technologies only makes the problem more difficult.

Creating a User Interface is hard work.  Creating a good User Interface requires even more significant effort, training, and understanding of how the design of the interface relates to the problem it’s trying to solve.  Form follows function, but in order to effectively create something, one must really understand the function.

Different designers, different perspective

Understanding the function alone isn’t really enough, though.  Different UI designers might perceive the functionality in different ways, creating significant differences in how the UI is implemented.  For starters, the designer’s level of experience with the target environment is an issue.  Terminology is another area that can cause difficulty.  Different designers may have different understandings of the commonly accepted terminology; for instance, the difference between software licenses being “used” vs. “deployed” is pretty important, as they’re two different things.

Where the designer’s primary experience originates from also makes a difference in the UI that they will produce.  Without the credibility and experience in the data center, the resulting UI can be confusing. 

Vendor Bias

An infrastructure or other large software vendor might very well use the UI as a tool to bias the end-user experience toward a specific technology or solution.  For instance, in a cloud manager, a large whole-suite vendor is likely to ensure that the cloud manager integrates better, and has better UI functionality, with other technologies in their stack like virtualization, orchestration, and configuration management, while choosing not to put the same effort into integration with third-party vendor products.  This approach causes two problems:

  • Reduces customer choice
  • Makes it difficult to add additional technologies as needed

This bias is an important thing to take into account when making a technology decision.  Vendor X may have a good product, but when it claims to be unbiased, it’s probably not true.

Disparate technology classes

In order for a UI to be effective, it has to do a good job of componentizing and standardizing display items and values so that it can fully abstract the underlying complexity from end users, who generally don’t care that applications are called “Classes” in Puppet, “Recipes” in Chef, and just “Applications” in HP Server Automation.  A well-written UI will make the right decision and present users and IT administrators with something that makes sense, irrespective of what the underlying technology might call it.  This also helps with extensibility, as IT administrators can implement new technologies without worrying that they’ll have to fight with end users about changing the nomenclature in an environment.
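A minimal sketch of that normalization, assuming a simple lookup table rather than any particular product’s implementation, might look like the following; the tool names mirror the examples above, and the mapping itself is illustrative:

# Minimal sketch of the normalization described above: each configuration
# management tool has its own term, and the UI layer maps all of them to one
# label before anything reaches an end user. The mapping is illustrative.

NATIVE_TERM_TO_DISPLAY = {
    ("puppet", "class"): "Application",
    ("chef", "recipe"): "Application",
    ("hp_server_automation", "application"): "Application",
}

def display_label(tool, native_term):
    return NATIVE_TERM_TO_DISPLAY.get((tool, native_term.lower()), native_term)

print(display_label("chef", "Recipe"))   # -> "Application"
print(display_label("puppet", "Class"))  # -> "Application"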

Different use cases

Both IT administrators and end users are target users of cloud managers.  Each of these user types, however, has a different understanding of what’s happening.  A good UI will effectively abstract the underpinnings from users, but perhaps make those same underlying components visible to administrators. 

Different users can also have different understanding levels.  A UI cannot be so strict as to mandate each and every user sees the same thing at all times.  Nearly every UI element needs to be customizable to effectively mold itself to match the user’s understanding. 

A cloud manager UI also needs to strike a balance between the potential for deeper-level administrative tasks (such as managing VM migration between hosts, or creating a new application in a CM tool) and general usability.  As cloud managers are just layers above the existing tools, there are things that will always make more sense to use the underlying tool to do.  From an administrative point-of-view, not every possible option and capability can be exposed.  This is important because while a good cloud manager UI will never be intended to accomplish every little underlying management capability, there needs to be a strong balance between breadth of capability and ease-of-use.  If the balance is off in either direction, administrators and users alike will be frustrated with the interface. 

The difficulties of open source UIs

Many open source tools come with UIs these days, but they end up suffering from the same problems described above.  The project teams tasked with building and maintaining these UIs are often employed by different companies, and unification doesn’t happen very frequently in these projects.  Even two products from the same company can have wildly different looks and feels based on who originally developed them.

This leads to a natural conflict of “your UI or mine”.  There ends up being inherent conflict between competing UIs, often leaving customers to choose between UIs depending on what task needs to be accomplished.  This, of course, somewhat defeats the purpose, as there is no true single pane of glass management. 

The Last Mile

All of these points come together to make a case for a tool that has both a powerful and flexible UI.  A UI in this case needs to do a few things to be useful:

  • Integrate with a wide range of technologies
  • Independently integrate with each technology class
  • Provide a mechanism to centrally access common tasks
  • Intuitively offer appropriate choices to various users

These aren’t trivial tasks to accomplish.  Getting the use cases right takes significant skill, expertise, and credibility in the data center.

In short, it’s not for the faint-of-heart, and not everyone can do it.  Have a look at CloudBolt C2 to see how the C2 User Interface is the most intuitive and powerful interface available in a cloud manager.


Topics: IT Challenges, Cloud Manager, Implementation

The C2 Cloud Manager Value Play: IT in a Business Context

Posted by Justin Nemmers

5/13/13 12:53 PM


The march toward simplicity in technology and data centers is one that grows more difficult with every technical innovation that occurs. For years, CIOs and IT managers have maintained that standardization on a select provider’s toolset will help simplify their IT enterprise. “Standardize!  Reduced fragmentation will set you free,” the typical IT vendors will shout. However, reality is just not that simple. I’ve made some other cases for why the mentality of strict standardization isn’t necessarily all it’s cracked up to be, but I’m going to take a different approach this time. 

One thing I hear pretty frequently when talking to customers’ non-IT leadership and management is that their IT organizations just don’t understand how the business actually needs to not only consume IT, but also track and measure various metrics from the IT organization in ways that make sense to the business.

Let’s look at this a bit more practically for a moment. I used this analogy with a CFO last week, and it resonated well in describing the real issue that the non-IT leadership types have with IT as a whole.

In a large pharmaceutical company, there is a fleet of company-owned cars. A recall is needed on one year of a particular model because of poor paint quality. Upon learning that information, the fleet manager can not only tell you exactly how many of those cars she has in her fleet, but also tell you exactly who each car is assigned to, which ones are green, and the home address of that car. The fleet manager is able to present information about her part of the business in a way that makes sense to management. How is it that IT does not operate with the same level of intelligence?

Now let’s apply the same thought process to IT. The CFO wants to know what percentage of the IT budget is being used by a particular project. Enter the IT organization. The real numbers behind the CFO’s request are daunting. The IT organization is juggling thousands of VMs, different licensing models and costs for software, different hardware, multiple data center locations, and a convoluted org chart, just to name a few. Different environments have different cost structures, and therefore add complexity to reporting because of the requirement to understand not just what a VM is running, but where it is running.

And that’s a relatively uncomplicated example. What happens when you start to add things like applications, software licenses, configuration management tools (HP SA! Puppet! Chef! Salt Stack!), multiple data centers, differing virtualization technologies (VMware! Xen! KVM!), multiple versions of the same technology, multiple project teams accessing shared resources, multiple Amazon Web Services public cloud accounts, and so on?

From a seemingly simple request, we have revealed the main frustration that non-IT leadership faces nearly every time they make a seemingly simple request. At the core of the problem is that the IT processes and technologies were not built to provide this transparency. Instead, technologies such as virtualization, cloud, and networking were designed and implemented to provide high availability and meet an SLA. They were not designed to offer reporting transparency or cost accountability. The end result: the business cannot understand IT, and vice versa.

The good news is that the capabilities needed to resolve this imbalance are available today. When implemented in an environment, CloudBolt enables IT managers to answer the questions their non-IT leadership is asking: “CloudBolt enables IT in a Business Context.” CloudBolt C2 solves more than just the problems that CIOs, CTOs, and IT directors and managers have. For the first time, C2 enables non-technical leadership to view IT in a way that’s analogous to how they look at any other portion of their business, which is good for both business and IT.

It’s time for IT in a Business Context. It’s time for Business-Driven IT.

Take a look at our Benefits Overview, and see how we can make a difference today.


Topics: IT Challenges, Enterprise, Business Challenges, IT Organization, Vendors

What is Plain Old Virtualization, Anyway? Not Cloud, That's What.

Posted by Justin Nemmers

5/6/13 3:57 PM


I speak with a lot of customers. For the most part, many understand that virtualization is not actually “Cloud”, but rather an underpinning technology that makes cloud (at least in terms of IaaS and PaaS) possible. There are many things that a heavily virtualized environment needs in order to become cloud, but one thing is for certain: “Plain Old Virtualization” needs to learn a lot of new tricks in order to effectively solve the issues facing today’s IT organizations.

Many of those same organizations find themselves constantly underwater when it comes to the expectations of the business they’re tasked with supporting. The business wants X; the IT organization has X minus Y resources. Cloud is an important tool that will help narrow this gap, but IT organizations need the right tools to make it happen.

Virtualization Alone is no Longer Sufficient

At plain old virtualization’s core is the virtualization manager. Whether it’s vCenter or XenServer, or some other tool, plain old virtualization lacks the necessary extensibility to get organizations to cloud. Even virtualization managers that have added some capabilities, like a self-service portal or metered usage accounting, are fundamentally just hypervisor managers, and typically they only focus on their own virtualization technology.

Plain old virtualization doesn’t understand your business, either. It is devoid of any notion of user or group resource ownership, and it lacks the flexibility needed to layer the organizational structure into the IT environment. Instead of presenting various IT consumption options, plain old virtualization tells an organization how it needs to consume IT: IT administrators have to get involved, chargeback isn’t possible, and the technology has little if any understanding of organizational or business structure.

Plain old virtualization is a solved problem. The value proposition for virtualization is well understood, and accepted in nearly every cross-section of IT. Virtualization managers have matured to enable additional features like high availability, clustering, and live migration, which have allowed IT organizations to remove some unneeded complexity from their stacks.

Failings of Plain Old Virtualization Managers

Many vendors that offer perfectly good plain old virtualization managers are in a process of metamorphosis. They’re adjusting their products, acquiring other technologies, and generally updating and tweaking their virtualization managers so they can claim to “enable cloud.” Whether the new capabilities are added as layered products that are components of a (much) larger solution suite or merely folded into an ever-expanding virtualization manager, the result is a virtualization manager that tries to be more than it is. The customer ultimately pays the price for that added complexity and often experiences increased vendor lock-in.

One of the many promises of cloud is that it frees IT organizations to make the most appropriate technology decisions for the business. This is where plain old virtualization that is trying to be cloud really gets an IT organization in trouble. Often, the capabilities presented by these solutions are not sufficient to solve actual IT issues, and the effort to migrate away from those choices is deemed too costly for IT organizations to effectively achieve without significant re-engineering or technology replacement.

CloudBolt Effectively Enables Cloud From Your Virtualization

The good news is that IT organizations don’t need to do entire reboots of existing tech in order to enable cloud in their environments. CloudBolt C2 works in conjunction with existing virtualization managers, allowing IT organizations to present resources to consumers in ways that make sense to both business and consumer alike. C2 does not require organizations to replace their existing virtualization managers; instead, it provides better management and a fully functional and interactive self-service portal so IT consumers can request servers and resources natively.

Flexibility in the management layer is critical, and one place where plain old virtualization tools fall down pretty regularly. C2 is a tool that effectively maps how your IT is consumed to how your business is organized. Try to avoid inflexible tools that offer your IT organization little choice now and even less going forward into the future.


Topics: IT Challenges, Management, Virtualization, IT Organization

Why Manual VM Provisioning Workflows Don't Work Anymore

Posted by Justin Nemmers

3/25/13 10:40 AM

Let’s look through a fictional situation that likely hits a little close to home.

An enterprise IT shop receives a developer request for a new server resource that is needed for testing. Unfortunately, the request doesn’t include all of the information needed to provision the server, so the IT administrator goes back to the developer and has a discussion about the number of CPUs and the amount of RAM and storage that are needed. That email back-and-forth takes a day. Once that conversation is complete, the IT admin creates a ticket and begins the largely manual workflow of provisioning a server. First, the ticket is assigned to the storage team to create the required storage unit. The ticket is addressed in two days and then passed on to the network team, which ensures that the proper VLANs are created and accessible and assigns IP addresses. The network team has been pretty busy, though, so their average turnaround is greater than four days. Then the ticket is handed back to the virtualization team, where the instance is provisioned, but not until two days later. Think it’s ready to hand off to the user yet?  Not yet.

An assembly-line model cannot deploy VMs as rapidly as needed. Automation is required.

The team that manages the virtual environment and creates the VMs is not responsible for installing software. The ticket is forwarded along to the software team, who, three days later, manually installs the needed software on that system, and verifies operation. The virtual server is still not ready to hand off to the developer, though!

You see, there’s also a security and compliance team, so the ticket gets handed off to those folks, who, a few days later, run a bunch of scans and compliance tests. Now that the virtual resource is in its final configuration, it’s got to be ready, right?  Nope. It gets handed off to the configuration management team, which must then thoroughly scan the system in order to create a configuration instance in the Configuration Management Database (CMDB). At last, the instance is ready to be delivered to the developer who requested it.

The tally is just shy of three full business weeks. What has the developer been doing in the meantime?  Probably not working to his or her full capacity.

Circumventing IT Completely with Shadow IT

Or maybe that developer got tired of waiting and, after two days, went around the entire IT team and ordered an instance from AWS that took five minutes to provision. The developer was so excited about getting a resource that quickly that they bragged to their fellow developers, who in turn started to use AWS.

Negative Effects on IT and the Business

Either way, this is a scenario that plays out repeatedly, and I’m amazed at how frequently it plays out just like this. The result might initially appear to be just some shadow IT, or maybe some VM sprawl from unused deployed instances; however, the potential damage to both the IT organization and the business is far greater.

First, users frequently circumventing the IT organization looks bad. These are actions that question the IT organization’s ability to effectively serve the business, and thus strike at the very heart of the IT group’s relevance.

Furthermore, the IT Consumers are the business. Ensuring that users have access to resources in near-real time should be a goal of every IT org, but rapidly adjusting and transforming the IT teams and processes doesn’t work as quickly as demand changes. This means that the IT org cannot respond with enough agility to continually satisfy the business needs, which in turn potentially means more money is spent to provide less benefit, or even worse, the business misses out on key opportunities.

IT shops need to move beyond simple virtualization and virtualization management. Why? Improved virtualization management does not solve all of the problems presented in the scenario above while (and this is key here) also providing for continued growth. Tools that only manage virtualization solve only part of the problem, because they are unable to properly unify the provisioning process around software (by going beyond plain template libraries with tools like HPSA, Puppet, or Chef) and other external mechanisms (like a CMDB). In order to fully modernize and adapt existing processes and teams to a cloud/service-oriented business model, all aspects of the provisioning process must be automated, as sketched below. It’s the only way an IT organization can hope to stay responsive enough and avoid being locked into one particular solution, such as a single-vendor approach to virtualization. A well-designed and implemented Cloud Manager will give an IT org the freedom to choose the best underlying technology for the job, without regard for how it will be presented to the end user.
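As a hedged sketch of what “all aspects automated” means for the scenario above (every step below is a stand-in, not a real integration or product API), the same hand-offs can be expressed as a single pipeline that completes in one pass instead of weeks:

# The same hand-offs from the ticket relay (storage, network, VM, software,
# security scan, CMDB) expressed as one automated pipeline. Step names are
# placeholders; a real cloud manager would call each owning system's API.

from datetime import datetime

PIPELINE = [
    "allocate_storage",
    "configure_vlan_and_ip",
    "create_vm",
    "install_software",      # e.g., via a Puppet/Chef/HPSA integration
    "run_compliance_scan",
    "register_in_cmdb",
]

def fulfill_request(requester, cpu, ram_gb, disk_gb):
    started = datetime.now()
    record = {"requester": requester, "spec": (cpu, ram_gb, disk_gb), "steps": []}
    for step in PIPELINE:
        # Each step would dispatch to the owning system; here we just log it.
        record["steps"].append(step)
    record["elapsed_seconds"] = (datetime.now() - started).total_seconds()
    return record

print(fulfill_request("dev-team", cpu=2, ram_gb=8, disk_gb=100))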

Either way you look at it, IT organizations need a solution which will allow them to utilize as much of their existing assets as possible while still providing the governance, security, and serviceability needed to ensure the company’s data and services are well secured and properly supported.

The Solution

Thankfully, there is just such a Cloud Manager. CloudBolt C2 is built by a team with decades of combined experience in the systems management space, and it was created from the beginning to solve this exact problem. Because we started from the first line of code to solve this entire problem, we call ourselves the next-generation cloud manager, but our customers call it a game changer. Give it a download and an effortless install today, and we’ll show you that CloudBolt C2 means business.


Topics: Customer, IT Challenges, Management, Virtualization, Cloud Manager, Shadow IT, Agility