
CloudBolt Blog

7 Takeaways From the Red Hat Summit

Posted by Justin Nemmers

6/19/13 8:27 AM

Part of the CloudBolt team at Red Hat Summit 2013 (John Menkart, Justin Nemmers, Colin Thorp, and Jesse Newell). Sales Director Milan Hemrajani took the picture.

A few sales folks and I have returned from a successful Red Hat Summit in Boston, MA. With over 4,000 attendees, we were able to leverage an excellent booth position to talk to many hundreds of people. One of the things that I love about my role here at CloudBolt is that I am constantly learning. I particularly enjoy speaking with customers about the types of problems they run across in their IT environments, and I take every chance I can to learn more about their IT challenges. Some of these are common themes we hear a lot here at CloudBolt, and a few were a bit surprising, as some organizations are still in earlier stages of their modernization efforts than I would have expected.

  1. Not everyone has heavily virtualized their enterprise.
    Sure, there are some environments where virtualization doesn’t make a lot of sense, such as parallelized but tightly CPU-bound workloads, or HPC environments. What surprised me, though, was the number of organizations I spoke with that made little or very limited use of virtualization in the data center. It’s not that they didn’t see the value of it; more often than not, they still made use of Solaris on SPARC, or had old-school management that had not yet accepted that running production workloads on virtualized servers is long-established common practice. For these folks and others, I’d like to introduce a topic I call “Cloud by Consolidation” (in a later blog post).
     
  2. Best-of-Breed is back.
    Organizations are tired of being forced to use a particular technology just because it came with another product, or because it comes from a preferred vendor. Too often, an IT organization is pressed to use a sub-optimal technology because it arrived as part of a larger suite. Forcing an ill-fitting product onto a problem often results in longer implementation times, which consume more team resources than simply implementing the right technology for the problem at hand. Your mechanic will confirm that the right tool makes any job easier. It’s no different with enterprise software.
     
  3. Customers are demanding reduced vendor lock-in.
    IT organizations have a broad range of technologies in their data centers. They need a cloud manager that can effectively manage not just what they have installed today, but what they want to install tomorrow. For example, a customer might have VMware vCenter today, but is actively looking at moving more capacity to AWS. Alternatively, they have one data center automation tool and are looking to move to another (see point #4 below). Another scenario is not having to wait for a disruptive technology to be better supported before implementing and testing it in your own environment, all while managing it with existing technology. Good examples:
    • The gap between CloudForms (formerly ManageIQ) and its ability to manage OpenStack implementations
    • Nicira software-defined networking and the ability to manage it with vCloud Automation Center (vCAC, formerly DynamicOps)
    Either way, customers are tired of waiting as a result of vendor lock-in.
     
  4. Customers are increasingly implementing multiple Data Center Automation (DCA) tools. 
    This is interesting because it used to be that an IT organization would purchase a single DCA technology and implement it enterprise-wide. I was surprised by the number of customers that were actively pursuing a multiple-DCA strategy in their environments. Our booth visitors reported that they primarily used HP Server Automation, and to a lesser extent BMC BladeLogic. Puppet and Chef were popular tools that organizations are implementing in growth or new environments, like new public cloud environments. Either way, these customers see definitive value in using CloudBolt C2 to present DCA-specific capabilities to end users, significantly increasing the power of user self-service IT while decreasing complexity in the environment.
     
  5. Lots of people are talking about OpenStack. Few are using it.
    For every ten customers who said they were looking at OpenStack, ten said they were not yet using it. There’s certainly been an impressive level of buzz around OpenStack, but we haven’t seen a significant number of customers that have actually installed it and are attempting to use it in their environments. I think Red Hat’s formal entry into this space will help, because they have a proven track record of taming a seemingly untamable mix of rapidly-changing open source projects into something that’s supportable in the enterprise. That does not, however, mean that customers will make wholesale moves from their existing (and largely VMware-based) virtualization platforms to OpenStack. Furthermore, there is still significant market confusion about what Red Hat is selling. Is it RHEV? Is it OpenStack? Do I need both? These are all questions I heard more than once from customers in Boston.
     
  6. Open and Open Source aren’t the same thing.
    I spent too many years at Red Hat not to know this, but I feel it’s extremely important to mention here. Many customers told us that they wanted open technologies—but in these cases, “open” meant tools and technologies that were flexible enough to interoperate with many other technologies and reduce overall vendor lock-in. Sure, an Open Source development model could be a plus, but these customers were most interested in their tech working, working well, and working quickly.
     
  7. Most IT orgs want chargeback, but few businesses are willing to accept it.
    Thus far, the only groups I’ve chatted with who actually use some chargeback mechanism are service providers with external customers. Pretty much every other IT organization faces significant pressure against chargeback from the businesses it supports. Showback pricing helps counter this resistance, and over time should help more IT organizations win the battle over chargeback. IT organizations should be leaping at the chance to collect and report per-group or per-project costs: it’s a critical piece of information that businesses need to make effective decisions. Business-Driven IT has been a necessary step in the evolution of IT for a long, long time. IT needs to make visible to the business the kinds of information the business needs to make effective decisions. And on the flip side, the business needs to accept that its teams and projects will be held responsible for their IT consumption.

So how do you get started recognizing the value of integrating IT with the business? Start here.

We’re looking forward to exhibiting at the next Red Hat Summit, which is slated to be held in San Francisco’s Moscone North and South exhibition center. And if you thought we made a big splash at this year’s summit… just wait until you see what we have in the works!


Topics: Virtualization, Cloud, Enterprise, Red Hat, Challenges, Vendors

CloudBolt C2 is the Cloud Manager for the Dell Cloud for Government

Posted by Justin Nemmers

6/11/13 10:57 AM

I’m thrilled to announce that CloudBolt C2 is the Cloud Manager Dell is using in their recently announced Dell Cloud for US Government. In this solution, CloudBolt C2 provides the automated workflows, provisioning, rapid scalability, and metered pricing customers need in order to become their own cloud provider.

Dell Cloud for US Government uses CloudBolt C2

The Dell solution enables organizations to take advantage of the cloud delivery model to provide a range of on-demand resources to end users in a predictable and reliable manner, all while using infrastructure that meets various US Government security criteria including:

  • NIST 800-53
  • FedRAMP
  • FISMA Low and Moderate
  • DIACAP
  • NIACAP
  • HIPAA 

This solution is being offered two ways:

  • Dedicated solution either hosted or installed in a customer environment
  • Hosted multi-tenant on-demand cloud

Either way, Dell can deliver the solution in a manner that meets the broad range of security criteria government customers face.

This solution required a powerful Cloud Manager: one that not only offers an intuitive, easy-to-use interface, but also supports multi-tenant environments as easily as single-tenant ones. C2’s Section 508 compliance, robust orchestration layer, and ability to plug into nearly any required technology made it the natural and secure fit for the Dell Cloud for US Government solution.

Dell offers this solution with a flexible acquisition model: it can be purchased with capacity for as few as 100 VMs, all the way up to 100,000 or more. The Dell Cloud for US Government solution can be Dell-hosted, installed in a customer environment, or offered as a hybrid model. No matter how you choose to consume it, the capabilities and certifications are the same. Dell has rolled in over 270 security controls to help customers attain and track any ATOs needed to run in their environments.

Dell’s FedRAMP Cloud builds on the capabilities of the NIST Dedicated Cloud solution and adds the required security controls to achieve FedRAMP certification. This Dell-hosted multi-tenant environment allows public cloud-like metered on-demand access to secure computing resources. Because this solution comes with FedRAMP certification, no additional ATOs are needed for those agencies able to run FedRAMP-approved solutions.

Dell Federal Services CTO Jeff Lush has a series of YouTube videos where he highlights the capabilities of this solution.

Customers will use CloudBolt C2 to request on-demand Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) resources, which will automatically be provisioned, tracked, and managed on an ongoing basis. Organizations that deploy the dedicated solution will get access to the full suite of C2 capabilities, including multi-cloud management, which will enable those customers to manage other virtualization or cloud environments as well.

CloudBolt C2’s power and flexibility were key reasons why Dell chose C2 for this solution. Interested in learning more? Give us a ring at 703.665.1060.

(FedRAMP stands for Federal Risk and Authorization Management Program. See more info about that here.)

(Dell is a registered trademark of Dell, Inc.)



Topics: News, Cloud, Private Cloud, Government, Vendors

Have Your Cloud Vendors Spent Time in the Data Center?

Posted by Justin Nemmers

5/30/13 5:34 PM


As part of a project kickoff meeting yesterday, I walked through a massive data center in Northern VA. It’s the same one that houses huge portions of the Amazon Web Services’ us-east region, amongst nearly every other major ‘who’s who’ of the Internet age.

Of the various people that accompanied me on this tour, there were several that marveled at both the expansive magnitude, as well as the seemingly strict order of cages, racks, and hallways alike. Seeing this all through the eyes of folks that had not been in a data center before got me thinking about a simple question: “Has your vendor spent time in a data center?”

I pose this question both literally and figuratively. For enterprises, the data center is more than just a location. The data center encompasses not just a location, but business logic, processes, software, licenses, infrastructure, personnel, technology, and data. Saying something is “in the data center” imparts a certain gravity, meaning that a person has implied capability, responsibility, and knowledge. For a technology, being in the data center means that it’s likely a critical component of the business. By being “in the data center”, a technology has most likely met numerous standards for not just functionality to the business, but also reliability and security.

When it comes to IT environments, then, there are really two categories: the technologies, people, and businesses that have experience working in a data center, and everyone and everything else. Nowhere is this notion more important, or more true, than in enterprise IT.

Innovation happens in the data center because of the unique problems encountered with IT at scale. If a vendor is not familiar with the types of issues organizations face at data center scale, they’ll likely discover numerous limitations in capability and process alike. A bit more insidious: such a vendor may not understand how IT organizations interact with and manage the data center environment in the first place.

Actual results may vary, but I’d venture that many solutions born in places other than the data center tend to cost more to implement and have thornier integration issues than promised. The likely reason: they force IT organizations and the business to wholly change their approach, rather than presenting a technology that fuses well with existing processes and leaves those organizations the choice of when and how to evolve.

IT organizations need solutions born in and for the data center. Looking to a team that has significant experience building, managing, selling to, and supporting the data center environment can be a significant benefit to IT organizations. Thankfully, CloudBolt is just one such company with substantial data center experience. This results in C2 being designed and built with the data center in mind. This has several effects:

  • For one, we’ll understand your actual problem, not the problem we, as a vendor, want you to have.
  • Two, we’ll be disinclined to wedge our product in places where it doesn’t fit well, because we know what it takes to support a product in the enterprise, and we definitely don’t want to support an ill-fitting product in a data center.
  • Lastly, we’ll allow you to implement the new tool while continuing business as usual. No sweeping, massive change required up front.

Collectively, our team has spent over 40 person-years in the data center. It shows in how we interact with customers, and it definitely shows in our product. 

Why not give our whitepaper a once-over, and then take C2 for a test drive?


Topics: Enterprise, Challenges, Vendors, Data Center

New Release: CloudBolt C2 v3.7.2 advances capabilities

Posted by Justin Nemmers

5/30/13 8:49 AM


Our engineering team continues to incorporate significant capability into the C2 platform. Version 3.7.2 includes several updates intended to improve the end-user experience as well as admin-specific features which further C2’s benefit to IT administrators. Despite the added features, we continue to innovate with an eye toward intuitive interfaces and controls. In C2’s case, added benefit does not equal added complexity.

On to the improvements! First, the big one: C2 now ships with and supports an OVA template that enables IT administrators to rapidly deploy C2 on XenServer. We’ve offered this capability on VMware vCenter since the beginning, but now C2 offers the same time-to-value for those who have selected a different virtualization manager.

In larger deployments, the administrator view of the C2 Job List can be quite long. To help with this, we’ve added both a better UI for viewing job statistics and the ability to filter the job list by job type.

Sticking with this theme, we also made significant improvements to the page C2 Administrators use to control Server Build Defaults. Administrators now have an easier way to add new default settings, or change existing ones, for the parameters required by the underlying virtualization manager.

C2 Administrators have long had the ability to set resource group quotas in C2. In this latest release, we’ve made point improvements to how this system operates behind-the-scenes. The result is a more robust quota framework that ensures C2 Administrators are continually able to rely on C2 to effectively enforce resource limits set on the group level.

Several customers have asked that C2 provide them the ability to automatically test the network connection for a provisioned server. We were happy to oblige. C2 can now ping the server’s IP to ensure the network was set up correctly before completing the provisioning job.
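CloudBolt doesn’t publish the internals of this check, but the pattern is easy to sketch. Here is a minimal post-provisioning reachability check; the function names and retry parameters are assumptions for illustration, not C2’s actual implementation:

```python
import subprocess
import time


def ping_once(ip: str, timeout_s: int = 2) -> bool:
    """Return True if a single ICMP ping to `ip` succeeds (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def wait_for_server(ip: str, probe=ping_once, retries: int = 10,
                    delay_s: float = 5.0) -> bool:
    """Poll `probe(ip)` until it succeeds or `retries` attempts are exhausted.

    A provisioning job would call this before marking itself complete.
    """
    for _attempt in range(retries):
        if probe(ip):
            return True
        time.sleep(delay_s)
    return False
```

Separating the probe from the retry loop makes the check easy to swap out (for example, a TCP connect to port 22 in environments that filter ICMP) and easy to test.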

Lastly, on the C2 Administrator side, we’ve made some back-end Apache configuration tweaks and changes to further improve the UI performance.

Of course, we didn’t forget about you IT consumers out there!  C2 Administrators have been able to modify the ordering process to match the end user’s level of ability and understanding. In version 3.7.2, C2 Administrators are now able to add tool tips that will be displayed in the ordering process. These tool tips give C2 Administrators a way to explain required inputs to end users in a clear and concise manner. In this case, as always, the C2 interface is modified from within the C2 interface. No need for costly developers, or SDK licenses.

Happy provisioning with the well-oiled machine that is CloudBolt Command and Control!


Topics: Feature, VMware, Release Notes, vCenter

The C2 Cloud Manager Value Play: IT in a Business Context

Posted by Justin Nemmers

5/13/13 12:53 PM


The march toward simplicity in technology and data centers is one that grows more difficult with every technical innovation that occurs. For years, CIOs and IT managers have maintained that standardization on a select provider’s toolset will help simplify their IT enterprise. “Standardize!  Reduced fragmentation will set you free,” the typical IT vendors will shout. However, reality is just not that simple. I’ve made some other cases for why the mentality of strict standardization isn’t necessarily all it’s cracked up to be, but I’m going to take a different approach this time. 

One problem I hear frequently when talking to customers’ non-IT leadership and management is that their IT organizations just don’t understand how the business needs to not only consume IT, but also track and measure metrics from the IT organization in ways that make sense to the business.

Let’s look at this a bit more practically for a moment. I used this analogy with a CFO last week, and it resonated well in describing the real issue that the non-IT leadership types have with IT as a whole.

In a large pharmaceutical company, there is a fleet of company-owned cars. A recall is needed on one year of a particular model because of poor paint quality. Upon learning that information, the fleet manager can not only tell you exactly how many of those cars she has in her fleet, but also tell you exactly who each car is assigned to, which ones are green, and the home address of that car. The fleet manager is able to present information about her part of the business in a way that makes sense to management. How is it that IT does not operate with the same level of intelligence?

Now let’s apply the same thought process to IT. The CFO wants to know what percentage of the IT budget is being used by a particular project. Enter the IT organization. The real numbers behind the CFO’s request are daunting. The IT organization is juggling thousands of VMs, different licensing models and costs for software, different hardware, multiple data center locations, and a convoluted org chart, just to name a few. Different environments have different cost structures, and therefore add complexity to reporting because of the requirement to understand not just what a VM is running, but where it is running.

And that’s a relatively uncomplicated example. What happens when you start to add things like applications, software licenses, configuration management tools (HP SA! Puppet! Chef! Salt Stack!), multiple data centers, differing virtualization technologies (VMware! Xen! KVM!), multiple versions of the same technology, multiple project teams accessing shared resources, multiple Amazon Web Services public cloud accounts, and so on?
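To make the CFO’s question concrete, here’s a hypothetical sketch of the rollup IT would need to produce. Every name, rate, and tag below is invented for illustration; the point is that the arithmetic is trivial once each VM carries a project tag and an environment-specific cost structure:

```python
from collections import defaultdict

# Hypothetical per-VM-hour rates, one per environment (real cost models
# would also cover licensing, storage tiers, and data center overhead).
RATES = {"dc-east-vmware": 0.12, "dc-west-kvm": 0.09, "aws-us-east": 0.20}

# Hypothetical inventory: each VM tagged with a project and an environment.
vms = [
    {"project": "drug-trials", "env": "dc-east-vmware", "hours": 720},
    {"project": "drug-trials", "env": "aws-us-east",    "hours": 300},
    {"project": "erp-upgrade", "env": "dc-west-kvm",    "hours": 720},
]


def cost_by_project(vms, rates):
    """Roll VM run-time up into per-project cost, respecting where each VM ran."""
    totals = defaultdict(float)
    for vm in vms:
        totals[vm["project"]] += vm["hours"] * rates[vm["env"]]
    return dict(totals)


def budget_share(vms, rates):
    """Answer the CFO's question: what percent of spend does each project consume?"""
    totals = cost_by_project(vms, rates)
    grand = sum(totals.values())
    return {p: round(100 * c / grand, 1) for p, c in totals.items()}
```

The hard part in practice is not this rollup; it’s getting every VM tagged with a project and priced by where it runs in the first place.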

From a seemingly simple request, we have revealed the main frustration the non-IT leadership faces nearly every time they make one. At the core of the problem: IT processes and technologies were not built to provide this transparency. Instead, technologies such as virtualization, cloud, and networking were designed and implemented to provide high availability and meet an SLA. They were not designed to offer reporting transparency or cost accountability. The end result: the business cannot understand IT, and vice versa.

The good news is that the capabilities needed to resolve this imbalance exist today. When implemented in an environment, CloudBolt enables IT managers to answer the questions their non-IT leadership is asking: CloudBolt enables IT in a Business Context. CloudBolt C2 solves more than just the problems that CIOs, CTOs, and IT directors and managers have. For the first time, C2 enables non-technical leadership to view IT in a way that’s analogous to how they look at any other portion of the business, which is good for both the business and IT.

It’s time for IT in a Business Context. It’s time for Business-Driven IT.

Take a look at our Benefits Overview, and see how we can make a difference today.


Topics: IT Challenges, Enterprise, Business Challenges, IT Organization, Vendors

What is Plain Old Virtualization, Anyway? Not Cloud, That's What.

Posted by Justin Nemmers

5/6/13 3:57 PM


I speak with a lot of customers. For the most part, many understand that virtualization is not actually “Cloud”, but rather an underpinning technology that makes cloud (at least in terms of IaaS and PaaS) possible. There are many things that a heavily virtualized environment needs in order to become cloud, but one thing is for certain: “Plain Old Virtualization” needs to learn a lot of new tricks in order to effectively solve the issues facing today’s IT organizations.

Many of those same organizations find themselves constantly underwater when it comes to the expectations from the business they’re tasked with supporting. The business wants X, the IT organization has X-Y resources. Cloud is an important tool that will help narrow this gap, but IT organizations need the right tools to make it happen.

Virtualization Alone is no Longer Sufficient

At plain old virtualization’s core is the virtualization manager. Whether it’s vCenter or XenServer, or some other tool, plain old virtualization lacks the extensibility needed to get organizations to cloud. Even virtualization managers that have added capabilities like a self-service portal or metered usage accounting are fundamentally just hypervisor managers, and they typically focus only on their own virtualization technology.

Plain old virtualization doesn’t understand your business, either. It is devoid of any notion of user or group resource ownership, and lacks the flexibility needed to layer the organizational structure into the IT environment. Instead of presenting various IT consumption options, plain old virtualization tells an organization how it must consume IT—in other words, IT administrators have to get involved, chargeback isn’t possible, and the technology has little if any understanding of organizational or business structure.

Plain old virtualization is a solved problem. The value proposition for virtualization is well understood, and accepted in nearly every cross-section of IT. Virtualization managers have matured to enable additional features like high availability, clustering, and live migration, which have allowed IT organizations to remove some unneeded complexity from their stacks.

Failings of Plain Old Virtualization Managers

Many vendors that offer perfectly good plain old virtualization managers are in a process of metamorphosis. They’re adjusting their products, acquiring other technologies, and generally updating and tweaking their virtualization managers so the vendors can claim they “enable cloud”. Whether the new capabilities are added as layered products that are components of a (much) larger solution suite, or merely folding those capabilities into an ever-expanding virtualization manager, the result is a virtualization manager that tries to be more than it is. The customer ultimately pays the price for that added complexity, and often, experiences increased vendor lock-in.

One of the many promises of cloud is that it frees IT organizations to make the most appropriate technology decisions for the business. This is where plain old virtualization that is trying to be cloud really gets an IT organization in trouble. Often, the capabilities presented by these solutions are not sufficient to solve actual IT issues, and the effort to migrate away from those choices is deemed too costly for IT organizations to effectively achieve without significant re-engineering or technology replacement.

CloudBolt Effectively Enables Cloud From Your Virtualization

The good news is that IT organizations don’t need to do entire reboots of existing tech in order to enable cloud in their environments. CloudBolt C2 works in conjunction with existing virtualization managers, allowing IT organizations to present resources to consumers in ways that make sense to both business and consumer alike. C2 does not require organizations to replace their existing virtualization managers; instead, it provides better management and a fully functional and interactive self-service portal so IT consumers can request servers and resources natively.

Flexibility in the management layer is critical, and one place where plain old virtualization tools fall down pretty regularly. C2 is a tool that effectively maps how your IT is consumed to how your business is organized. Try to avoid inflexible tools that offer your IT organization little choice now and even less going forward into the future.


Topics: IT Challenges, Management, Virtualization, IT Organization

CloudBolt Releases v3.7.1 of C2, the Next Generation Cloud Manager

Posted by Justin Nemmers

4/29/13 8:12 AM


We continue to innovate C2, the next-generation cloud manager that provides the easiest self-service IT portal available on the market.

The development team has been hard at work, and is proud to announce the release of CloudBolt C2 v3.7.1. We've focused this release on feature enhancements that provide more information to the end user, as well as more administrative control over certain aspects of provisioning through C2 and managing C2 resources.

For end users and administrators alike, we've added a progress bar to the order details page. As jobs are executed, the progress bar shows the requestor the provisioning job’s progress. Because C2 allows for multi-server and multi-environment orders, this progress bar works equally well for single-server and multi-server orders.
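The release notes don’t say how the progress bar aggregates a multi-server order; one plausible sketch is to weight each server by its total number of provisioning steps, so large servers aren’t under-counted (the step counts below are made up):

```python
def order_progress(jobs):
    """Aggregate percent-complete for a multi-server order.

    `jobs` is a list of (steps_done, steps_total) pairs, one per server.
    Weighting by total steps keeps servers with long build plans from
    being under-counted relative to quick ones.
    """
    done = sum(d for d, _ in jobs)
    total = sum(t for _, t in jobs)
    return 0 if total == 0 else round(100 * done / total)


# A two-server order: one server halfway through, one finished.
percent = order_progress([(5, 10), (10, 10)])  # 15 of 20 steps -> 75
```

A UI can then render this single number as the order-level progress bar, regardless of how many servers or environments the order spans.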

When ordering a virtual machine, the requestor can now see the quota impact of the request in the ordering window. We have found that making this information visible to the end user results in more efficient use of resources.

Administrators also have a new UI for managing CloudBolt Resource Pools. CloudBolt Resource Pools allow administrators to create pools of IP addresses, VNC ports, and other parameters that would normally require manual input. For example, an administrator can assign a pool of IP addresses to C2, which will then automatically select a resource and mark it as used when needed. CloudBolt will not allow a resource to be re-used until the existing resource is released. This allows C2 to further automate previously manual processes.
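The post doesn’t show the mechanics, but the acquire/release contract it describes can be sketched in a few lines (the class and method names are hypothetical, not C2’s actual API):

```python
class ResourcePool:
    """A pool of discrete resources (IP addresses, VNC ports, ...) that hands
    each value out exactly once until it is explicitly released."""

    def __init__(self, values):
        self._free = list(values)   # available, in hand-out order
        self._in_use = set()        # currently assigned

    def acquire(self):
        """Hand out the next free resource and mark it as used."""
        if not self._free:
            raise RuntimeError("pool exhausted")
        value = self._free.pop(0)
        self._in_use.add(value)
        return value

    def release(self, value):
        """Return a resource to the pool so it may be re-used."""
        if value not in self._in_use:
            raise ValueError(f"{value!r} was not acquired from this pool")
        self._in_use.remove(value)
        self._free.append(value)


pool = ResourcePool(["10.0.0.10", "10.0.0.11"])
ip = pool.acquire()  # hands out 10.0.0.10 and marks it used
```

The key property is the one the post calls out: a value cannot be handed out twice until it has been released, which is what lets a provisioning job safely pull IPs or ports without manual bookkeeping.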

Also benefiting administrators is a more efficient VM synchronization engine. C2 now ingests and synchronizes VM information from VMware more efficiently, which has resulted in a significant performance improvement. This means that changes made to VMs via the vSphere or vCenter interface are detected by C2 more rapidly.

In v3.7.1, we've made similar performance improvements to the AMI import process for the Amazon Web Services connector.

Our deep integration with HP Server Automation continues to remain a priority. In addition to officially supporting HP Server Automation v9.14, we now support the import and installation of HP Server Automation OS Build Plans through the C2 interface. If you haven’t tried unifying your virtualization with your data center automation/configuration management, you’re really missing out.

As you can see, we've been pretty busy, but this was just a dot release! We've got some great things cooking in our development shop that you won't want to miss.


Topics: Feature, Upgrade, Release Notes

CloudBolt Releases C2 v3.7.0

Posted by Justin Nemmers

4/8/13 9:03 AM

On behalf of the entire team here at CloudBolt, I’m excited to announce the release of CloudBolt C2 version 3.7.0. 

We continue to pull out all of the development stops in CloudBolt C2.  This latest version adds numerous improvements, and new features to help reduce the burden of IT and cloud management.

C2 updates in v3.7.0 make AWS easier

Amazon Web Services (AWS) support continues to strengthen. CloudBolt C2 will now auto-create region-specific environments based on administrator-selected EC2 regions. C2 will also list the various AMIs admins want to make accessible to their users. The best part is that this works for any EC2-provided AMI as well as customer-specific AMIs. C2 also now provides richer discovery of running EC2 instances, so the server list and individual server views in C2 contain even more information about the related EC2 instance. Keep your eyes peeled, because we’re going to continue adding capabilities to the AWS connector.
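C2’s connector code isn’t public, but the region-to-environment mapping described above is straightforward to sketch. The record shape, environment names, and AMI allow-list below are all invented for illustration:

```python
def build_environments(selected_regions, allowed_amis):
    """Create one environment record per admin-selected EC2 region, each
    exposing only the AMIs the admin has approved for end users.

    `allowed_amis` maps region name -> list of approved AMI IDs; a region
    with no entry simply gets an empty image list.
    """
    return [
        {
            "name": f"AWS {region}",
            "region": region,
            "images": sorted(allowed_amis.get(region, [])),
        }
        for region in selected_regions
    ]


envs = build_environments(
    ["us-east-1", "eu-west-1"],
    {"us-east-1": ["ami-1234", "ami-0042"]},
)
```

In a real connector the AMI list would come from the EC2 API rather than a hand-written dict, but the environment-per-region structure is the same idea.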

We believe that Network Virtualization from folks like Nicira by VMware will drastically change how IT organizations manage enterprises.  Our engineers have now enabled Network Virtualization support in the KVM connector, meaning administrators can now create and deploy virtualized networks on KVM-backed hosts using CloudBolt C2. 

One powerful aspect of CloudBolt C2 is its ability to apply actions to systems across environments. Part of what was needed here was the ability for users to multi-select where it makes sense to do so. C2 now supports multi-select in the appropriate dialog boxes.

Provisioning instances is what CloudBolt C2 was built to do. When a user has no knowledge of what’s going on behind the scenes, it’s good practice to at least let them know that something is happening. With 3.7.0, C2 does a better job of showing users the provisioning progress of their instances.


Topics: Innovation, Feature, Cloud Manager, Upgrade, Release Notes, AWS

Why Manual VM Provisioning Workflows Don't Work Anymore

Posted by Justin Nemmers

3/25/13 10:40 AM

Let’s look through a fictional situation that likely hits a little close to home.

An enterprise IT shop receives a developer request for a new server resource needed for testing. Unfortunately, the request doesn’t include all of the information needed to provision the server, so the IT administrator goes back to the developer for a discussion about the number of CPUs and the amount of RAM and storage needed. That email back-and-forth takes a day. Once that conversation is complete, the IT admin creates a ticket and begins the largely manual workflow of provisioning a server. First, the ticket is assigned to the storage team to create the required storage unit. The ticket is addressed in two days, and then passed on to the network team to ensure that the proper VLANs are created and accessible, and to assign IP addresses. The network team has been pretty busy, though, so their average turnaround is greater than four days. Then the ticket is handed back to the virtualization team, where the instance is provisioned, but not until two days later. Think it’s ready to hand off to the user yet? Not yet.

An assembly-line model cannot deploy VMs as rapidly as needed. Automation is required.

The team that manages the virtual environment and creates the VMs is not responsible for installing software. The ticket is forwarded along to the software team, who, three days later, manually installs the needed software on that system, and verifies operation. The virtual server is still not ready to hand off to the developer, though!

You see, there’s also a security and compliance team, so the ticket gets handed off to those folks, who a few days later, run a bunch of scans and compliance tests. Now that the virtual resource is in its final configuration, it’s got to be ready, right?  Nope. It gets handed off to the configuration management team, who must then thoroughly scan the system in order to create a configuration instance in the Configuration Management Database (CMDB). Finally, the instance is ready to be delivered to the developer who requested it.

The tally is just shy of three full business weeks. What has the developer been doing in the meantime?  Probably not working to his or her full capacity.
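The tally above is easy to verify with a bit of back-of-the-envelope arithmetic. This sketch sums the sequential handoff delays from the scenario; the security-scan and CMDB steps are assumed values, since the story only says "a few days" for each:

```python
# Sum the sequential handoff delays from the scenario (in business days).
# The last two entries are assumptions; the story gives no exact figures.
delays = {
    "requirements email back-and-forth": 1,
    "storage team": 2,
    "network team": 4,
    "virtualization team": 2,
    "software team": 3,
    "security and compliance team (assumed)": 2,
    "CMDB registration (assumed)": 1,
}

total_days = sum(delays.values())
print(f"Total: {total_days} business days (~{total_days / 5:.1f} weeks)")
# Total: 15 business days (~3.0 weeks)
```

Because every step waits on the one before it, each team's queue time adds directly to the total, which is how a five-minute VM clone turns into a three-week delivery.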

Circumventing IT Completely with Shadow IT

Or, maybe that developer got tired of waiting, and after two days went around the entire IT team and ordered an instance from AWS that took five minutes to provision. The developer was so excited about getting a resource that quickly that they bragged to fellow developers, who in turn started to use AWS.

Negative Effects on IT and the Business

Either way, this is a scenario that plays out repeatedly, and I’m amazed at how frequently it plays out just like this. The result might initially appear to be just some shadow IT, or maybe some VM sprawl from unused deployed instances; however, the potential damage to both the IT organization and the business is far greater.

First, it looks bad when users frequently circumvent the IT organization. Such actions call into question the IT organization’s ability to effectively serve the business, and thus strike at the very heart of the IT group’s relevance.

Furthermore, the IT consumers are the business. Ensuring that users have access to resources in near-real time should be a goal of every IT org, but IT teams and processes cannot be adjusted and transformed as quickly as demand changes. This means that the IT org cannot respond with enough agility to continually satisfy business needs, which in turn potentially means more money is spent to provide less benefit, or even worse, the business misses out on key opportunities.

IT shops need to move beyond simple virtualization and virtualization management. Why? Improved virtualization management alone cannot solve all of the problems presented in the scenario above while (and this is key) also providing for continued growth. Tools that only manage virtualization solve only part of the problem, because they are unable to properly unify the provisioning process around software (by going beyond plain template libraries with tools like HPSA, Puppet, or Chef) and other external mechanisms (like a CMDB). In order to fully modernize and adapt existing processes and teams to a cloud/service-oriented business model, all aspects of the provisioning process must be automated. It’s the only way an IT organization can hope to stay responsive enough, and avoid being locked into one particular solution, such as a single-vendor approach to virtualization. A well-designed and implemented Cloud Manager will give an IT org the freedom to choose the best underlying technology for the job, without regard for how it will be presented to the end user.
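As a rough sketch of what automating all aspects of the provisioning process means in practice, here is a minimal Python outline of a single-pass pipeline. Everything here is hypothetical: each step function stands in for a real API call (storage array, IPAM, hypervisor, Puppet/Chef, CMDB) that in the manual scenario was a ticket handoff between teams. The point is that the steps run as one orchestrated flow:

```python
# Hypothetical single-pass provisioning pipeline. Each function below is a
# stand-in for an API call that, in the manual workflow, was a separate
# ticket handed between teams.

def allocate_storage(request):
    # e.g. carve out a datastore slice on the storage array
    return {"datastore": "ds-dev-01", "size_gb": request["storage_gb"]}

def configure_network(request):
    # e.g. select a VLAN and reserve an IP address from IPAM
    return {"vlan": 110, "ip": "10.0.110.25"}

def create_vm(request, storage, network):
    # e.g. call the hypervisor API to clone a template
    return {"name": request["name"], "cpus": request["cpus"],
            "ram_gb": request["ram_gb"], **storage, **network}

def install_software(vm, packages):
    # e.g. hand off to a config-management tool instead of a manual install
    vm["packages"] = packages
    return vm

def register_in_cmdb(vm):
    # e.g. POST the final configuration to the CMDB
    vm["cmdb_id"] = f"CI-{vm['name']}"
    return vm

def provision(request):
    storage = allocate_storage(request)
    network = configure_network(request)
    vm = create_vm(request, storage, network)
    vm = install_software(vm, request["packages"])
    return register_in_cmdb(vm)

server = provision({"name": "dev-web-01", "cpus": 2, "ram_gb": 8,
                    "storage_gb": 100, "packages": ["nginx"]})
print(server["cmdb_id"])  # CI-dev-web-01
```

Collapsing the handoffs into one flow means the elapsed time is the sum of API calls, measured in minutes, rather than the sum of team queue times, measured in weeks.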

Either way you look at it, IT organizations need a solution which will allow them to utilize as much of their existing assets as possible while still providing the governance, security, and serviceability needed to ensure the company’s data and services are well secured and properly supported.

The Solution

Thankfully, there’s just such a Cloud Manager. CloudBolt C2 is built by a team with decades of combined experience in the systems management space, and was created from the beginning to solve this exact problem. Because we started from the first line of code to solve this entire problem, we call ourselves the next-generation cloud manager, but our customers call it a game changer. Give it a download and effortless install today, and we’ll show you that CloudBolt C2 means business.

Read More

Topics: Customer, IT Challenges, Management, Virtualization, Cloud Manager, Shadow IT, Agility

Build a Private Cloud on Top of a Virtualized Network

Posted by Justin Nemmers

3/18/13 11:40 AM

Let’s face it. Networks are a pain to implement, maintain, and debug. Additionally, they’re often viewed as fragile enough that many teams generally wish to avoid routinely poking at them by messing with configurations or frequently creating/deleting VLANs.

Implementing a flexible and scalable private cloud on an inflexible network only undermines that flexibility and scalability as the environment grows.  In addition, ongoing management of these environments can quickly become difficult when administrators lack the ability to easily restrict network access by group, or to rapidly create new stand-alone networks for a specific application, group, or requirement.

Separate the logical from the physical network.  Network virtualization does for networks what server virtualization did for servers. You can't talk virtualization management without also talking about network virtualization management.

Enter network virtualization!  When implemented in your environment, and made consumable by a Cloud Manager, network virtualization suddenly breaks the network stack wide open.  In fact, I’d argue that until you virtualize the network, even private cloud alone is only partly useful.  Why?  Well, for several reasons:

  • Private clouds alone are limited in their ability to meet capacity demands. 
  • Eventually, that private cloud will run out of data center space, or will need to otherwise expand out of its shell. 
  • Whether your private cloud is fully on-prem, or you’re using a virtual private cloud model from someone like Amazon Web Services (AWS), the inflexibility of the networking layer can make unifying environments a difficult hurdle to surmount. 

Let’s expand on this AWS example.  Amazon offers a Virtual Private Cloud (VPC) that is essentially a private cloud hosted in the public cloud. Confused yet?  Don’t be. AWS uses advanced network and security parameters to effectively cordon off your cloud-based VMs from other tenants, allowing for secure communication and private networking in your hosted private cloud. They do this by manipulating the network layers in the hypervisors. AWS’ use of networking, although advanced, has its limitations, though. For instance, although VPCs can span availability zones, separate regions may require separate VPC definitions, leaving the networking integration to the user. In those cases, your local facility will have to implement its own routes to properly send traffic to the correct VPC. Although you can certainly work through those limitations, a hosted private cloud like that is wholly dependent on AWS. 
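One concrete piece of that do-it-yourself integration is route planning: before your facility can route traffic to per-region VPCs, the VPC CIDR blocks must not overlap each other or your on-prem ranges. Here is a small sketch using Python's standard `ipaddress` module; the address plan itself is made up for illustration:

```python
import ipaddress

# Hypothetical address plan: one on-prem range plus a VPC per AWS region.
networks = {
    "on-prem":       "10.0.0.0/16",
    "vpc-us-east-1": "10.1.0.0/16",
    "vpc-us-west-2": "10.2.0.0/16",
}

def find_overlaps(nets):
    """Return name pairs whose CIDR blocks overlap."""
    parsed = {name: ipaddress.ip_network(cidr) for name, cidr in nets.items()}
    names = sorted(parsed)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if parsed[a].overlaps(parsed[b])]

print(find_overlaps(networks))  # [] -> safe to add a static route per VPC
```

An empty result means each VPC can get its own non-conflicting static route from the local facility; any pair returned would have to be renumbered before the environments could be unified.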

It doesn’t get any easier when your private cloud is completely on-prem. Be it demand growth, or a shift in requirements or priorities, networking is likely to be one of the significant bottlenecks in the growth and success of your private cloud.  

This is why a technology like network virtualization is so important. Implementing network virtualization in a private cloud environment (be it greenfield, or layered into an existing brownfield environment) allows you to approach new requirements with flexibility in mind and little concern over the networking infrastructure. Just make sure that your underlying network has the Layer 2 capacity for required traffic, and then start to build your environment above that.

In order to attain the flexibility of network virtualization on top of your private cloud, you need effective management. This goes beyond creating a handful of networks and handing them over to users.  Understanding which networks are required by which users and groups, and then ensuring that access is properly controlled, is more than critical: it’s a requirement that must be met, or the network will remain a significant impediment to growth. Especially when it is time to expand the reach of your private cloud (whether that means adding capacity, layering in additional technologies, or securely and safely making use of public cloud resources: congrats, you now have a hybrid cloud!), management of the entire stack is an imperative part of the solution. Deploy applications, resources, and networks all in one pass, no matter the environment. That’s the promise of network virtualization. CloudBolt makes it usable.

Read More

Topics: Network Virtualization, Software Defined Network, Management, Implementation, AWS