
CloudBolt Blog

Private/Public Cloud to Cloud Migration: VM Redeployment vs. Migration

Posted by Justin Nemmers

10/7/14 12:06 PM

We get the question all the time… “Can CloudBolt move my VMs from my private cloud to Amazon... or from Amazon to Azure?"

The answer is the same. “Sure, but how much time do you have?”

Cloud-based infrastructures are revolutionizing how enterprises design and deploy workloads, enabling customers to better manage costs across a variety of needs. Often-requested capabilities like VM migration (or, as VMware likes to call it, vMotion) are taken for granted, and customers are increasingly interested in extending these once on-prem-only features to move workloads from one cloud to another.


At face value, this seems like a great idea. Why wouldn’t I want to be able to migrate my existing VMs from my on-prem virtualization environment directly to a public cloud provider?

For starters, it’ll take a really long time.

VM Migration to the Cloud

Migration (relocation is probably a better term) is the physical movement of a VM and its data from one environment to another. Migrating an existing VM to the cloud requires:

  1. Copying every block of storage associated with the VM.
  2. Updating the VM’s network info to work in the new environment.
  3. Lots and lots of time and bandwidth (See #1).

Let’s assume for a minute that you’re only interested in migrating one application from your local VMware infrastructure to Amazon. That application is made up of 5 VMs, each with a 50 GiB virtual hard disk. That’s 250 GiB of data that needs to move over the wire. (Even if you assume some compression, the numbers below show we’re still dealing with large figures.)

At this point, there is only one question that matters: how fast is your network connection?
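
If you want to sanity-check the table below against your own connection, the arithmetic is simple division. Here is a minimal Python sketch of that back-of-the-envelope math (it loosely treats 1 MiB as 1 MB, as the table does, and ignores protocol overhead, retries, and compression):

```python
def transfer_time(size_gib: float, upload_mbps: float) -> dict:
    """Back-of-the-envelope upload time for a payload at a given link speed."""
    size_mb = size_gib * 1024          # GiB -> MB, loosely treating MiB == MB
    speed_mb_per_s = upload_mbps / 8   # megabits/s -> megabytes/s
    seconds = size_mb / speed_mb_per_s
    return {"seconds": seconds, "hours": seconds / 3600, "days": seconds / 86400}

# 5 VMs x 50 GiB each over a 1.5 Mb/s uplink: roughly 16 days of continuous upload
print(transfer_time(250, 1.5))
```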

| Transfer Size (GiB) | Upload Speed (Mb/s) | Upload Speed (MB/s) | Transfer Time (Seconds) | Transfer Time (Hours) | Transfer Time (Days) |
|---------------------|---------------------|---------------------|-------------------------|-----------------------|----------------------|
| 250                 | 1.5                 | 0.1875              | 1,365,333               | 379.26                | 15.80                |
| 250                 | 10                  | 1.25                | 204,800                 | 56.89                 | 2.37                 |
| 250                 | 100                 | 12.5                | 20,480                  | 5.69                  | 0.24                 |
| 250                 | 250                 | 31.25               | 8,192                   | 2.28                  | 0.09                 |
| 250                 | 500                 | 62.5                | 4,096                   | 1.14                  | 0.05                 |
| 250                 | 1000                | 125                 | 2,048                   | 0.57                  | 0.02                 |
| 250                 | 10000               | 1250                | 205                     | 0.06                  | 0.002                |

The takeaway from this chart is clear: the upload speed of your Internet connection is the only thing that matters. And don’t forget that cloud providers frequently charge you for that bandwidth, so your transfer cost grows with every gigabyte you’d like to upload.

Have more data to migrate? Then you need more bandwidth, more time, or both.

If you want to do this for your entire environment, note that you’re effectively performing SAN mirroring. The same rules of physics apply, and while you can load a mirrored rack of storage on a truck and ship it to your DR site, most public cloud providers won’t line up to accept your gear.

The Atomic Unit of IT Is the Workload, Not the VM

When customers ask me about migrating VMs, they typically want to run the same workload in a different environment, whether for redundancy, better fit, or cost. If it’s the workload that’s important, why migrate the entire VM?

Componentizing the workload can take work, but automating the application deployment with tools such as Puppet, Chef, or Ansible will make it much easier to deploy that workload into a supported environment.
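
As a rough sketch of what that buys you (the playbook name and inventory paths here are hypothetical), a componentized workload described in an Ansible playbook can be redeployed by pointing the same playbook at a different inventory:

```python
import subprocess

# Hypothetical inventories, one per target environment
INVENTORIES = {
    "on-prem-vmware": "inventories/vmware",
    "aws-us-east-1": "inventories/aws",
}

def redeploy(playbook: str, target: str) -> None:
    """Run the same workload definition against whichever environment you choose."""
    subprocess.run(
        ["ansible-playbook", "-i", INVENTORIES[target], playbook],
        check=True,  # fail loudly if the deployment fails
    )

# Same workload definition, two very different clouds
redeploy("app_stack.yml", "on-prem-vmware")
redeploy("app_stack.yml", "aws-us-east-1")
```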

Redeployment, Not Relocation

If migrating whole stacks of VMs to the cloud isn’t practical, how does an IT organization more effectively redeploy workloads to alternate environments?

Workload redeployment requires a few things:

  1. Mutually required data must be available (e.g., a shared database);
  2. A configuration management framework available in each desired location; or
  3. Pre-built templates that have all required components pre-installed.

I won’t spend the time here talking through all of these points in detail, but I will say that any of these options requires effort. Whether you’re working to componentize and automate application deployment and management in a CM/automation tool, or re-creating your base OS image and requirements in various cloud providers, you’re going to spend some time getting the pieces in place.
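
For the pre-built template route, a multi-cloud library such as Apache Libcloud can smooth over provider differences. A minimal sketch, assuming placeholder credentials and template IDs that you would swap for your own:

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
from libcloud.compute.base import NodeImage

def provision(provider, key, secret, image_id, size_id, name):
    """Boot a node from a pre-built template on a Libcloud-supported cloud."""
    driver = get_driver(provider)(key, secret)
    image = NodeImage(id=image_id, name=None, driver=driver)  # pre-built template
    size = [s for s in driver.list_sizes() if s.id == size_id][0]
    return driver.create_node(name=name, image=image, size=size)

# Placeholder IDs -- the same call shape works for EC2, OpenStack, and others
node = provision(Provider.EC2, "ACCESS_KEY", "SECRET_KEY",
                 "ami-00000000", "m3.medium", "web-01")
```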

A possible alternative to VM migration is to deploy new workloads in two places simultaneously, and then ensure that needed data and resources are mirrored between the two environments. In other words: double your costs, and incur the same data-syncing challenges. This approach likely only makes sense for the most critical production workloads, not for the standard developer.

Ultimately, Know Thy Requirements

It seems as though the concept of cloud has caused some people to forget physics. Although migrating/relocating existing VMs to a public cloud provider is an interesting concept, the bandwidth required to effectively accomplish this is either very expensive, or simply not available. Furthermore, VM migration to a public cloud assumes that the performance and availability characteristics of the public cloud provider are the same or better than your on-prem environment… which is a pretty big assumption.

While there are some interesting technologies that are helping with this overall migration event, customers still need to do the legwork to properly configure target environments and networks, not to mention determine which workloads can be effectively moved in the first place. Technology alone cannot replace sound judgment and decision making, and the cloud alone will not solve all of your enterprise IT problems.

And don’t forget that IT governance in the public cloud is much more important than it is in your on-prem environment, because your end users are unlikely to generate large cost overruns when deploying locally. If you don’t control their access to the public cloud, you will eventually get a very rude awakening when you get that next bill.

Want Some Help?

So how does CloudBolt actually satisfy this need? We focus on redeployment and governance. One application, as provided by a CM tool, can be deployed to any target environment. CloudBolt then allows you to define multi-tiered application stacks that can be deployed to any capable target environment. Your users and groups are granted the ability to provision specific workloads/applications into the appropriate target environments, and networks. And strong lifecycle management and governance ensures that your next public cloud provider bill won’t break the bank.

Want to try it now? Let us set you up with a no-strings-attached demo environment today.

Schedule a Demo or try it yourself

Read More

Topics: Network, Cloud, Challenges

IT Organizations Want Cloud, but Need IT Self Service. Here's Why.

Posted by Justin Nemmers

10/30/13 9:20 AM

Most end users of IT Organization services have one thing in common: They just want access to the resources they need. They don’t like waiting, and frankly, the more you as an IT organization make them wait, the more likely they are to just go around you and create a nice little shadow IT environment. And even if they don't branch off on their own, they're likely to let you and others know they're not happy.

My lovely daughter reminds me of some users I've worked with. They'll definitely let you know when they are not happy, and you'll probably come to regret whatever it was you did to piss them off.

In my travels, it seems to me that most IT organizations get—at least at some level—that they need to improve the level of service to the end user, but from there, they tend to lose their way about how to actually make that happen.

Path to the Cloud?

IT’s typical response to end user pain is almost universally “Cloud!” This is great, except that the term “Cloud” has moved from maximum hype to beyond meaningless. The core issue is that once the concept of building a “Cloud” enters the conversation, IT organizations invariably start long and convoluted planning processes about how they’ll re-engineer the entire environment, what they need to buy, and the services they need to implement it. Not to mention fabricating a list of everything else the “Cloud” supposedly requires. There will be negotiations, and there will be fiefdoms resistant to change. There will be arguments and disagreements.

IT will finally reach an agreement and begin implementing a large solution stack that takes lots of contracting, money, and professional services. Many months later, even in the most agile organizations, IT will have something they can show to the end user.

All the while, end users have been patiently waiting. Even if they’ve been involved in the Cloud planning process, after months with little to no improvement they wonder how a seemingly simple request of “we just want our resources now instead of later” turned into such a massive engineering effort.

Tactical Quick-Win

I’ve written before about how IT and Business speak different languages. The end users want IT Self Service. The IT Organization takes that requirement and rolls it into a larger cloud strategy, delaying and over-complicating a simple need.

IT Self Service is at the core of a cloud-enabled organization. What IT fails to understand is that there is significant value in providing a tactical quick-win capability to end users in need. IT values not having to replicate a bunch of work by implementing a tool that can’t grow and mature as their cloud adoption strategy takes shape. End users just want IT Self Service. IT needs and very much wants to ensure that deployed VMs and applications are governed and backed by policy that ensures they’re secure, effectively tracked, and accountable to specific users and groups. And once again, end users just want IT Self Service. IT wants to ensure they remain in control of their environment. After all, their jobs depend on it.

Just in case you haven’t picked up on my theme yet, end users don’t care a lick about Cloud. They just want to be able to get near-immediate access to resources they need to get their jobs done. Cloud to them could mean any one of a thousand different things—most of which are meaningless in the realm of IT.

In the hundreds of customer conversations I’ve had since I started with CloudBolt, one theme is pretty common: many IT teams think that IT Self Service alone is insufficient for them to be successful and relevant. Successful IT organizations, however, know better: IT Self Service isn’t just important, it’s everything when it comes to improving the interaction with end users.

Goal: Positively Impact Users

IT Organizations that are embroiled in a long and complicated Cloud strategy and implementation cycle must take steps to rapidly improve the level of service to lines of business. A tactically focused implementation of an IT Self Service software tool that offers immediate benefit to end users and lines of business will go far to placate angry and disillusioned end users. This quick win helps keep the IT Organization as a whole relevant. It is certainly important that your IT Self Service solution not only be quick to install but also expand its capabilities as you broaden your organization’s approach to “Cloud”; that immediate response to your IT consumers, though, is paramount to the IT group remaining relevant to the overall organization.

Our CEO John Menkart previously wrote about how IT Organizations need to mature to become broker/providers of resources. The subtext of this is that it is the IT organization that decides who can run what, and where they can run it. End users again have little concern about where something is deployed; just that it is rapidly deployable, and meets performance, access, and (occasionally) cost metrics.

IT Organizations can also fold public cloud capabilities into their IT Self Service Portal. To that end, IT Orgs with capable IT Self Service portals are much closer to hybrid cloud than they likely think.

Expanding on the normal wins from IT Self Service, controlled IT Self Service offers the needed level of policy-backed, automated provisioning while also ensuring that IT process and procedure are always followed. Reporting enables IT organizations to provide critical metrics back to lines of business in ways that have not previously been discoverable or reportable. Lines of business need this visibility and have in most cases been frustrated for some time by IT’s inability to provide it. (Yet another big win for the business.)

The scenario gets even better when the IT Organization realizes that a good Controlled IT Self Service Portal will actually afford them the control they need, while offering a quick-win, “your-life-is-getting-better-now” solution to end users.

So sit back, and think about what your goals with Cloud are, vs. what they should be. Then give us a call. We'd love to demo the C2 Controlled IT Self Service portal for you.

Schedule a Demonstration

Read More

Topics: Challenges, People, IT Self Service

Automation of the Trinity: Virtualization, Network, and Security

Posted by Justin Nemmers

6/28/13 3:51 PM

Danelle Au wrote an excellent article for SecurityWeek that is essentially a case study for why organizations need CloudBolt C2 in their environments. She talks about how, at scale, the only way to achieve the needed environment security is with significant automation, making the key point that “automation and orchestration is no longer a ‘nice to have.’”


Yep. It’s a must. A requirement.

In her description of a manual provisioning process, Danelle accurately points out that there are numerous variables that need to be accounted for throughout the process, and that one-off choices, combined with human error can often open up organizations to broader security issues.

In order to achieve the “trinity” (as Danelle calls it) of “virtualization, networking and security”, a tool must have domain knowledge of each of the separate toolsets that control those aspects. Tools like vCenter, RHEV, or Xen handle Virtualization Management (just to name a few). Each of those tools also has some level of their own networking administration and management, but a customer might also be looking to implement Software Defined Networking that’s totally separate from the virtualization provider. So now couple Virtualization Management with a tool such as Nicira, or perhaps Big Switch Networks, and the picture only grows more complicated.

Security, the last pillar of this trinity, is really the most difficult, but absolutely the one that benefits not just from automation, but also strict permissions on who can deploy what to where on what network. Automation might be able to grasp the “deploy a VM onto this network when I press this button” concept, but you need something quite a bit smarter when you take a deeper look at the security impacts of not just applications, but which systems they can be deployed on, in which environments.

So how do you expect admins to juggle this, with 1,000 different templates covering all the permutations of application installs in the virt manager? It’s probably not sustainable, even with a well-automated environment.

What is an admin to do? Well, for starters, admins use Data Center automation/Configuration Management tools like Puppet, Chef, HP Server Automation, GroundWorks, and AnsibleWorks, to name a few. But in order to fully satisfy the security requirement, those applications and tools must also be fully incorporated into the automation environment. And then governed, to make sure that the production version of application X (which potentially has access to production data) can never be deployed by a QA admin into the test environment. An effective automation tool must be able to natively integrate with the CM tool as well; otherwise, that governance breaks down at exactly the layer where it matters most.
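
To make that governance rule concrete, here is a minimal, purely hypothetical policy check (the roles, build names, and environments are invented for illustration):

```python
# Hypothetical allow-list: (role, build) -> environments that pair may deploy to
POLICY = {
    ("qa-admin", "app-x-test-build"): {"test"},
    ("release-eng", "app-x-prod-build"): {"prod"},
}

def can_deploy(role: str, build: str, env: str) -> bool:
    """True only if this role may push this build into this environment."""
    return env in POLICY.get((role, build), set())

# The scenario from the text: a QA admin must never land prod bits in test
assert not can_deploy("qa-admin", "app-x-prod-build", "test")
assert can_deploy("qa-admin", "app-x-test-build", "test")
```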

And Danelle’s point of view was largely from the private cloud. What happens when it’s private cloudS, not cloud? And let’s not forget about AWS and its compatriots. Adding multiple destinations and target environments can drastically increase the complexity.

I do, however, have one glaringly huge issue with one of her comments: “It may not be sexy…” I happen to think that “The ability to translate complex business and organization goals” is more than a little sexy. It is IT nirvana.

Read More

Topics: Software Defined Network, Challenges, Automation

7 Takeaways From the Red Hat Summit

Posted by Justin Nemmers

6/19/13 8:27 AM

Part of the CloudBolt team at Red Hat Summit 2013. Sales Director Milan Hemrajani took the picture.

A few sales folks and I have returned from a successful Red Hat Summit in Boston, MA. With over 4,000 attendees, we were able to leverage an excellent booth position to talk to many hundreds of people. One of the things that I love about my role here at CloudBolt is that I am constantly learning. I particularly enjoy speaking with customers about the types of problems they run across in their IT environments, and I take every chance I can to learn more about their IT challenges. Some of these are common themes we hear a lot here at CloudBolt, and a few were a bit surprising, as some organizations are still in earlier stages of their modernization efforts than I would have expected.

  1. Not everyone has heavily virtualized his or her enterprise.
    Sure, there are some environments where virtualization doesn’t make a lot of sense—such as parallelized but tightly CPU-bound workloads, or HPC environments. But what surprised me was the number of organizations I spoke with that made little or very limited use of virtualization in the data center. It’s not that they didn’t see the value of it; more often than not, they still made use of Solaris on SPARC, or had old-school management that had not yet accepted that running production workloads on virtualized servers has long been common practice. For these folks and others, I’d like to introduce a topic I call “Cloud by Consolidation” (in a later blog post).
     
  2. Best-of-Breed is back.
    Organizations are tired of being forced to use a particular technology just because it came with another product, or because it comes from a preferred vendor. Forcing an ill-fitting product on a problem often results in longer implementation times, which consume more team resources than simply implementing the right technology for the problem at hand. Your mechanic will confirm that the right tool makes any job easier. It’s no different with enterprise software.
     
  3. Customers are demanding reduced vendor lock-in.
    IT organizations have a broad range of technologies in their data centers. They need a cloud manager that can effectively manage not just what they have installed today, but what they want to install tomorrow. For example, a customer might have VMware vCenter today but is actively looking at moving more capacity to AWS. Alternatively, they have one data center automation tool and are looking to move to another (see #4 below). Another scenario is not having to wait for a disruptive technology to be better supported before implementing and testing it in your own environment—while still managing it with existing technology. Good examples:
    • The gap between CloudForms (formerly ManageIQ) and its ability to manage OpenStack implementations
    • Nicira Software Defined Networking and the ability to manage it with vCloud Automation Center (vCAC, formerly DynamicOps)
    Either way, customers are tired of waiting as a result of vendor lock-in.
     
  4. Customers are increasingly implementing multiple Data Center Automation (DCA) tools. 
    This is interesting in that it used to be that an IT organization would purchase a single DCA technology and implement it enterprise-wide. I was surprised by the number of customers actively pursuing a multiple-DCA strategy in their environments. Our booth visitors reported that they primarily used HP Server Automation, and to a lesser extent BMC’s BladeLogic. Puppet and Chef were popular tools that organizations are implementing in growth or new environments—like new public cloud environments. Either way, these customers see definitive value in using CloudBolt C2 to present DCA-specific capabilities to end users, significantly increasing the power of user self-service IT while at the same time decreasing complexity in the environment.
     
  5. Lots of people are talking about OpenStack. Few are using it.
    For every 10 customers who said they were looking at OpenStack, all 10 said they were not yet using it. There’s certainly been an impressive level of buzz around OpenStack, but we haven’t seen a significant number of customers that have actually installed it and are attempting to use it in their environments. I think Red Hat’s formal entry into this space will help, because they have a proven track record of taming a seemingly untamable mix of rapidly-changing open source projects into something that’s supportable in the enterprise. This does not, however, mean that customers will be making wholesale moves from their existing (and largely VMware-based) virtualization platforms to OpenStack. Furthermore, there is still significant market confusion about what Red Hat is selling. Is it RHEV? Is it OpenStack? Do I need both? These are all questions I heard more than once from customers in Boston.
     
  6. Open and Open Source aren’t the same thing.
    I spent enough years at Red Hat to know that this is the case, and I feel it’s extremely important to mention it here. Many customers told us that they wanted open technologies—but in these cases, open meant tools and technologies that were flexible enough to interoperate with a lot of other technologies and reduce overall vendor lock-in. Sure, an Open Source development model can be a plus, but customers were most interested in their tech working, working well, and working quickly.
     
  7. Most IT Orgs want Chargeback, but few businesses are willing to accept it.
    Thus far, the only groups I’ve chatted with who actually use some chargeback mechanism are service providers with external customers. Pretty much every other IT Organization faces significant pressure against chargeback from the businesses they support. Showback pricing helps counter this resistance, and over time should help more IT organizations win the battle over chargeback. IT Organizations should be leaping at the chance to collect and report per-group or per-project costs. It’s a critical piece of information that businesses need to make effective decisions. Business-Driven IT has been a necessary step in the evolution of IT for a long, long time. IT needs to get with the program and make visible to the business the types of information the business needs to make effective decisions. And on the flip side, the business needs to get with the program and accept that their teams and projects will be held responsible for their IT consumption. (A toy showback sketch follows this list.)
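
A showback report does not need to be elaborate to be useful. Here is a toy sketch, with invented usage records and a made-up flat rate:

```python
from collections import defaultdict

# Invented usage records: (group, vm, vcpu_hours, dollars_per_vcpu_hour)
USAGE = [
    ("marketing",   "web-01",    720, 0.05),
    ("engineering", "build-07", 1440, 0.05),
    ("engineering", "ci-02",     360, 0.05),
]

def showback(usage):
    """Aggregate per-group cost so the business can see what it consumes."""
    totals = defaultdict(float)
    for group, _vm, hours, rate in usage:
        totals[group] += hours * rate
    return dict(totals)

print(showback(USAGE))  # {'marketing': 36.0, 'engineering': 90.0}
```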

So how do you get started recognizing the value of integrating IT with the business? Start here.

We’re looking forward to exhibiting at the next Red Hat Summit, slated for San Francisco’s Moscone North and South exhibition center. And if you thought we made a big splash at this year’s summit… just wait until you see what we have in the works!

Read More

Topics: Virtualization, Cloud, Enterprise, Red Hat, Challenges, Vendors

Have Your Cloud Vendors Spent Time in the Data Center?

Posted by Justin Nemmers

5/30/13 5:34 PM


As part of a project kickoff meeting yesterday, I walked through a massive data center in Northern VA. It’s the same one that houses huge portions of the Amazon Web Services’ us-east region, amongst nearly every other major ‘who’s who’ of the Internet age.

Several of the people who accompanied me on this tour marveled at both the sheer scale and the seemingly strict order of the cages, racks, and hallways. Seeing it all through the eyes of folks who had not been in a data center before got me thinking about a simple question: “Has your vendor spent time in a data center?”

I pose this question both literally and figuratively. For enterprises, the data center is more than just a location. The data center encompasses not just a location, but business logic, processes, software, licenses, infrastructure, personnel, technology, and data. Saying something is “in the data center” imparts a certain gravity, meaning that a person has implied capability, responsibility, and knowledge. For a technology, being in the data center means that it’s likely a critical component of the business. By being “in the data center”, a technology has most likely met numerous standards for not just functionality to the business, but also reliability and security.

When it comes to IT environments, then, there are really two categories: the technologies, people, and businesses that have experience working in a data center, and everyone and everything else. Nowhere is this notion more important, or more true, than in Enterprise IT.

Innovation happens in the data center because of the unique problems encountered with IT at scale. If a vendor is not familiar with the types of issues organizations face at data center scale, they’ll likely discover numerous limitations in capability and process alike. More insidious still, such a vendor may not understand how IT organizations interact with and manage the data center environment in the first place.

Actual results may vary, but I’d venture that many solutions born outside the data center cost more to implement and have thornier integration issues than promised. That is the likely result of forcing IT organizations and the business to wholly change their approach, rather than presenting a technology that fuses well with existing process and letting those organizations choose when and how to evolve.

IT organizations need solutions born in and for the data center. Looking to a team that has significant experience building, managing, selling to, and supporting the data center environment can be a significant benefit to IT organizations. Thankfully, CloudBolt is just one such company with substantial data center experience. This results in C2 being designed and built with the data center in mind. This has several effects:

  • For one, we’ll understand your actual problem, not the problem we, as a vendor, want you to have.
  • Two, we’ll be disinclined to wedge our product into places where it doesn’t fit well, because we know what it takes to support a product in the enterprise, and we definitely don’t want to support an ill-fitting product in a data center.
  • Lastly, we’ll allow you to implement the new tool while continuing business-as-usual. No sweeping, massive change required up front.

Collectively, our team has spent over 40 person-years in the data center. It shows in how we interact with customers, and it definitely shows in our product. 

Why not give our whitepaper a once-over, and then take C2 for a test drive?

Read More

Topics: Enterprise, Challenges, Vendors, Data Center

The 5 Cloud Management Vendor Categories: Where Does Your Vendor Fit?

Posted by John Menkart

4/8/13 3:51 PM

With Cloud Managers assuming such a critical role for IT groups, it is easy to understand why every existing IT vendor wants to supply a Cloud Manager that favors their core products in the roll out of an enterprise private/hybrid Cloud.

Where does your Cloud Manager fit amongst the available choices?

The Gartner Group has studied private/hybrid cloud management extensively and summarizes the space as having five categories of vendors with “solutions” for Cloud Management, as described in its research note “Cloud Management Platform Vendor Landscape,” published 5 September 2012:

1) Traditional IT Operations Management Vendors

This segment of the Cloud Manager market includes vendors whose management focus has traditionally targeted physical and virtual infrastructures (BMC, HP SW, IBM, CA, and others).

2) Infrastructure Stack Vendors

In this segment of the Cloud Management market are providers of the virtual infrastructure resources (Citrix Systems [Citrix], Microsoft, Oracle, Red Hat and VMware) —the hypervisor and basic virtualization management. While some of these vendors offer some multiplatform (hypervisor or OS) capability, their expertise and deep integration are for their own platforms.

3) Fabric Based Infrastructure Vendors

Most hardware infrastructure vendors offer cloud management software, which enables them to sell private and hybrid cloud solutions and not just the features and benefits of their hardware. (HP, IBM, Cisco, etc.). Think wholly contained racks of equipment that include storage, compute, network, and software, sold in pre-integrated chunks.

4) Open Source

These projects or vendors provide an open-source-based abstraction layer for resource management. They provide basic CMP functionality and generally provide a northbound API so that other vendors/independent software vendors (ISVs) can develop and build enriched CMP capabilities.

5) Best-of-Breed Point Solutions Vendors

The point solution Cloud Management vendors, which include mostly smaller Cloud Management companies, potentially are able to introduce innovation to the market. This is primarily because these vendors don't have legacy products that have to be integrated to build their solution.

 

Examining these categories raises some concerns about vendor motivations and the resulting limits placed on customers who adopt some of these solutions.

The first three categories of vendors have a clear mission: maintain and advance the dependency that IT organizations have on their core technology. A primary reason for adopting Cloud Management is enabling flexibility for future IT choices, yet selecting a Cloud Management solution from vendors in categories 1 through 3 has the effect of restricting the customer’s choice and flexibility due to biased technology support.

In order to gain full functionality from the offering, all the vendors in these three categories mandate use of a suite of software and/or hardware from the vendor’s own portfolio. These requirements will hamper the organization that adopts such a Cloud Management solution. Rather than being free over time to adopt new technologies like Network Virtualization, IT organizations will be limited to continuing to feed their “Cloud Management” vendor large sums of the IT budget for software and hardware, ensuring they are now “locked in” as a result of a biased Cloud Management approach. These large vendors have a term for what they are trying to achieve with the customer: “share of wallet.” Any vendor looking for more share of your wallet is not going to make it easy or flexible for your enterprise to adopt products or technologies that they do not provide.

Gartner views the fourth category (open source) with promise noting: These solutions “provide basic Cloud Management functionality and generally provide a northbound API so that other vendors/independent software vendors (ISVs) can develop and build enriched CMP capabilities.”

The options in this category are tools like OpenStack, CloudStack, and Eucalyptus. The relative immaturity of the technologies in this space is the reason Gartner sees the need for an API: so other, more refined and mature Cloud Managers can abstract users from these specific tools. By avoiding direct use of the cloud frameworks’ UIs, an organization can use a more complete Cloud Manager to integrate Cloud pilots undertaken with open source tools into the enterprise’s broader cloud approach.

The additional concern with these open source frameworks is that they are developed as monolithic technology stacks and bring their own technology for things like server virtualization and configuration management. Rather than being truly vendor- and technology-agnostic, they represent a considerable integration effort and encourage costly rip and replace.

So that leaves only one category of Cloud Management vendor that neither approaches the IT organization’s problem as an opportunity to lock in the customer, nor is too immature to deliver organizational value today.

The “Point Solutions” category is one where real unbiased solutions will be able to emerge. Like CloudBolt, other vendors in this category must deliver value in their own right. The products must account for heterogeneous resources in an IT environment and must stand on their own when considered as a solution.

The range of vendors offering independent solutions for cloud management is extensive, and the solutions are diverse: from products limited to organizations using only virtualization, to full-on enterprise offerings like CloudBolt Command & Control (C2) that cohesively manage and coordinate hardware provisioning, virtual servers, virtual networking, and configuration and automation tools (HPSA, Chef, Puppet, etc.).

I am sure I speak for all the point-solution vendors when I suggest that selection of a Cloud Management solution must be made with eyes wide open with respect to each vendor’s desired outcome. Increased “share of wallet” is not a technical objective. Only when your Cloud Management vendor is fully aligned with your organization will you be able to deliver the desired technical and business flexibility to the enterprise.

Want to learn more about CloudBolt C2? Download our Product Overview! 

Read More

Topics: Management, Cloud Manager, Gartner, John, Challenges, Vendors

Next-Generation IT and Greenfield Cloud Infrastructure

Posted by Justin Nemmers

3/12/13 3:06 PM

The problem is consistent. Consistently difficult, that is. As an IT manager, how does one implement new technology in an otherwise running, static environment? New technology decisions are not just difficult; the range of questions that arise from thinking through implementation plans can seem daunting.

Whether you’re talking about switching hardware vendors, or implementing something relatively new like network virtualization, how it’s implemented in your environment will often be more critical to the project’s success than the validity of the technology itself. 


Ideally, every environment would be brand new.  How many times have you asked yourself “Wouldn’t it be great if I could just scrap my current infrastructure and start over?”  Fundamentally, greenfield implementations like this are a good route to go for a number of reasons:

  • They allow you to select the best-of-breed and most effective technology to solve the problem at hand
  • You get the valuable opportunity to think about how the technology stack will scale in the future
  • They allow for rapid change while the environment is being built
  • Because there are few barriers, you have the opportunity to investigate other new and upcoming technologies, and you will have time to experiment 

A Cloud Manager provides significant value here. Using one to unify the management of a lab environment allows the rapid integration of new technologies—technologies that your IT teams need to learn and gain experience with before implementing them in production. A Cloud Manager eases the introduction of these technologies and unifies the management interface to make administration more predictable. Together, these tools help mold processes and the IT organization into a more agile group.

In my mind, one of the core issues here is that too few IT teams are able to think outside the box when it comes to implementing new tech. If greenfield implementations are easier than shoehorning new tech into your existing stack, why not give it a shot? Starting with a small base of gear and intelligently growing the installation over time is a great way to migrate capacity. I have an entirely separate blog post on migrating via attrition coming soon. In the meantime, go ahead and identify a few pieces of hardware, install your preferred virtualization tool, download CloudBolt C2, and start piecing together your future architecture. Once C2 is installed, you’ll be able to quickly layer in additional technologies like Data Center Automation, Network Virtualization, and even other virtualization or Public Cloud resources.

Happy integrating!

Read More

Topics: Virtualization, New Technology, Cloud Manager, Challenges, Implementation, Vendors, Development, Hardware