
CloudBolt Blog

Justin Nemmers

Recent Posts

App and Cloud Management: Nebula Private Cloud Added in v4.6

Posted by Justin Nemmers

10/27/14 2:22 PM

In our latest release, we've continued enabling IT organizations that want to provision and manage their applications more effectively. CloudBolt v4.6 makes providing self-service IT access to applications easier than ever, regardless of whether that application resides on a single server, or is a complete end-to-end stack of servers deployed across several environments. Like never before, users can interactively request entire stacks with just a few clicks, and deploy those stacks into any one of the dozen or so supported cloud and virtualization platforms.

We haven’t stopped there, though. In addition to streamlining the application provisioning process, we’ve put a significant amount of effort into other areas of CloudBolt as well. 

Nebula

We're also proud to announce an all-new connector for Nebula Private Cloud environments.

Image: CloudBolt now supports Nebula Private Cloud

Image: Add Nebula Resource Handler in CloudBolt Cloud Manager

With this new connector, CloudBolt customers gain the ability to deploy into and manage servers, applications, and even entire services in Nebula-backed environments. Nebula private cloud customers using CloudBolt gain immediate access to all of CloudBolt's features in both new and existing environments:

  • Chargeback and Showback
  • Reporting
  • Governance
  • Automated provisioning and management
  • Lifecycle management
  • Software license management
  • And more

Service Catalog

Customers have been using the CloudBolt Service Catalog to provide end users self-service access to entire application stacks for some time now. In v4.6, we've updated the service creation process to make it even more straightforward. Just as they can for the single-server ordering process, admins can alter how the service ordering process looks for different end users and deployment environments. End users can be prompted to enter specific information as necessary based on their desired target deployment environment.

Once a user places an order, CloudBolt’s built-in approval mechanism can be leveraged for additional validation before CloudBolt steps through any number of automated processes required to deliver a fully functional application stack.

The end result is clear: CloudBolt administrators can quickly create new service offerings that are able to span any supported target environment. Regardless of your platform of choice, CloudBolt can deliver a complete stack to your end users, and in less time than you think. 

Active Directory Group Mapping

Are you using one or more AD environments to authenticate CloudBolt users? In v4.6, admins gain the ability to map AD groups to CloudBolt groups. This AD group mapping also works with multiple AD environments, so if you're using CloudBolt in a multi-tenant capacity, you can still pick and choose how authentication is handled for each tenant.
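To make the idea concrete, the mapping is essentially a lookup from AD group names to CloudBolt groups and roles. The sketch below is purely illustrative; the group names, role names, and helper function are hypothetical, and the real mapping is configured in the CloudBolt admin UI rather than in code:

```python
# Illustrative only: a hypothetical mapping of AD groups to CloudBolt groups/roles.
# Names are invented; CloudBolt's actual configuration lives in the admin UI.
AD_GROUP_MAP = {
    # "AD group name": ("CloudBolt group", "role")
    "ENG-Developers": ("Engineering", "requestor"),
    "ENG-Leads":      ("Engineering", "approver"),
    "OPS-Admins":     ("Operations",  "group admin"),
}

def cloudbolt_memberships(ad_groups):
    """Return the CloudBolt (group, role) pairs implied by a user's AD groups."""
    return [AD_GROUP_MAP[g] for g in ad_groups if g in AD_GROUP_MAP]

# Example: groups pulled from one of several AD domains during login
print(cloudbolt_memberships(["ENG-Developers", "Domain Users"]))
# [('Engineering', 'requestor')]
```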

Orchestration Hooks

Orchestration Hooks enable IT administrators to automate every step needed to deliver IT resources and applications to end users. Extending this capability, we've added a new Orchestration Hook type that enables the execution of an arbitrary remote script.

Image: Add a hook for remote script execution

This further extends CloudBolt’s lead as the most powerful cross-platform application deployment and management platform, as it can now be seamlessly integrated into nearly any manually-scripted provisioning and management process. Reusing your existing IP has never been easier or faster. 
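To illustrate the kind of work a remote-script hook does for you, here is a rough sketch of running an existing script on a target server over SSH with paramiko. The function name, arguments, and host details are assumptions for illustration; this is not CloudBolt's hook plugin API, which also handles credentials, Windows targets, and job logging:

```python
import paramiko

def run_remote_script(host, username, key_path, script_path, args=""):
    """Run a script that already exists on a remote Linux server and return
    its exit code and output. Purely illustrative sketch."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, key_filename=key_path)
    try:
        stdin, stdout, stderr = client.exec_command(f"{script_path} {args}")
        exit_code = stdout.channel.recv_exit_status()
        return exit_code, stdout.read().decode(), stderr.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    # Hostname, user, key, and script path are placeholders.
    rc, out, err = run_remote_script("server01.example.com", "root",
                                     "/home/admin/.ssh/id_rsa",
                                     "/opt/scripts/post_provision.sh")
    print(f"exit={rc}\n{out}{err}")
```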

Connector Improvements

Discovering and importing current state from existing environments is a key strength of CloudBolt Cloud Management. In v4.6, this discovery is even more thorough: we now also detect all disk information from VMware vCenter virtual machines, as well as from AWS AMI-backed and Microsoft Azure public cloud instances.
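For a sense of what vCenter-side disk discovery involves, here is a minimal pyVmomi sketch that lists each virtual disk on a named VM. Connection details and the fields collected are assumptions for illustration; CloudBolt's own synchronization does considerably more:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def vm_disks(si, vm_name):
    """Return (label, capacity in GiB) for each virtual disk on the named VM."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    disks = []
    for vm in view.view:
        if vm.name == vm_name:
            for dev in vm.config.hardware.device:
                if isinstance(dev, vim.vm.device.VirtualDisk):
                    disks.append((dev.deviceInfo.label,
                                  dev.capacityInKB / (1024 * 1024)))
    view.Destroy()
    return disks

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab use only; don't skip cert checks in prod
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        print(vm_disks(si, "web01"))
    finally:
        Disconnect(si)
```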

Have a lot of VMs? Users in environments with tens of thousands of VMs will be happy to learn that VM discovery and sync is more efficient and faster. 

Puppet Enterprise users can now also leverage multiple Puppet environments rather than the default "Production". This can further help customers simplify their IT environments. 

Get It Now

CloudBolt Cloud Manager v4.6 is available today via the CloudBolt support portal. Upgrades are just another CloudBolt feature, and take mere minutes to complete.

Don't have CloudBolt yet?

Schedule a Demo or try it yourself

Topics: Public Cloud, VMware, Private Cloud, Upgrade, AWS, Puppet, Azure, Nebula

Integrating Chef Enterprise with CloudBolt

Posted by Justin Nemmers

10/21/14 2:23 PM

Integrating Chef Configuration Management with CloudBolt enables IT Organizations to offer end users a broad selection of Chef-provided Roles, Cookbooks, and Recipes for self-service provisioning and management right from the CloudBolt UI and API. Users can deploy a single server and application, or entire server and application stacks with just a few clicks.

In this video, Bernard Sanders from CloudBolt Engineering walks through the integration of Chef Enterprise with the CloudBolt cloud management platform. This includes importing Chef Enterprise Roles and Cookbooks into CloudBolt, enabling end users to directly provision and manage servers and applications in a VMware-backed environment.

Video: Integrating CloudBolt with Chef Enterprise Configuration Management

Like what you see? Get your own copy of CloudBolt here.


Topics: Chef, Video

Accelerate DevOps by Combining Automation and Cloud Management

Posted by Justin Nemmers

10/15/14 4:20 PM

The advent of DevOps in corporate IT has dramatically increased the value that Configuration Management tools (CM, also referred to as Configuration Automation or Data Center Automation tools) provide in a complex data center environment. Popular examples include Ansible, Puppet, and Chef. Whether your IT organization has implemented an end-to-end DevOps model, or you’re interested in implementing one, the unification of Cloud Management and Data Center Automation is a great way to ensure that your DevOps teams get the most out of IT-provided and supported services and resources.

At the core of highly productive and agile DevOps teams is rapid access to required resources, and the ability to control what is deployed where. Long wait times for resource provisioning will not just delay releases, but will also likely anger your team. On the other hand, granting the DevOps team unfettered access to on-prem virtualization and public cloud resources is a capacity planning and potential financial disaster just waiting to happen.

As DevOps automates more of the application management and provisioning process with tooling (Related posting: Why Manual Provisioning Workflows Don't Work Anymore), it becomes more critical to effectively integrate CM with the actual infrastructure. Providing end users and developers alike with access to DevOps work product becomes more complex and challenging.  

Image: DevOps and Cloud Management go together like peanut butter and jelly. Each makes the other more awesome. (Image Credit: Shutterstock)

So how does an IT organization achieve maximum value from the time and cost investment in these CM tools? By tightly integrating Cloud Management with their entire stack of CM tools.

Advantages

Using a cloud manager such as CloudBolt to integrate CM with the infrastructure provides immediate value.  By deploying both tools, IT can provide DevOps with: 

  1. Controlled access to required infrastructure, including networks, storage, and public cloud environments.
  2. A single API and UI capable of front-ending numerous providers, which means when IT changes cloud providers, DevOps doesn’t need to re-tool scripts and automations.
  3. Fully automated provisioning and management for real-time resource access. 

CloudBolt allows IT to natively configure and import application and configuration definitions, as well as automations, directly from your CM tool of choice. End users can then select the desired components and deploy them onto an appropriately sized system or systems in any environment.

IT organizations can put into place hard divisions between critical environments—such that only certain users and groups can deploy systems, services, and applications into specific environments. For instance, CloudBolt will prevent a developer from deploying a test app onto a system that has access to a production network and production data.

Results

Customers that have implemented CloudBolt are also able to choose from one or more CM tools based on the capabilities of each. Does one team prefer Puppet over Chef? Each team can be presented with a discrete slice of underlying infrastructure that makes use of their preferred CM tool(s).

The result is clear: more effective DevOps teams that spend less time wrangling access to resources, and more time getting their work done. IT is happy because CloudBolt enables them to improve governance of entire enterprise IT environments, and finally offers IT the ability to alter underlying infrastructure technology choices in ways that are fully abstracted from end users. By using a single CloudBolt API to access and deploy resources, DevOps isn’t disrupted when IT alters underlying infrastructure technology.

Interested? You can be up and running with CloudBolt today. All you need is access to a virtualization manager or a cloud platform, and less than 30 minutes.

Schedule a Demo  or try it yourself


Topics: Cloud Management, Automation, Puppet, Chef

Private/Public Cloud to Cloud Migration: VM Redeployment vs. Migration

Posted by Justin Nemmers

10/7/14 12:06 PM

We get the question all the time… “Can CloudBolt move my VMs from my private cloud to Amazon... or from Amazon to Azure?"

The answer is the same. “Sure, but how much time do you have?”

Cloud-based infrastructures are revolutionizing how enterprises design and deploy workloads, enabling customers to better manage costs across a variety of needs. Often-requested capabilities like VM migration (or as VMware likes to call it, vMotion) are taken for granted, and increasingly customers are interested in extending these once on-prem-only features to help them move workloads from one cloud to another.


At face value, this seems like a great idea. Why wouldn’t I want to be able to migrate my existing VMs from my on-prem virtualization directly to a public cloud provider?

For starters, it’ll take a really long time.

VM Migration to the Cloud

Migration is the physical relocation (probably a better term for it) of a VM and its data from one environment to another. Migrating an existing VM to the cloud requires:

  1. Copying every block of storage associated with the VM.
  2. Updating the VM’s network info to work in the new environment.
  3. Lots and lots of time and bandwidth (See #1).

Let’s assume for a minute that you’re only interested in migrating one application from your local VMware infrastructure to Amazon. That application is made up of 5 VMs, each with a 50GiB virtual hard disk. That’s 250 GiB of data that needs to be moved over the wire. (Even if you assume some compression, you will see below how we're still dealing with some large numbers).

At this point, there is only one question that matters: how fast is your network connection?

| Transfer Size (GiB) | Upload Speed (Mb/s) | Upload Speed (MB/s) | Transfer Time (Seconds) | Transfer Time (Hours) | Time Required (Days) |
| ------------------- | ------------------- | ------------------- | ----------------------- | --------------------- | -------------------- |
| 250 | 1.5 | 0.1875 | 10,922,667 | 3,034.07 | 126.42 |
| 250 | 10 | 1.25 | 1,638,400 | 455.11 | 18.96 |
| 250 | 100 | 12.5 | 163,840 | 45.51 | 1.90 |
| 250 | 250 | 31.25 | 65,536 | 18.20 | 0.76 |
| 250 | 500 | 62.5 | 32,768 | 9.10 | 0.38 |
| 250 | 1000 | 125 | 16,384 | 4.55 | 0.19 |
| 250 | 10000 | 1250 | 1,638 | 0.46 | 0.02 |

The result from this chart is clear: the upload speed of your Internet connection is the only thing that matters. And don’t forget that cloud providers frequently charge you for that bandwidth, so your actual cost of transfer will only be limited by how much data you’d like to upload. 

Have more data to migrate? Then you need more bandwidth, more time, or both.
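If you want to plug in your own numbers, the back-of-the-envelope arithmetic is easy to script. The sketch below converts gibibytes to bits and divides by a decimal megabit-per-second uplink; the table above uses a more conservative unit convention, so treat any such figure as an order-of-magnitude planning number rather than a precise forecast:

```python
def transfer_time(size_gib, uplink_mbps):
    """Rough time to push size_gib of data over an uplink of uplink_mbps.
    Ignores protocol overhead, compression, throttling, and contention."""
    bits = size_gib * 1024**3 * 8          # GiB -> bytes -> bits
    seconds = bits / (uplink_mbps * 1e6)   # decimal megabits per second
    return seconds, seconds / 3600, seconds / 86400

for mbps in (1.5, 10, 100, 1000):
    s, h, d = transfer_time(250, mbps)
    print(f"250 GiB @ {mbps:>6} Mb/s: {h:10.1f} hours ({d:6.2f} days)")
```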

If you want to do this for your entire environment, note that you’re effectively performing SAN mirroring. The same rules of physics apply, and while you can load a mirrored rack of storage on a truck and ship it to your DR site, most public cloud providers won’t line up to accept your gear.

The Atomic Unit of IT is the Workload, Not the VM

When customers ask me about migrating VMs, they typically want to run the same workload in a different environment—whether for redundancy, best fit, or something else. If it’s the workload that’s important, why migrate the entire VM?

Componentizing the workload can take work, but automating the application deployment with tools such as Puppet, Chef, or Ansible will make it much easier to deploy that workload into a supported environment.

Redeployment, Not Relocation

If migrating whole stacks of VMs to the cloud isn’t practical, how does an IT organization more effectively redeploy workloads to alternate environments?

Workload redeployment requires a few things:

  1. Mutually required data must be available (e.g. databases);
  2. A configuration management framework available to each desired location, or
  3. Pre-built templates that have all required components pre-installed.

I won’t spend the time here talking through all of these points in detail, but I will say that any of these options requires effort. Whether you’re working to componentize and automate application deployment and management in a CM/automation tool, or re-creating your base OS image and requirements in various cloud providers, you’re going to spend some time getting the pieces in place.

A possible alternative to VM migration is to deploy new workloads in two places simultaneously, and then ensure that needed data and resources are mirrored between the two environments. In other words, double your costs, and incur the same challenges with data syncing. This approach likely only makes sense for the most critical production workloads, not for the standard developer.

Ultimately, Know Thy Requirements

It seems as though the concept of cloud has caused some people to forget physics. Although migrating/relocating existing VMs to a public cloud provider is an interesting concept, the bandwidth required to effectively accomplish this is either very expensive, or simply not available. Furthermore, VM migration to a public cloud assumes that the performance and availability characteristics of the public cloud provider are the same or better than your on-prem environment… which is a pretty big assumption.

While there are some interesting technologies that are helping with this overall migration event, customers still need to do the legwork to properly configure target environments and networks, not to mention determine which workloads can be effectively moved in the first place. Technology alone cannot replace sound judgment and decision making, and the cloud alone will not solve all of your enterprise IT problems.

And don’t forget that IT governance in the public cloud is much more important than it is in your on-prem environment, because your end users are unlikely to generate large cost overruns when deploying locally. If you don’t control their access to the public cloud, you will eventually get a very rude awakening when you get that next bill.

Want Some Help?

So how does CloudBolt actually satisfy this need? We focus on redeployment and governance. One application, as provided by a CM tool, can be deployed to any target environment. CloudBolt then allows you to define multi-tiered application stacks that can be deployed to any capable target environment. Your users and groups are granted the ability to provision specific workloads/applications into the appropriate target environments, and networks. And strong lifecycle management and governance ensures that your next public cloud provider bill won’t break the bank.

Want to try it now? Let us set you up with a no-strings-attached demo environment today.

Schedule a Demo  or try it yourself


Topics: Network, Cloud, Challenges

Three ways to prevent IT complexity from hindering cloud computing

Posted by Justin Nemmers

8/26/14 3:49 PM

Is IT environment complexity standing in the way of your ability to make better use of cloud computing technologies?  You’re not alone.

My daily conversations with prospects frequently have an undertone: “we’ve got a complexity problem,” they’re saying. Often, these IT organizations are not merely looking for software to help bridge this gap, but are looking for ways to help strategically alter the direction of IT at their business. Ideally, they do so in ways that help them reduce complexity, unwinding a bit of the tangle they have created in order to solve problems for which no single-package solution existed at the time.

Image: IT complexity makes implementing cloud more challenging. Successful IT organizations also tend to be ones that implement simpler solutions.

Cloud computing infrastructure technologies themselves are not necessarily simple, but the ways that IT organizations interface with them often are very well understood and defined. IT organizations want to move away from existing methods of end user access, and toward a more seamless, integrated (i.e. cloud-like) look and feel to their IT enterprise. Ironically, the very complexity that organizations want to solve with cloud-backed technologies becomes a relatively large chasm that must be crossed in order to be successful. The only real answer to this problem is a game-changing approach to how solutions are designed, implemented, and procured.

There are three ways IT organizations can help bridge the complexity chasm in their environments:

Reduce risk with simple solutions

IT risk is incurred when a project requires a significant investment of time and/or money, and has a chance of failing to meet the original business need. The more expensive and time consuming a project is, the higher the risk should it ultimately fail. For this reason, reducing the time and cost required to implement a solution can significantly reduce the risk of that solution. Restated, simple solutions that can be rapidly vetted, installed, configured, and put to use by the business reduce risk by saving time. Restated again, don’t be afraid to fail fast. 

Avoid typical enterprise software buying cycles

With the swipe of a credit card, IT consumers compete with their IT organizations by accessing a multitude of resources. Shadow IT is certainly costly, but decision makers should take note not just of the technologies their users are purchasing, but also how they’re purchasing them. Look for products that provide needed capability, but that also allow you to break out of the traditional negotiate-a-huge-contract pricing cycle (only to have to repeat it in a year). These buying cycles are at odds with the ease of access expected with cloud.

Select technologies that ease troubleshooting

Effective troubleshooting is a challenging skill to master, yet complex solutions absolutely require it to be the most developed part of an administrator’s skill set. Why is it, then, that many enterprise technologies pile on the complexity in ways that force organizations to rely even more on their staff’s troubleshooting skills? Selecting tools that are able to short-circuit long workflow dependency chains will help IT teams unwind some of the complexity inherent to solving challenging IT needs. For instance, an orchestration event constructed in a hub-and-spoke model is far easier to diagnose than a branched linear process, as there’s a common point of reference that can indicate exactly what, where, and why a process failed.

In summary, there are frequently many possible solutions to nearly every technical problem, but those that are needlessly complex may solve your initial problem while creating a pile of new ones. Conversely, technical solutions that are simple tend to show value quite quickly, enabling the IT team to field a significant quick-win technology to grumpy end users.

Reducing overall complexity in the IT environment removes barriers to new technology adoption, including cloud, and is a critical success requirement on the journey to becoming a more agile IT enterprise.

Need a cloud manager, but scared of the complexity presented by other solutions? Look no further than CloudBolt. Request a download today, and you'll join our happy customers in saying "CloudBolt's power is in its simplicity."
Schedule a Demo or try it yourself

Topics: IT Challenges, Agility, IT Self Service

Declare your Independence from Standard Cloud Management

Posted by Justin Nemmers

7/2/14 8:50 AM

Introducing the latest release of CloudBolt C2: v4.5

Connector Updates

With C2 v4.5, we’ve added two new connectors that further expand the breadth of technologies IT organizations can manage from a single pane of glass.

Google Compute Engine support gives administrators the ability to seamlessly offer end users controlled access to yet another public cloud provider. This includes the ability to install and manage applications from a supported configuration manager, as well as the ability to include GCE instances in C2 Service Catalog service templates.

Image: C2 v4.5 includes support for Google Compute Engine in the Google Cloud Platform.

We’ve also totally re-written and re-based our OpenStack connector. In this update, we’ve focused on compatibility, and we’re now able to support Icehouse, Havana, and Grizzly from the major OpenStack providers such as Mirantis. Of course, C2 can include OpenStack-backed resources when provisioning applications, running external flows, and accounting for licenses, to name just a few use cases. C2 is already the best dashboard for OpenStack, and it’s getting even better with each release. No Horizon development needed!


We’ve also made some additional updates to our vCenter connector, including improved error handling when your VMware Tools are out of date, and support for longer Windows hostnames. We’ve also made the Windows disk-extension messages clearer and more straightforward.

Amazon Web Services has also received some developer love. C2 now synchronizes both the public and private IP addresses for each AWS EC2 instance.

Configuration Management

We worked closely with the engineering team at Puppet, and now have a unique capability: C2 can now discover and import classes from a Puppet server.

Chef integration is even better: C2 now enables Chef bootstrapping on Windows and Ubuntu Linux systems.

User Interface Updates

Updates to the C2 UI are perhaps more subtle, but focused on helping users and administrators more effectively manage large numbers of applications and servers. We’ve integrated simple indicators describing the total number of selected items in each data table, making it much easier to manage large environments.

Did you know that you can use C2 to open network-less console connections on C2-managed servers? We’ve made this feature faster and more reliable in C2 v4.5.

Upgrading

Upgrading C2 is just like any other feature in C2: fast, easy, and predictable. Upgrading to C2 v4.5 is now even faster and easier than before.

Sounds Great, I Want It!

CloudBolt C2 has been recognized by Gartner for our industry-leading time to value. We effectively eliminate the barrier to entry for enterprise Cloud Management. C2 v4.5 is available today. Request a download, and you'll be up and running in your own environment in no time.

Schedule a Demo or try it yourself


Topics: Upgrade, AWS, Puppet, Chef, OpenStack, GCE

What's New in C2: More Cloud Management, Same Convenience

Posted by Justin Nemmers

5/21/14 10:37 AM

It's been a little while since I've blogged about the cool things that we're doing in C2. Beyond doubling our customer count since 1 March, we are also thrilled to be a 2014 Gartner Cool Vendor in Cloud Management. This announcement has led to even more interest in our amazing unified IT management and self-service IT platform. Follow that up with an upcoming GigaOM Structure appearance, and it’s even clearer that we’re gaining significant traction across the industry.

Image: CloudBolt, 2014 Gartner Cool Vendor in Cloud Management

Throughout all of these exciting developments, our engineering team continues to innovate on C2. They're constantly adding new capabilities and features that we know will help IT organizations change the way they work with and communicate with the business they support.

What's new in CloudBolt C2

Since the release of C2 v4.4.1, we've produced two additional releases, capped by the 19 May release of C2 v4.4.3. We've focused on adding capabilities that empower IT organizations to provide end users with greater levels of controlled access to and management of IT resources and applications, all in a manner that gives the IT organization full control over governance, as well as cost transparency.

By focusing on lifecycle management, we’ve created more valuable touch points in C2 that enable IT organizations to more effectively unify multiple environments, easing the management burden that often increases as complexity grows. Of course, an offshoot of simplifying complex environments is that it frees IT staff to focus on value-added tasks, such as developing new offerings for the business.

Orchestration, Modeling, and Customization

Complex environments typically have workflows that require numerous custom parameters, and actions based on the values of those parameters. C2 can now associate flows with server parameters, such that changing a parameter will result in a flow execution. Additionally, parameters can now have cost values assigned to them. The result is that administrators can expose a parameter—let’s use “Enable Monitoring” as an example—and execute a workflow to actually enable or disable monitoring when the parameter is changed. Also, when the parameter is enabled, C2 will add an additional charge to the showback reporting for that instance or application.
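Conceptually, the pattern is a callback tied to a parameter change plus a per-parameter rate that feeds showback. The sketch below is hypothetical pseudo-structure invented for illustration (the class, method names, and rates are all made up); it is not the C2 plugin API:

```python
# Hypothetical illustration: a parameter change triggers a flow, and an
# enabled parameter adds a charge to showback. Not C2's actual API.
PARAMETER_RATES = {"Enable Monitoring": 5.00}   # monthly charge when enabled

class Server:
    def __init__(self, name):
        self.name = name
        self.parameters = {}
        self.monthly_charges = 0.0

    def set_parameter(self, key, value):
        old = self.parameters.get(key)
        self.parameters[key] = value
        if old != value:
            self.on_parameter_changed(key, value)

    def on_parameter_changed(self, key, value):
        # Fire the associated flow, then adjust the showback charge.
        if key == "Enable Monitoring":
            flow = "enable_monitoring" if value else "disable_monitoring"
            print(f"executing flow '{flow}' for {self.name}")
        rate = PARAMETER_RATES.get(key, 0.0)
        self.monthly_charges += rate if value else -rate

srv = Server("web01")
srv.set_parameter("Enable Monitoring", True)
print(srv.monthly_charges)   # 5.0
```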

Image: Self-service IT order application with Chef integration

We’ve also made the creation and maintenance of parameters cleaner and easier in our intuitive user interface.

Configuration Management

Quickly integrating Configuration Management tools to enable IT organizations to automate application installation and maintenance is a reason many customers deploy C2. Despite our integrations already being leaps and bounds easier to use than other vendors', we've added additional improvements to this core capability, too. We've tightened up both our Puppet Labs and Enterprise Chef connectors, including key enterprise capabilities present in both of those tools. We've added improvements that will aid organizations that have a high rate of VM and application churn, as well as some UI updates that make it easier to manage application lifecycles.

Want to see how easy this is? I challenge you to install an additional application on an existing stack using another tool.  And then try the same thing with CloudBolt C2.

API

Our API v2 was built by developers that interface with other vendors’ APIs on a daily basis, so you can imagine that we know a thing or two about how to build a great API. Since C2 v4.4.1, we’ve added more capabilities to the v2 API, and C2 now ships with several example CLI scripts that will be useful to any developer interested in programmatic access to C2’s extensive capabilities.
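As a flavor of what programmatic access can look like, here is a hedged example using Python's requests library. The hostname, credentials, endpoint path, and response shape are all assumptions made for illustration; the shipped CLI scripts and the built-in API browser document the real contract:

```python
import requests

C2_URL = "https://c2.example.com"      # assumed hostname
AUTH = ("api-user", "api-password")    # basic auth, for illustration only

# List servers visible to this user (endpoint path and response shape assumed).
resp = requests.get(f"{C2_URL}/api/v2/servers/", auth=AUTH, verify=False)  # self-signed lab cert
resp.raise_for_status()
for server in resp.json().get("servers", []):
    print(server.get("hostname"), server.get("status"))
```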

Provisioning and Lifecycle Updates

We’ve been listening to our customers that are tired of managing multiple Windows templates for each required instance disk size. C2 can now auto-extend the primary Windows disk when you select a larger storage size in an order form.

Users can now request additional disk space not just at provisioning, but at any point in the VM’s lifecycle, and that disk can be thin, thick, or eager-zero provisioned.

Challenged by vCenter’s lack of customization support for CentOS? C2 will automatically solve that problem, too—CentOS VMware customizations through C2 will now work properly like they used to on previous versions of vCenter.

Have other ideas that will make your life easier?  We’re always listening.

Unified IT Management. Today.

If you’re looking at Cloud Management Platforms because you have an active project underway, or if you’re just kicking the tires, the time is right to consider CloudBolt C2. We’re constantly working to lower the barrier of entry to enterprise software to levels previously unseen. CloudBolt C2 is cloud, made easy.

Schedule a Demo to get started, or try it yourself. And let us know what you think!


Topics: Feature, Cloud Management, Upgrade

Why C2 is Important When Adopting OpenStack

Posted by Justin Nemmers

5/14/14 9:49 PM

“If I’m moving to OpenStack, why do I need a Cloud Manager like CloudBolt C2?”

As organizations look to extend their footprints beyond the traditional virtualization infrastructure providers (read: VMware), we hear questions like this both more frequently, and with more fervor. It’s a good question. At face value, many people see projects and products like OpenStack, and just assume that they compete directly with CloudBolt C2, but actually, when used together, the two products each provide distinct benefits that are absolutely game changing.


Despite the influx of added code and interest in Horizon, the dashboard still represents a rather significant and complex barrier to full OpenStack adoption in the enterprise. In my conversations with many large organizations that are implementing OpenStack, it’s become apparent that nearly every single one is either writing their own non-Horizon-based front-end interface on top of OpenStack, or purchasing a commercially available front-end (i.e. CloudBolt C2). Those organizations that are developing their own UIs are effectively signing up to maintain that code and project in-house for the life of their OpenStack environment.

Why C2?

We can look deeper into one example: updating a UI option for an instance order form. In Horizon, it requires advanced knowledge of Django and Python, and creates upgrade problems down the road. (Random aside: Want more info on UI and how difficult it is to make a good one? Read more here.) In C2, updating the order process takes a non-developer just a few clicks. Add to that C2’s built-in rates, quotas, ongoing server/application management, and software license management, and the potential value-add to the build vs. buy decision becomes quite real.

Beyond the configurability of the interface itself, there is the question of choice, and existing complexity. Chances are your IT environment contains a significant number of technologies—some of which will integrate well with OpenStack, and others that will not. And then, it apparently does matter which vendor’s OpenStack you decide to purchase, given Red Hat’s ominous announcement at the OpenStack Summit about their impending support policy changes.

Despite this concerning policy shift, OpenStack vendors will continue expanding support for proprietary tools and platforms, but are unlikely to solve the equation for every technology present in typical IT organizations' legacy environments. In the end, OpenStack, from any vendor, will force a choice: roll your own capability, or replace what you've got with something more OpenStack-friendly. Using C2 can ease this transition by managing everything in the environment: OpenStack, legacy systems, public cloud providers, configuration management systems, etc. End users will not know where their servers and applications are actually being deployed. IT again owns the decision of the best underlying environment for the workload.

Given these points, the difficulty of implementation and ongoing support of your existing infrastructure and environments means that the only real scenario when implementing OpenStack is to run two environments in parallel—one is your existing environment making continued use of existing integrations and technologies—and the second is the new OpenStack-based one, which will largely be a re-implementation and re-basing of both technology and process. The IT organization can then begin the task of migrating workloads from the legacy environment to OpenStack.

When run alongside existing IT, new environments absolutely benefit from unified visualization, reporting, quotas, access, and management. This is another reason why C2 is still important in enterprises that are moving to OpenStack. Few organizations that are investing in OpenStack immediately replace their existing technology. Their environments are a mix of legacy and modern, and they need to find ways to effectively manage those stacks. Rapidly growing businesses also frequently need to ingest infrastructure and technology from acquired companies.

OpenStack is gaining significant momentum in IT, and for good reason. IT organizations looking for ways to further commoditize their technology stacks see OpenStack as a great way to build and maintain a standards-based private cloud environment, and they’re largely right. C2 is a critical component in easing the adoption of not just OpenStack, but also other disruptive technologies.

Ready to get started? Schedule a Demo


Topics: News, IT Challenges, OpenStack

The People Side of Cloud Computing

Posted by Justin Nemmers

3/26/14 2:55 PM

 (Originally posted in the In-Q-Tel Quarterly)

The cloud-enabled enterprise fundamentally changes how personnel interact with IT. Users are more effective and efficient when they are granted on-demand access to resources, but these changes also alter the technical skill-sets that IT organizations need to effectively support, maintain, and advance their offerings to end users. Often, these changes are not always immediately obvious. Automation may be the linchpin of cloud computing, but the IT staff’s ability to effectively implement and manage a cloud-enabled enterprise is critical to the IT organization’s success and relevance. Compounding the difficulties, all of the existing legacy IT systems rarely just “go away” overnight, and many workloads, such as large databases, either don’t cleanly map to cloud-provided infrastructure, or would be cost-prohibitive when deployed there. The co-existence of legacy infrastructure, traditional IT operations, and cloud-enabled ecosystems create a complicated dance that seasoned IT leadership and technical implementers alike must learn to effectively navigate.


In the past five or so years, and as enterprise IT organizations have considered adopting cloud technologies, I’ve seen dozens of IT organizations fall into the trap of believing that increased automation will enable them to reduce staff. In my experience, however, staff reductions rarely happen.  IT organizations that approach cloud-enabled IT as a mechanism to reduce staffing are often surprised to find that these changes do not actually reduce complexity in the environment, but instead merely shift complexity from the operations to the applications team. For instance, deploying an existing application to Amazon Web Services (AWS) will not make it highly available.  Instead of IT administrators using on-premises software tools with reliable access—and high speed, low-latency network and storage interconnects—these administrators must now master concepts such as regions, availability zones, and the use of elastic load balancers. Also, applications often need to be modified or completely re-designed to increase fault tolerance levels. The result is that deployments are still relatively complex, but they often require different skillsets than a traditional IT administrator is likely to have.

A dramatic shift in complexity is one of the reasons why retraining is important for existing IT organizations.  Governance is another common focus area that experiences significant capability gains as a result of cloud-enabled infrastructure.  Automation ensures that every provisioned resource successfully completes each and every lifecycle management step 100% of the time.  This revelation will be new to both IT operations and end users. I’ve also frequently seen components of the IT governance mechanism totally break down due to end user revolt—largely because particularly onerous processes could be skipped by the administrators as they manually provisioned resources.

Cloud-based compute resources dramatically change the computing landscape in nearly every organization I’ve dealt with. For example, one IT Director worked to automate his entire provisioning and lifecycle management process, which resulted in freeing up close to three FTEs’ (full-time equivalents) worth of team time. Automating their processes and offering end users on-demand access to resources helped their internal customers, but it also generated substantial time savings for that team. The IT Director also recognized what many miss: the cloud offerings may shift complexity in the stack, but ultimately all of those fancy cloud instances are really just Windows and Linux systems: instances that still require traditional care and feeding from IT. Tasks such as Active Directory administration, patch management, vulnerability assessment, and configuration management don’t go away.

Another common lesson learned that I have witnessed is that with shifting complexity comes dependence on new skills in the availability and monitoring realms. Lacking access to physical hardware, storage, and network infrastructure does not remove them as potential problem areas. As a result, I have seen organizations realize too slowly that applications need to be more tolerant of failures than they were under previous operating models. Making applications more resilient requires different skills that traditional IT teams need to learn and ingrain in order to effectively grow into a cloud-enabled world. Additionally, when developers and quality assurance teams have real-time access to needed resources, they also tend to speed up their releases, placing an increased demand on the workforce components responsible for tasks such as release engineering, release planning, and possibly even marketing.

I’ve encountered few customers that have environments well suited for a complete migration to the public cloud. While a modern-day IT organization needs to prepare for the inevitability of running workloads in the public or community clouds, they must also prepare for the continued offering of private cloud services and legacy infrastructures. Analyst firms such as Gartner suggest that the appropriate path forward for IT orgs is to become a broker/provider of services. The subtext of that statement is that IT teams must remain in full control over who can deploy what, and where. IT organizations must control which apps can be deployed to a cloud, and which clouds are acceptable based on security, cost, capability, etc. Future IT teams should be presenting users with a choice of applications or services based on that user’s role, and the IT team gets to worry about the most appropriate deployment environment. When this future materializes, these are all new skills IT departments will need to master. Today, analyzing cloud deployment choices and recommending the approaches that should be made available are areas that typically fall outside the skillsets of many IT administrators. Unfortunately, these are precisely the skills that are needed, but I’ve witnessed many IT organizations overlook them. 

The Way Ahead

While IT staff can save significant time when the entirety of provisioning and lifecycle management is automated, there are still many needs elsewhere in the IT organization. The successful approaches I’ve seen IT organizations use all involve refocusing staff on value-added tasks. When IT administrators are able to spend time on interesting problems rather than performing near-constant and routine provisioning and maintenance, they are often more involved, fulfilled, and frequently produce innovative solutions that save organizations money. Changing skill sets and requirements will also likely have an effect on existing contracts for organizations with heavily outsourced staffing.

Governance is another important area where changes in the status quo can lead to additional benefits. For example, manually provisioned and managed environments that also have manual centralized governance processes and procedures typically have significant variance in what is actually deployed vs. what the process says should have been deployed: i.e. processes are rarely followed as closely as necessary. No matter how good the management systems, without automation and assignment, problems like Virtual Machine “sprawl” quickly become rampant. I’ve also seen scenarios where end users revolt because they were finally subjected to policies that had been in place for a while, but were routinely skipped by administrators manually provisioning systems. Implementing automation means being prepared to retool some of the more onerous policies as needed, but even with retooled processes, automated provisioning and management provides for a higher assurance level than is possible with manual processes.

Automation in IT environments is nothing new. However, today’s IT organizations can no longer solely rely on the traditional operational way of doing things. Effective leadership of IT staff is critical to the organization’s ability to successfully transition from a traditional provider of in-house IT to an agile broker/provider of resources and services. Understanding that the cloud impacts much more than just technology is a great place to start. This doesn’t mean that organizations currently implementing cloud-enabling solutions need to jam on the brakes; they just need to realize that the cloud is not a magic cure-all for staffing issues. Organizations need to evaluate the potential impact of shifting complexity to other teams, and generally plan for disruption. Just as you would with any large-scale enterprise technology implementation, ensuring that IT staff has the appropriate skills necessary to successfully implement and maintain the desired end state will go a long way to ensuring your success.


 

Justin Nemmers is the Executive Vice President of Marketing at CloudBolt Software, Inc. CloudBolt’s flagship product, CloudBolt C2, is a unified IT management platform that provides self-service IT and automated management/provisioning of on-premises and cloud-based IT resources. Prior to joining the team at CloudBolt, Nemmers held both technical and sales-focused leadership roles at Salsa Labs and Red Hat, where he ran government services. Nemmers resides in Raleigh, NC with his wife and daughter.


Topics: IT Challenges, Cloud, People

API v2, Chef Roles and Orgs, AWS Elastic IPs, and Add VMware Disks

Posted by Justin Nemmers

3/18/14 12:14 PM

We’re pleased to announce the immediate availability of CloudBolt C2 v4.4.1.

This release is jam-packed with new capabilities intended to help IT organizations better manage and access their existing IT resources—not just through the provisioning process, but through the entire lifecycle management process as well.

Even before the C2 v4.4.1 update, customers have some pretty awesome things to say about us:

Upon completing a 2.5-day PoC with a large, complicated customer:

"You accomplished more on day one than VMware did in two weeks."

Other praise:

"If we were to build something, this is exactly what it would look like."
“This is completely plug and play.”
“Wow, that seemed almost too easy.”

And my personal favorite… Upon quickly knocking out a Chef and vCloud Orchestrator use case that vexed every other vendor:

Customer, to co-workers walking by office:
“Dude, you have to see this. It’s [expletive] awesome!”

C2 v4.4.1 builds on an already awesome product.

CloudBolt C2 v4.4.1 features a completely re-designed API. Our new API layer enables the programmatic control of C2 by parties that prefer a command line-based interface, or want to cleanly integrate C2 into an existing scripted process. While this capability isn’t new to CloudBolt, it’s much improved and more deeply functional in C2.

Image: C2's built-in API browser greatly aids development against the new C2 API v2

VMware

You talked and we listened. Customers asked us for better management of new and existing VM virtual disks in VMware. C2 users can now add new disks to VMware-backed VMs. C2 also ingests more information about existing VM disks.

Image: Users can add additional virtual disks to VMware VMs
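For reference, adding a disk to an existing VM through the vSphere API amounts to a reconfigure task with a new VirtualDisk device spec. A condensed pyVmomi sketch of that operation follows; unit-number selection, datastore placement, and error handling are simplified assumptions, and CloudBolt performs this work for you through its UI and job engine:

```python
from pyVmomi import vim

def add_disk(vm, size_gib, thin=True):
    """Attach a new virtual disk of size_gib to an existing VM object
    (obtained from a pyVmomi container view, as in the earlier sketch).
    Simplified: does not handle the reserved SCSI unit 7 or datastore choice."""
    # Find the VM's SCSI controller and the next free unit number.
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))
    used_units = [d.unitNumber for d in vm.config.hardware.device
                  if getattr(d, "controllerKey", None) == controller.key
                  and d.unitNumber is not None]
    unit = max(used_units, default=-1) + 1

    disk = vim.vm.device.VirtualDisk()
    disk.controllerKey = controller.key
    disk.unitNumber = unit
    disk.capacityInKB = size_gib * 1024 * 1024
    # File name left unset: vCenter places the VMDK alongside the VM by default.
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode="persistent", thinProvisioned=thin)

    spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
```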

Customers were also looking for a way to manually set the root password on Linux-based VMware guests.  Despite the VMware API not directly supporting this, we’ve developed a way to allow users to specify a new root password at provisioning time, and C2 will ensure that the provisioned instances will be accessible using that password.

Configuration Managers: Chef and Enterprise Chef

The Chef connector has been enhanced to provide support for the import and management of Chef Roles.  Now users can interface with and select Chef Roles for assignment and deployment to servers both at provisioning time and in an ongoing manner. Along the same lines as roles, C2 v4.4.1 also includes a new Chef Community cookbook importer in the UI. Browse and import community-provided Chef recipes and Cookbooks. 


Running Enterprise Chef? We haven’t forgotten about you. In fact, C2 boasts the industry’s best Chef integration, and we’re expanding that important relationship to include integration with Enterprise Chef features and capabilities. C2 v4.4.1 adds support for Enterprise Chef organizations. We’ve also added support for hosted Chef, and for those organizations using Chef to manage software on EC2-based instances, we now support not only communication with AWS-based Chef servers, but also the deployment of Roles, Cookbooks, and Recipes to EC2-based servers.

Configuration Managers: Puppet

We didn’t forget about our Puppet Labs integration, either. In this latest release, we’ve expanded the details C2 collects about Puppet nodes. The latest Puppet configuration management report status, and a link to the entire report, are now available from the Puppet connector details page.

Amazon Web Services

C2 now has deeper support for EC2 and related components. First, users can now directly manage AWS Elastic IP addresses right from C2—both at provisioning time and on an ongoing basis. In addition to detecting and importing AWS availability zone metadata, C2 now supports assignment of a specific availability zone within a region.

Image: Users can now select and associate AWS Elastic IP addresses from within C2.
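For context, the underlying EC2 calls look roughly like the boto3 snippet below (boto3 is used here as a stand-in for the EC2 API that C2 talks to; the instance ID and region are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate a new Elastic IP and associate it with an existing instance.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId="i-0123456789abcdef0",
                      AllocationId=allocation["AllocationId"])
print("associated", allocation["PublicIp"])
```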

Usability Improvements

Don’t forget that we use C2 to manage our own IT environments. This helps us identify places where C2 could be a little more usable after a few small tweaks. In this release, we’ve made a number of these little tweaks, but I’ll discuss a few of the more important ones here. C2 v4.4.1 now automatically validates IP addresses when they are entered on the order form. We also noticed that the latest Firefox web browser update broke C2’s built-in console access application; C2 v4.4.1 fixes that. We’ve also added the ability to download a job log file directly from the UI—no need to log into the actual C2 instance. Lastly, thanks to a customer that uses C2 to manage 10k+ VMs and hundreds of OS templates, we’ve drastically improved the performance of VMware OS template import.
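Validating an IP address on the order form is the kind of small check that prevents failed jobs later. A tiny illustration of the idea using Python's standard ipaddress module (not C2's actual validation code):

```python
import ipaddress

def is_valid_ip(text):
    """True if text parses as an IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(text.strip())
        return True
    except ValueError:
        return False

print(is_valid_ip("10.4.22.7"))     # True
print(is_valid_ip("10.4.22.300"))   # False
```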

How to get it

If you haven’t yet seen C2 in action, get started here. Ready to kick the tires yourself? Request a download. Already running C2? Log into our support portal to download the CloudBolt C2 upgrade today.


Topics: Feature, VMware, Puppet, Chef