
CloudBolt Blog

CloudBolt Software Announces Revolutionary CloudGoat Product

Posted by Bernard Sanders

4/1/16 6:28 PM

CloudBolt Software announced today the release of a new product in its portfolio - CloudGoat. For five years, CloudBolt's award-winning, cornerstone product has enabled its customers to achieve hybrid cloud management and self-service provisioning, turning their existing datacenters into a cloud provider, providing IaaS and PaaS. The new product is an exciting foray into the new field of GaaS (Goat as a Service).

Bernard Sanders, CTO of CloudBolt Software, emphasized that CloudBolt has been lauded for its ability to manage brownfield (pre-existing) environments to the same level that it can manage greenfield (newly built) environments. This new CloudGoat product offering now allows enterprise IT departments to turn actual green fields into actual brown fields. Just as it has done with its flagship CloudBolt product, the company will be making CloudGoat available for download free of charge (trial licenses cover up to 25 VMs/goats), and expects that this product will also be recognized as the best in its class (though it is unclear at this time what class that is).



Topics: Consumability, Cloud, Services

You are not expected to understand this

Posted by Ephraim Baron

8/12/15 10:30 AM

I love the history of technology.  My favorite place in Silicon Valley is the Computer History Museum.  It’s a living timeline of computing technology, where each of us can find the point when we first joined the party.

It’s great to learn about technology pioneers – the geek elite.  Years ago I took a course on computer operating systems.  We were studying the evolution of UNIX, and we’d gotten to Lions’ Commentary on UNIX 6th Edition, circa 1977.  (As an aside, the entire UNIX operating system at that time was less than 10,000 lines of code.  By 2011 the Linux kernel alone required 15 million lines and 37,000 files.)  As we studied the process scheduler section, we came to one of the great “nerdifacts” of computer programming, line 2238, a comment which reads:

* You are not expected to understand this.


That one line perfectly expresses my joys and frustrations with computing.  The joy comes from the confirmation that computers can do amazingly clever things.  The frustration is from the dismissive way I’m reminded of my inferiority.  And I think that sums up how most people feel about technology.

“Your call is important to us. Please continue to hold.”

In the corporate world, end users have a love-hate relationship with their IT departments.  It’s true that they help us to do our jobs.  But rather than giving us what we need, when we need it, our IT folks seem to always be telling us why our requests cannot be fulfilled.  Throughout my career I’ve been on both sides of this conversation.  Early on, I was the requester/supplicant who’d make my pleas to IT for services or support, only to be told to go away and come back on a day that didn’t end in ‘y’.  


Later, I was the IT administrator, then manager.  In those roles I was the person saying ‘no’ – far more often than I wanted.  It wasn’t because I got perverse pleasure out of disappointing people.  That was just the way my function was structured, measured, and delivered.

Almost without exception, the two metrics that drove my every action in IT operations were cost and uptime.  Responsiveness and customer satisfaction were not within my charter.  Simply put, I got no attaboys for doing things quickly.  While this certainly annoyed my customers, they knew and I knew that they had no alternatives.

The Age of Outsourcing

Things began to change in the late 1980’s and early 1990’s (yeah, I go back a ways) when large companies decided to try throwing money at their IT problems to make them go away.  So began the age of IT outsourcing, when companies tried desperately to disown in-house computer operations.  Such services were “outside of our core competency”, they reasoned, and so were better performed by seasoned professionals from large companies with three-letter names like IBM, EDS, and CSC.


Fast-forward 25 years and we find the IT outsourcing (ITO) market in decline.  There are many reasons for this.  The most common are:

  • Actual savings are often far less than projected
  • Long-term contracts limit flexibility, particularly in a field that changes as constantly as IT
  • There is an inherent asymmetry of goals between service provider and service consumer
  • Considerable effort is required to manage and monitor contracts and SLA compliance
  • New technologies like cloud computing offer viable alternatives

Just as video killed the radio star, cloud computing is a fresher, sexier alternative to ITO for enterprises searching for the all-important “competitive advantage”.

Power to the People!

Cloud computing isn’t just new wine in old bottles; it’s a fundamental change in the way computing resources are made available and consumed.  Cloud computing focuses on user needs (the ‘what’) rather than underlying technology (the ‘how’).

The National Institute of Standards and Technology (NIST) defines five essential characteristics of cloud computing.  One of these is ‘On-demand self-service’.  Think about what that means.  For the end user, it means getting what we need, when we need it.  For business, it means costs that align with usage, for services that make sense.  And for IT, it means being able to say ‘yes’ for a change.

For too long, we have been held captive by technology.  Cloud computing promises to free us from technology middlemen.  It enables us to consume services that we value.

At its core, cloud computing is technology made understandable.

CloudBolt is a cloud management platform that enables self-service IT.  It allows IT organizations to define ready-to-use systems and environments, and to put them in the hands of their users.  Isn’t that a welcome change?

Learn more about self-service IT


Topics: Customer, Cloud, Services, Agility, IT Self Service, Self Service IT

Running a Hybrid Cloud? Are you tracking usage and costs?

Posted by Chris Moore

7/27/15 3:57 PM

At Microsoft Ignite 2015, 72 percent of IT professionals polled said cloud usage and cost tracking are essential for business management. Yet many administrators struggle to track resource usage and costs efficiently. This problem was all too clear at a major networking vendor, whose administrators spent countless hours each month manually tracking resource consumption. Since this method was prone to human error, the vendor deployed CloudBolt to automate their reporting process, improving the accuracy of cloud usage and cost tracking across five hypervisors and public clouds.

In addition to manual cost tracking, some administrators also manually control resource distribution. Due to limited IT resources, administrators at a leading data storage provider were wary of end users spinning up VMs from a self-service IT portal, concerned that they would be unable to control how many VMs end users provisioned. To address this concern, CloudBolt allows administrators to set quotas that prevent end users from exceeding their allotted resources, as well as thresholds that alert them when resources are reaching maximum capacity.
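As a rough illustration of how such guardrails work in general (a minimal sketch with hypothetical names, not CloudBolt's actual API), a quota-and-threshold check on a provisioning request might look like:

```python
# Generic sketch of quota-and-threshold enforcement for VM provisioning.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Quota:
    max_vms: int
    alert_threshold: float  # fraction of quota that triggers an alert, e.g. 0.8

def check_provision(current_vms: int, requested: int, quota: Quota) -> str:
    """Return 'deny', 'alert', or 'ok' for a provisioning request."""
    total = current_vms + requested
    if total > quota.max_vms:
        return "deny"   # hard quota: the request would exceed the allotment
    if total >= quota.alert_threshold * quota.max_vms:
        return "alert"  # soft threshold: provision, but notify administrators
    return "ok"

q = Quota(max_vms=25, alert_threshold=0.8)
print(check_provision(18, 3, q))  # 21 of 25 VMs (>= 80%) -> "alert"
print(check_provision(24, 3, q))  # 27 > 25 -> "deny"
```

The key design point is the two-tier response: the hard quota blocks the request outright, while the soft threshold lets it through but surfaces the approaching capacity limit to administrators.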

Taking an automated approach lightens administrators’ workload, allowing them to be more productive in areas that were previously neglected. So whether a company needs to improve the provisioning, measurement, or control of IT resource consumption, it should consider deploying a self-service portal. Fortune 500 companies, educational institutions, government agencies and even the City of London have recognized the need for automated self-service tools, and they all chose CloudBolt as the solution.

CloudBolt has been recognized for its market leading time to value. With that in mind, simply submit a download request and test it out at any time.


Topics: CMP, Cloud Management, VMware, Cloud, Shadow IT, Hybrid Cloud

Private/Public Cloud to Cloud Migration: VM Redeployment vs. Migration

Posted by Justin Nemmers

10/7/14 12:06 PM

We get the question all the time… “Can CloudBolt move my VMs from my private cloud to Amazon... or from Amazon to Azure?"

The answer is the same. “Sure, but how much time do you have?”

Cloud-based infrastructures are revolutionizing how enterprises design and deploy workloads, enabling customers to better manage costs across a variety of needs. Often-requested capabilities like VM migration (or as VMware likes to call it, vMotion) are taken for granted, and increasingly customers are interested in extending these once on-prem-only features to help them move workloads from one cloud to another.


At face value, this seems like a great idea. Why wouldn’t I want to be able to migrate my existing VMs from my on-prem virtualization environment directly to a public cloud provider?

For starters, it’ll take a really long time.

VM Migration to the Cloud

Migration is the physical relocation (probably a better term for it) of a VM and its data from one environment to another. Migrating an existing VM to the cloud requires:

  1. Copying every block of storage associated with the VM.
  2. Updating the VM’s network info to work in the new environment.
  3. Lots and lots of time and bandwidth (see #1).

Let’s assume for a minute that you’re only interested in migrating one application from your local VMware infrastructure to Amazon. That application is made up of 5 VMs, each with a 50GiB virtual hard disk. That’s 250 GiB of data that needs to be moved over the wire. (Even if you assume some compression, you will see below how we're still dealing with some large numbers).

At this point, there is only one question that matters: how fast is your network connection?

Transfer Size (GiB) | Upload Speed (Mb/s) | Upload Speed (MB/s) | Transfer Time (Seconds) | Transfer Time (Hours) | Transfer Time (Days)
250                 | 1.5                 | 0.1875              | 1,365,333               | 379.26                | 15.80
250                 | 10                  | 1.25                | 204,800                 | 56.89                 | 2.37
250                 | 100                 | 12.5                | 20,480                  | 5.69                  | 0.24
250                 | 250                 | 31.25               | 8,192                   | 2.28                  | 0.09
250                 | 500                 | 62.5                | 4,096                   | 1.14                  | 0.05
250                 | 1000                | 125                 | 2,048                   | 0.57                  | 0.02
250                 | 10000               | 1250                | 204.8                   | 0.06                  | 0.00

The result from this chart is clear: the upload speed of your Internet connection is the only thing that matters. And don’t forget that cloud providers frequently charge you for that bandwidth, so your actual cost of transfer will only be limited by how much data you’d like to upload. 

Have more data to migrate? Then you need more bandwidth, more time, or both.
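The arithmetic behind these numbers is simple enough to script. A minimal sketch, assuming binary prefixes (1 GiB = 1024 MiB), 8 bits per byte, and ignoring protocol overhead and compression:

```python
# Back-of-the-envelope calculator for cloud migration transfer times.
# Assumes binary prefixes (1 GiB = 1024 MiB) and 8 bits per byte;
# ignores protocol overhead, compression, and link contention.

def transfer_time_seconds(size_gib: float, upload_mbps: float) -> float:
    """Seconds to move size_gib gibibytes over an upload_mbps link."""
    total_megabits = size_gib * 1024 * 8  # GiB -> MiB -> megabits
    return total_megabits / upload_mbps

# 250 GiB (5 VMs x 50 GiB) at a range of upload speeds:
for mbps in (1.5, 10, 100, 250, 500, 1000, 10000):
    secs = transfer_time_seconds(250, mbps)
    print(f"{mbps:>7} Mb/s -> {secs / 3600:9.2f} hours ({secs / 86400:6.2f} days)")
```

Even before bandwidth charges, this makes the post's point concrete: at T1 speeds the 250 GiB application takes weeks to move, and only at data-center-grade uplinks does migration drop to minutes.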

If you want to do this for your entire environment, note that you’re effectively performing SAN mirroring. The same rules of physics apply, and while you can load a mirrored rack of storage on a truck and ship it to your DR site, most public cloud providers won’t line up to accept your gear.

The Atomic Unit of IT is the Workload, Not the VM

When customers ask me about migrating VMs, they typically want to run the same workload in a different environment—for redundancy, best fit, etc. If it’s the workload that’s important, why migrate the entire VM?

Componentizing the workload can take work, but automating the application deployment with tools such as Puppet, Chef, or Ansible will make it much easier to deploy that workload into a supported environment.

Redeployment, Not Relocation

If migrating whole stacks of VMs to the cloud isn’t practical, how does an IT organization more effectively redeploy workloads to alternate environments?

Workload redeployment requires a few things:

  1. Mutually required data must be available (e.g. databases);
  2. A configuration management framework available in each desired location; or
  3. Pre-built templates that have all required components pre-installed.

I won’t spend the time here talking through all of these points in detail, but I will say that any of these options requires effort. Whether you’re working to componentize and automate application deployment and management in a CM/automation tool, or re-creating your base OS image and requirements in various cloud providers, you’re going to spend some time getting the pieces in place.

A possible alternative to VM migration is to deploy new workloads in two places simultaneously, and then ensure that needed data and resources are mirrored between the two environments.  In other words, double your costs and incur the same challenges with data syncing. This approach likely only makes sense for the most critical production workloads, not standard developer workloads.

Ultimately, Know Thy Requirements

It seems as though the concept of cloud has caused some people to forget physics. Although migrating/relocating existing VMs to a public cloud provider is an interesting concept, the bandwidth required to accomplish this effectively is either very expensive or simply not available. Furthermore, VM migration to a public cloud assumes that the performance and availability characteristics of the public cloud provider are the same as or better than those of your on-prem environment… which is a pretty big assumption.

While there are some interesting technologies that are helping with this overall migration event, customers still need to do the legwork to properly configure target environments and networks, not to mention determine which workloads can be effectively moved in the first place. Technology alone cannot replace sound judgment and decision making, and the cloud alone will not solve all of your enterprise IT problems.

And don’t forget that IT governance in the public cloud is much more important than it is in your on-prem environment, because your end users are unlikely to generate large cost overruns when deploying locally. If you don’t control their access to the public cloud, you will eventually get a very rude awakening when you get that next bill.

Want Some Help?

So how does CloudBolt actually satisfy this need? We focus on redeployment and governance. One application, as provided by a CM tool, can be deployed to any target environment. CloudBolt then allows you to define multi-tiered application stacks that can be deployed to any capable target environment. Your users and groups are granted the ability to provision specific workloads/applications into the appropriate target environments, and networks. And strong lifecycle management and governance ensures that your next public cloud provider bill won’t break the bank.

Want to try it now? Let us set you up a no-strings-attached demo environment today.

Schedule a Demo   or try it yourself


Topics: Network, Cloud, Challenges

The People Side of Cloud Computing

Posted by Justin Nemmers

3/26/14 2:55 PM

 (Originally posted in the In-Q-Tel Quarterly)

The cloud-enabled enterprise fundamentally changes how personnel interact with IT. Users are more effective and efficient when they are granted on-demand access to resources, but these changes also alter the technical skill-sets that IT organizations need to effectively support, maintain, and advance their offerings to end users. These changes are not always immediately obvious. Automation may be the linchpin of cloud computing, but the IT staff’s ability to effectively implement and manage a cloud-enabled enterprise is critical to the IT organization’s success and relevance. Compounding the difficulties, existing legacy IT systems rarely just “go away” overnight, and many workloads, such as large databases, either don’t cleanly map to cloud-provided infrastructure or would be cost-prohibitive when deployed there. The co-existence of legacy infrastructure, traditional IT operations, and cloud-enabled ecosystems creates a complicated dance that seasoned IT leadership and technical implementers alike must learn to effectively navigate.


In the past five or so years, as enterprise IT organizations have considered adopting cloud technologies, I’ve seen dozens of them fall into the trap of believing that increased automation will enable them to reduce staff. In my experience, however, staff reductions rarely happen.  IT organizations that approach cloud-enabled IT as a mechanism to reduce staffing are often surprised to find that these changes do not actually reduce complexity in the environment, but instead merely shift complexity from the operations team to the applications team. For instance, deploying an existing application to Amazon Web Services (AWS) will not make it highly available.  Instead of IT administrators using on-premises software tools with reliable access—and high-speed, low-latency network and storage interconnects—these administrators must now master concepts such as regions, availability zones, and the use of elastic load balancers. Also, applications often need to be modified or completely re-designed to increase fault tolerance. The result is that deployments are still relatively complex, but they often require different skillsets than a traditional IT administrator is likely to have.

A dramatic shift in complexity is one of the reasons why retraining is important for existing IT organizations.  Governance is another common area that sees significant capability gains from cloud-enabled infrastructure.  Automation ensures that every provisioned resource successfully completes each and every lifecycle management step, 100% of the time.  That level of assurance will be new to both IT operations and end users. I’ve also frequently seen components of the IT governance mechanism break down entirely due to end-user revolt—largely because particularly onerous processes could be skipped by administrators when they manually provisioned resources.

Cloud-based compute resources have dramatically changed the computing landscape in nearly every organization I’ve dealt with. For example, one IT director worked to automate his entire provisioning and lifecycle management process, which freed up close to three FTEs’ (Full-Time Equivalents) worth of team time.  Automating their processes and offering end users on-demand access to resources helped their internal customers, but it also generated substantial time savings for that team. The IT director also recognized what many miss: cloud offerings may shift complexity in the stack, but ultimately all of those fancy cloud instances are really just Windows and Linux systems; instances that still require traditional care and feeding from IT. Tasks such as Active Directory administration, patch management, vulnerability assessment, and configuration management don’t go away.

Another common lesson I have witnessed is that shifting complexity brings dependence on new skills in the availability and monitoring realms. Lacking access to physical hardware, storage, and network infrastructure does not remove them as potential problem areas. As a result, I have seen organizations realize too slowly that applications need to be more tolerant of failures than they were under previous operating models.  Making applications more resilient requires different skills that traditional IT teams need to learn and ingrain in order to grow into a cloud-enabled world. Additionally, when developers and quality assurance teams have real-time access to needed resources, they also tend to speed up their releases, placing increased demand on the parts of the workforce responsible for tasks such as release engineering, release planning, and possibly even marketing.

I’ve encountered few customers that have environments well suited for a complete migration to the public cloud. While a modern-day IT organization needs to prepare for the inevitability of running workloads in the public or community clouds, they must also prepare for the continued offering of private cloud services and legacy infrastructures. Analyst firms such as Gartner suggest that the appropriate path forward for IT orgs is to become a broker/provider of services. The subtext of that statement is that IT teams must remain in full control over who can deploy what, and where. IT organizations must control which apps can be deployed to a cloud, and which clouds are acceptable based on security, cost, capability, etc. Future IT teams should be presenting users with a choice of applications or services based on that user’s role, and the IT team gets to worry about the most appropriate deployment environment. When this future materializes, these are all new skills IT departments will need to master. Today, analyzing cloud deployment choices and recommending the approaches that should be made available are areas that typically fall outside the skillsets of many IT administrators. Unfortunately, these are precisely the skills that are needed, but I’ve witnessed many IT organizations overlook them. 

The Way Ahead

While IT staff can save significant time when the entirety of provisioning and lifecycle management is automated, there are still many needs elsewhere in the IT organization.  The successful approaches I’ve seen IT organizations use all involve refocusing staff on value-added tasks. When IT administrators are able to spend time on interesting problems rather than performing near-constant, routine provisioning and maintenance, they are more involved and fulfilled, and frequently produce innovative solutions that save organizations money. Changing skillsets and requirements will also likely affect existing contracts for organizations with heavily outsourced staffing.

Governance is another important area where changes in the status quo can lead to additional benefits. For example, manually provisioned and managed environments that also have manual centralized governance processes and procedures typically have significant variance in what is actually deployed vs. what the process says should have been deployed: i.e. processes are rarely followed as closely as necessary. No matter how good the management systems, without automation and assignment, problems like Virtual Machine “sprawl” quickly become rampant. I’ve also seen scenarios where end users revolt because they were finally subjected to policies that had been in place for a while, but were routinely skipped by administrators manually provisioning systems. Implementing automation means being prepared to retool some of the more onerous policies as needed, but even with retooled processes, automated provisioning and management provides for a higher assurance level than is possible with manual processes.

Automation in IT environments is nothing new. However, today’s IT organizations can no longer rely solely on the traditional operational way of doing things. Effective leadership of IT staff is critical to the organization’s ability to successfully transition from a traditional provider of in-house IT to an agile broker/provider of resources and services.  Understanding that the cloud impacts much more than just technology is a great place to start.  This doesn’t mean that organizations currently implementing cloud-enabling solutions need to jam on the brakes; just realize that the cloud is not a magic cure-all for staffing issues. Organizations need to evaluate the potential impact of shifting complexity to other teams, and generally plan for disruption. Just as you would with any large-scale enterprise technology implementation, ensuring that IT staff has the appropriate skills to implement and maintain the desired end state will go a long way toward ensuring your success.


 

Justin Nemmers is the Executive Vice President of Marketing at CloudBolt Software, Inc. CloudBolt’s flagship product, CloudBolt C2, is a unified IT management platform that provides self-service IT and automated management/provisioning of on-premises and cloud-based IT resources.  Prior to joining the team at CloudBolt, Nemmers held technical and sales-focused leadership roles at Salsa Labs and Red Hat, where he ran government services. Nemmers resides in Raleigh, NC with his wife and daughter.


Topics: IT Challenges, Cloud, People

7 Takeaways From the Red Hat Summit

Posted by Justin Nemmers

6/19/13 8:27 AM

Part of the CloudBolt team at Red Hat Summit 2013: John Menkart, Justin Nemmers, Colin Thorp, and Jesse Newell.  Sales Director Milan Hemrajani took the picture.

A few sales folks and I have returned from a successful Red Hat Summit in Boston, MA. With over 4,000 attendees, we were able to leverage an excellent booth position to talk to many hundreds of people. One of the things that I love about my role here at CloudBolt is that I am constantly learning. I particularly enjoy speaking with customers about the types of problems they run across in their IT environments, and I take every chance I can to learn more about what their IT challenges are. Some of these are common themes that we hear a lot here at CloudBolt, and a few were a bit surprising, as some organizations are still at earlier stages of their modernization efforts than I would have expected.

  1. Not everyone has heavily virtualized his or her enterprise.
    Sure, there are some environments where virtualization doesn’t make a lot of sense—such as parallelized but tightly CPU-bound workloads, or HPC environments. But what surprised me was the number of organizations I spoke with that made little or very limited use of virtualization in the data center. It’s not that they didn’t see the value of it; more often than not, they still made use of Solaris on SPARC, or had old-school management that had not yet bought into the idea that running production workloads on virtualized servers has long been accepted common practice. For these folks and others, I’d like to introduce a topic I call “Cloud by Consolidation” (in a later blog post).
     
  2. Best-of-Breed is back.
    Organizations are tired of being forced to use a particular technology just because it came with another product, or because it comes from a preferred vendor. For example, an IT organization is pressed to use a sub-optimal technology because it came with another suite of products. Forcing an ill-fitting product on a problem often results in longer implementation times, which consume more team resources than simply implementing the right technology for the problem at hand. Your mechanic will confirm that the right tool makes any job easier. It’s no different with enterprise software.
     
  3. Customers are demanding reduced vendor lock-in.
    IT organizations have a broad range of technologies in their data centers. They need a cloud manager that can effectively manage not just what they have installed today, but what they want to install tomorrow. For example, a customer might have VMware vCenter today, but is actively looking at moving more capacity to AWS. Alternatively, they have one data center automation tool, and are looking to move to another (see my next point below, #4). Another scenario is not having to wait for a disruptive technology to be better supported before getting to implement and test it in your own environment—while being managed with existing technology. Good examples:
    • The gap between CloudForms (formerly ManageIQ) and its ability to manage OpenStack implementations
    • Nicira Software Defined Networking and the ability to manage it with vCloud Automation Center (vCAC, formerly DynamicOps)
    Either way, customers are tired of waiting as a result of vendor lock-in.
     
  4. Customers are increasingly implementing multiple Data Center Automation (DCA) tools. 
    This is a bit interesting in the sense that it used to be that an IT organization would purchase a single DCA technology and implement it enterprise-wide. I was surprised to hear the number of customers that were actively looking at a multiple-DCA strategy in their environments. Our booth visitors reported that they primarily used HP Server Automation, and to a lesser extent BMC’s BladeLogic. Puppet and Chef were popular tools that organizations are implementing in growth or new environments—like new public cloud environments. Either way, these customers see definitive value in using CloudBolt C2 to present DCA-specific capabilities to end users, significantly increasing the power of user self-service IT while at the same time decreasing complexity in the environment.
     
  5. Lots of people are talking about OpenStack. Few are using it.
    For every 10 customers that said they were looking at OpenStack, 10 said they were not yet using it. There’s certainly been an impressive level of buzz around OpenStack, but we haven’t seen a significant number of customers that have actually installed it and are attempting to use it in their environments. I think that Red Hat’s formal entry into this space will help, because they have a proven track record of taming seemingly untamable mixes of rapidly-changing open source projects into something that’s supportable in the enterprise. This does not, however, mean that customers will be making wholesale moves from their existing (and largely VMware-based) virtualization platforms to OpenStack. Furthermore, there is still significant market confusion in regard to what Red Hat is selling. Is it RHEV? Is it OpenStack? Do I need both? These are all questions I heard more than once from customers in Boston.
     
  6. Open and Open Source aren’t the same thing.
    I spent too many years at Red Hat not to know that this is the case, but I feel it’s extremely important to mention it here. Many customers told us that they wanted open technologies—but in these cases, open meant tools and technologies that were flexible enough to interoperate with a lot of other technologies and reduce overall vendor lock-in. Sure, an Open Source development model could be a plus, but customers were most interested in their tech working, working well, and working quickly.
     
  7. Most IT Orgs want Chargeback, but few businesses are willing to accept it.
    Thus far, the only groups I’ve chatted with who actually use some chargeback mechanism are service providers with external customers. Pretty much every other IT organization seems to face significant pressure against chargeback from the businesses they support. Showback pricing helps counter this resistance, and over time should help more IT organizations win the battle over chargeback. IT organizations should be leaping at the chance to collect and report per-group or per-project cost data. It’s a critical piece of information that businesses need to make effective decisions. Business-driven IT has been a necessary step in the evolution of IT for a long, long time. IT needs to get with the program and make visible to the business the types of information the business needs to make effective decisions. And on the flip side, the business needs to get with the program and accept that its teams and projects will be held responsible for their IT consumption.
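At its core, showback is just aggregation and reporting: multiply each group's consumption by a unit rate and total it up. A minimal sketch, with hypothetical data shapes and illustrative rates rather than any particular product's API:

```python
# Minimal showback report: aggregate per-group resource costs so the business
# can see consumption, without actually billing it back.
# Data shapes and unit rates below are illustrative assumptions.

from collections import defaultdict

RATES = {"cpu_hours": 0.05, "gb_storage": 0.10}  # illustrative $ per unit

usage = [
    {"group": "engineering", "cpu_hours": 1200, "gb_storage": 500},
    {"group": "marketing",   "cpu_hours": 150,  "gb_storage": 80},
    {"group": "engineering", "cpu_hours": 300,  "gb_storage": 100},
]

def showback(records):
    """Sum metered usage per group, priced at the configured unit rates."""
    totals = defaultdict(float)
    for rec in records:
        for metric, rate in RATES.items():
            totals[rec["group"]] += rec[metric] * rate
    return dict(totals)

print(showback(usage))  # engineering: 135.0, marketing: 15.5
```

The same totals can drive a showback dashboard today and become chargeback line items later, which is why collecting the data early is worthwhile even before the business accepts being billed.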

So how do you get started recognizing the value of integrating IT with the business? Start here.

We’re looking forward to exhibiting at the next Red Hat Summit, which is slated to be held in San Francisco’s Moscone North and South exhibition center. And if you thought we made a big splash at this year’s summit…  Just wait to see what we have in the works!


Topics: Virtualization, Cloud, Enterprise, Red Hat, Challenges, Vendors

CloudBolt C2 is the Cloud Manager for the Dell Cloud for Government

Posted by Justin Nemmers

6/11/13 10:57 AM

I’m thrilled to announce that CloudBolt C2 is the Cloud Manager Dell is using in their recently announced Dell Cloud for US Government. In this solution, CloudBolt C2 provides the automated workflows, provisioning, rapid scalability, and metered pricing customers need in order to become their own cloud provider.

Dell Cloud for US Government uses CloudBolt C2

The Dell solution enables organizations to take advantage of the cloud delivery model to provide a range of on-demand resources to end users in a predictable and reliable manner, all while using infrastructure that meets various US Government security criteria including:

  • NIST 800-53
  • FedRAMP
  • FISMA Low and Moderate
  • DIACAP
  • NIACAP
  • HIPAA 

This solution is being offered two ways:

  • Dedicated solution either hosted or installed in a customer environment
  • Hosted multi-tenant on-demand cloud

Dell can deliver either solution in a manner that meets the broad range of security criteria government customers face.

This solution required a powerful Cloud Manager that could not just offer an intuitive, easy-to-use interface, but also one that could just as easily support multi-tenant environments as it could single tenant ones. C2’s Section 508 compliance, robust orchestration layer and the ability to plug into nearly any required technology made it the natural and secure fit for the Dell Cloud for US Government solution.

Dell offers this solution with a flexible acquisition model: either option can be purchased with enough capacity for as few as 100 VMs, all the way up to 100,000 or more. This Dell Cloud for US Government solution can be Dell-hosted, installed in a customer environment, or offered as a hybrid model. No matter how you choose to consume it, the capabilities and certifications are the same. Dell has rolled in over 270 security controls to help customers attain and track any ATOs needed to run in their environments.

Dell’s FedRAMP Cloud builds on the capabilities of the NIST Dedicated Cloud solution and adds the required security controls to achieve FedRAMP certification. This Dell-hosted multi-tenant environment allows public cloud-like metered on-demand access to secure computing resources. Because this solution comes with FedRAMP certification, no additional ATOs are needed for those agencies able to run FedRAMP-approved solutions.

Dell Federal Services CTO Jeff Lush has a series of YouTube videos where he highlights the capabilities of this solution.

Customers will use CloudBolt C2 to request on-demand Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) resources, which will automatically be provisioned, tracked, and managed on an ongoing basis. Organizations that deploy the dedicated solution will get access to the full suite of C2 capabilities, including multi-cloud management, which will enable those customers to manage other virtualization or cloud environments as well.

CloudBolt C2’s power and flexibility were key reasons why Dell chose C2 for this solution. Interested in learning more? Give us a ring at 703.665.1060.

(FedRAMP stands for Federal Risk and Authorization Management Program. See more info about that here.)

(Dell is a registered trademark of Dell, Inc.)

 

 


Topics: News, Cloud, Private Cloud, Government, Vendors