Enabling “the cloud” and the Cloud Data Management Interface (CDMI) standard

Altitude Technology Group prides itself on providing “a higher perspective” on technology trends. On the macro level, that means observing technology trends by decade rather than by month. Looking at the market over the past 10 years, it’s obvious that we are completing a massive consolidation phase. More to the point, over the past five years maturing burst-innovation technologies have generated new market momentum, and the big fish have circled and swallowed most of the front-runners in an attempt to grow slowing revenues, penetrate new markets, and capture a full-stack “soup to nuts” solution for end-users.

The bad news is that when this happens, the leading products’ performance, support, and development generally all crawl to a stop while the corporations and suits figure out whom to fire, what to keep, and what to throw out. At best, a few years later you end up with a duct-taped, poorly integrated implementation that has lost all of its unique features and value.

Our current market, however, is even more out of whack. The burst “market opportunities” of what was perceived as “VDI” and “cloud” computing didn’t actually exist yet, but vendors rushed out to create companies servicing them, neglecting the fact that the end-user market hadn’t digested the technologies, hadn’t matured enough to adopt them, and in most cases hadn’t even defined the terms.

Over the past weeks we’ve been deep-diving on a number of soon-to-be-released cloud storage, computing, and virtualization products.

Who’s running “VDI” in the “CLOUD”?
Who’s leveraging a “fully dynamic, resource-responsible, virtualized, on-demand computing environment”? Nobody. And at the heart of it, what’s the huge difference between VDI and more traditional methods of server-based computing? Not all that much. Maybe 5% of the large enterprises in the world are pushing the cutting edge in these areas. Why? Because for most of the world, those are marketing terms, in most cases pushing products that should be in beta or a first-version iteration at best. But they’re effective marketing engines. They’re here before their time, and they’re imploring technologists everywhere to get with it or fall into uselessness and unemployment.

While VDI solutions and cloud services certainly exist, the great divide between adoption and rejection still resides in the nitty-gritty. Can an enterprise currently reliant on traditional computing seamlessly leverage these next-generation technologies? Afraid not. In fact, to crowbar, slap on, and crazy-glue a solution together today would be near insanity for most production shops; hence the perpetual science projects and lack of true traction for these obviously powerful infrastructure tools.

2010 – “Enter the Enablers”

The truth is there is a market opportunity, because these technologies do hold genuine merit. The problem is the gap between the infrastructure of today and the ability to leverage the infrastructure of tomorrow. Over the past 18 months, a flurry of new start-ups and redefined software, appliance, storage, and hardware products have all targeted the enablement of virtualization and cloud technologies.

This is a good thing for end-users. The science project is being completed by these new entrants, providing real avenues to begin leveraging these next-generation technologies.

Over the next decade this will be delivered in two formats:

“The Cloud”

– The Swiss Army knife:
A slew of product solutions will hit the market over the coming months providing heterogeneous support for cloud service provider products. Get your data to these devices and they will take care of the rest: full integration with SaaS and cloud providers is baked in through their APIs, leaving the end-user to configure once, deposit data, VMs, and applications, and forget. Or so they say. We’re tracking no fewer than six impressive vendor technologies in this space, and we’ll introduce each as they hit the market and begin to prove out their value propositions. (e.g., today Google assigns a team of developers to any enterprise Google Apps customer. This isn’t just to migrate customers off of existing “trusted” traditional computing; it’s also to provide added support and credibility to the cloud offering.)

– The standardized approach
With full acceptance that end-users must be led to water, standards bodies are building protocol definitions to enable leveraging the cloud and cloud providers. We’re paying close attention to the Cloud Data Management Interface (CDMI) standard from SNIA and similar protocol standardization efforts. Vendors that support these established protocols will have an early leg up on cloud infrastructure delivery. (e.g., within 12–18 months Google and Amazon will simply have to provide support for emerging standards; the rest will be plug-and-play vendor product support. This will provide a solid foundation for more intense and efficient migration to cloud technologies where applicable.) Furthermore, this will lead to the cloud being leveraged independently of hypervisor-enabled or hypervisor-supported environments. Any IT function could technically interface with these new standards to seamlessly leverage an external and/or distributed flexible resource (the cloud).

From SNIA.org

“The Cloud Data Management Interface defines the functional interface that applications will use to create, retrieve, update and delete data elements from the Cloud. As part of this interface the client will be able to discover the capabilities of the cloud storage offering and use this interface to manage containers and the data that is placed in them. In addition, metadata can be set on containers and their contained data elements through this interface.

This interface is also used by administrative and management applications to manage containers, accounts, security access and monitoring/billing information, even for storage that is accessible by other protocols. The capabilities of the underlying storage and data services are exposed so that clients can understand the offering.”
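
To make that concrete, here is a minimal sketch of what a CDMI-style exchange looks like over plain HTTP, written in Python with the requests library. The endpoint URL, container name, and metadata are hypothetical placeholders, and a real deployment would layer on authentication; treat this as an illustration of the discover/create/retrieve pattern the standard describes, not production code.

```python
# Minimal CDMI-style client sketch (Python + requests).
# The endpoint, container name, and metadata are hypothetical placeholders.
import requests

BASE = "https://cloud.example.com"  # hypothetical CDMI endpoint
HEADERS = {"X-CDMI-Specification-Version": "1.0"}

# 1. Discover the capabilities of the cloud storage offering.
caps = requests.get(
    f"{BASE}/cdmi_capabilities/",
    headers={**HEADERS, "Accept": "application/cdmi-capability"},
)
print(caps.json())

# 2. Create a container, attaching metadata, to hold data elements.
requests.put(
    f"{BASE}/backups/",
    headers={**HEADERS,
             "Content-Type": "application/cdmi-container",
             "Accept": "application/cdmi-container"},
    json={"metadata": {"department": "engineering"}},
)

# 3. Deposit a data object into the container.
requests.put(
    f"{BASE}/backups/note.txt",
    headers={**HEADERS,
             "Content-Type": "application/cdmi-object",
             "Accept": "application/cdmi-object"},
    json={"mimetype": "text/plain", "value": "hello, cloud"},
)

# 4. Retrieve the data element back.
obj = requests.get(
    f"{BASE}/backups/note.txt",
    headers={**HEADERS, "Accept": "application/cdmi-object"},
)
print(obj.json()["value"])
```

The appeal is obvious: any client that speaks HTTP can manage containers and data elements the same way against any compliant provider, which is exactly the plug-and-play leverage described above.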

“VDI”

– The Swiss Army knife:
A group of “end-point management” technologies will release heterogeneous support for flexible end-point computing delivery. Most “VDI” opportunities today, when handled correctly, become a mixture of heterogeneous vendor technologies; building successful, true VDI implementations is still somewhat of an art, requiring point solutions working in conjunction. Most enterprises find that 50-70% of what “VDI” means to them is end-point management, user management, application management, and data management: abstracting the components that combine to deliver end-point computing (profile, OS, application) into independently manageable parts. Near-term solutions will wrap all of that up in a bow and provide a unified interface to manage the multiple components delivering the solution.

– The standardized approach
In a few years, once the components of delivering end-point IT productivity are abstracted, they can truly be delivered independently through standard delivery mechanisms. Thin, thick, laptop, and mobile end-points will be centrally managed AND provisioned, leveraging the correct delivery stack where appropriate. The key requirements are unified management, unified provisioning, and efficiencies of scale. Newly ratified protocols fit to deliver this new content over existing networks will quickly gain market and product support, extending this delivery flexibility to the entire enterprise.

Benefits:
Here’s the good news. This “enablement” phase in the market will serve as a mini-innovation phase, finally delivering on multiple promises of technology value to the market in tangible, production-ready implementations.

Short Term:
End-users will leverage these technologies where applicable. Net-new buildouts that fit the requirements can and should be based on these soon-to-be-standard delivery models. Overarching management suites will be leveraged to enable consolidated management of heterogeneous environments. Server-based computing delivery will increase as end-point productivity is defined and supported through traditional (SBC) and next-generation delivery (VDI, application virtualization).

Long Term:
Early adopters will enjoy mainstream support for their environments, while the hesitant and risk-averse will gain built-in support for these environments as traditional products mature to support these new standards and associated delivery protocols (unified end-point flexibility).


“VirtualStorm” aims to solve the density, scaling, and management issues of large-scale VDI implementations.

New agent software combined with Symantec Workspace Virtualization reduces VDI memory and disk footprints, addresses I/O bottlenecks, and centralizes management.

Scalability, cost, and management are the three major roadblocks to mainstream adoption of VDI in the enterprise. Everyone is sold on the idea, sold on the paradigm shift, and sold on the concept, but when it comes to medium-to-large-scale production deployments (around 1,000 clients), you can count the functional installations worldwide on two hands.

We recently had the opportunity to deep dive into a new VDI enabler named VirtualStorm that is aiming to change that, here and now.

The underlying technical challenges of VDI are the same technical challenges of most significant virtualized environments. Aggregate I/O limits the maximum densities we can achieve as we scale out while still providing responsible, predictable performance to the virtual hosts. In VDI, the problem is complicated further by the massive amounts of common OS and unique user data (storage requirements), by management challenges, and, probably most daunting, by a paradigm shift that in most cases involves a completely separate environment, separate design, and separate (usually multiple) management interfaces. In most cases this duality means your historic traditional-computing users can’t leverage ANY of the benefits of next-generation virtualization, and it becomes an all-or-nothing proposition.

Compound those technical issues with a frantic and noisy VDI space filled with vendors clamoring for your attention, each claiming to have the “complete” VDI “stack solution” for the future, and consumers just ain’t buying! And rightly so; at times like this it’s usually better to duck and cover, emerging with budget dollars after the smoke clears and at least a short-list of vendors remains with duct-taped, but functional, implementations.

Superior Data Solutions (a young company known for finding and evangelizing solutions before they are “mainstream”) just completed a deep-dive demonstration with us of their newest product. They’ve partnered their I/O expertise with a company in the Netherlands (DinamiQs) that built an agent to simplify and solve the issues of large-scale VDI. Together they’ve developed an extremely smart VDI package that hits all the key roadblocks and delivers what might just be the silver bullet for large-scale VDI adoption challenges. Dubbed “VirtualStorm,” this VDI software agent claims to deliver on scalability, drastic reductions in storage and servers, and central management of ALL your end-points, not just your shiny new VDI users. To achieve this they had to work around I/O roadblocks, enable and centralize industrial application virtualization, and find a way to address hardware scaling and cost requirements, all elegantly enough to support the old, the current, and the new end-point technologies.

As the VirtualStorm team observed the challenges discussed above, they identified key ways to make a difference. We’ll briefly touch on their innovative approach, and we’ll report more as we observe the technology in production in the field.

1. Density: VirtualStorm claims at minimum 3× the VM density per server compared to any other VDI solution. They recommend 150-225 VMs per dual-socket, quad-core server with 64 GB of RAM, under a moderate-to-heavy load (40-60 processes running per VM and all 150-225 VMs running concurrently). How do they do it? First, they use significantly less RAM per VM (only 384 MB); second, they reduce CPU utilization by removing the overhead created by serving applications over the network interface. The I/O requirements of highly virtualized environments tax the host CPU because of network processing. In traditional, scalable environments this I/O was disk-based, and VirtualStorm returns it to disk by offloading the processing from the host CPU to the more-than-capable Fibre Channel interfaces. VirtualStorm leverages a proprietary I/O driver that allows an application to be directed instantaneously from a fast disk repository to the VM over Fibre Channel. They plug into a Symantec Workspace Virtualization repository and enhance its existing redirection functionality, effectively routing the host OS I/O directly to the application store via Fibre Channel instead of streaming it over the network path. Moving traffic off the NIC reduces CPU overhead, and this reduction, combined with the reduced RAM per VM, allows for 3× the number of VMs per server. With consistent hardware and load, server CPU utilization dives from the high 90s down to below 30 percent, leaving the host hardware to scale out density with much more predictability. Very impressive!
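
As a back-of-the-envelope sanity check on those density figures (our arithmetic, not the vendor’s; the 4 GB host reservation and the 1.5 GB traditional per-VM sizing are our assumptions):

```python
# Back-of-the-envelope check on the quoted VM density (our arithmetic).
# The 4 GB host/hypervisor reservation and the 1.5 GB "traditional"
# per-VM sizing are our assumptions; 384 MB is VirtualStorm's figure.

HOST_RAM_MB = 64 * 1024            # 64 GB dual-socket quad-core host
HYPERVISOR_RESERVE_MB = 4 * 1024   # assumed host/hypervisor overhead

TRADITIONAL_VM_MB = 1536           # assumed typical desktop VM RAM
VIRTUALSTORM_VM_MB = 384           # claimed per-VM RAM footprint

usable = HOST_RAM_MB - HYPERVISOR_RESERVE_MB
print(usable // TRADITIONAL_VM_MB)   # -> 40 VMs per host
print(usable // VIRTUALSTORM_VM_MB)  # -> 160 VMs per host
```

RAM alone gets you to roughly 160 VMs, squarely inside the quoted 150-225 range; the upper end of that range depends on the CPU headroom reclaimed by taking application I/O off the network path.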

2. Hardware requirements: Cost runs rampant due to hosted-OS memory requirements, image size requirements, and user profile sprawl. VirtualStorm minimizes the memory and disk requirements per user through efficient design, and common data across OS, user, and application is centralized. To achieve this, VirtualStorm delivers several technical breakthroughs. First, as mentioned above, a memory reduction per hosted VM from 1.5 GB down to 384 MB; second, a static host image of 1.2 GB that centralizes and abstracts all common OS data, plus a small page-swap area. VirtualStorm’s “MES” (Memory Enhancement Stack) allows you to plan on a fixed 2.2 GB image regardless of how large the image actually is. The user thinks everything is local on their “D” drive, but it is not; VirtualStorm simply points the user to a seamless central repository of applications, again leveraging Symantec Workspace Virtualization. This last piece is so powerful because it can be leveraged across all existing end-points as well, providing immediate renewed savings and ROI on your existing environment. VirtualStorm has done a terrific job of isolating the static and common data that makes up an end-point. Abstracted from the user experience, VirtualStorm delivers only unique data to the user, while common OS, application, and profile data is served from a central location, again drastically reducing hardware sprawl and increasing efficiency and scalability across the enterprise.
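
To see why a fixed per-VM footprint matters at scale, here is a quick illustrative comparison; the 20 GB traditional per-user image is our assumption for contrast, while the 2.2 GB figure is the one quoted above:

```python
# Illustrative storage comparison for a 1,000-seat deployment.
# The 20 GB "traditional" per-user image is our assumption for contrast;
# the fixed 2.2 GB per-VM footprint is the figure quoted above for MES.

USERS = 1000
TRADITIONAL_IMAGE_GB = 20.0   # assumed full per-user OS image
MES_IMAGE_GB = 2.2            # fixed image plus small page-swap area

print(f"Traditional images: {USERS * TRADITIONAL_IMAGE_GB:,.0f} GB")  # 20,000 GB
print(f"Fixed MES images:   {USERS * MES_IMAGE_GB:,.0f} GB")          # 2,200 GB
# Common OS and application data is served once from the central
# repository rather than duplicated into every image.
```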

3. Management: One management interface for historic, current, and future end-point users and computing would speed adoption of VDI and application virtualization by reducing TCO, removing migration pain, and providing a plausible end-point roadmap. VirtualStorm integrates directly into Active Directory to provide any AD end-point or user with application and infrastructure resources. One interface provides for all end-point and application access, enablement, patching, and deployment. This is no small task when we consider that the tier-one VDI stacks still require three or four interfaces to do the same!
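
To illustrate the AD-driven model described above, resolving a user’s group memberships and mapping them to application sets might look like the sketch below, in Python with the ldap3 library. This is our conceptual sketch, not VirtualStorm’s actual API; the server, service account, base DN, and group-to-application mapping are all hypothetical.

```python
# Conceptual sketch of AD-driven application assignment (not
# VirtualStorm's actual API). Server, credentials, base DN, and the
# group-to-application mapping below are hypothetical placeholders.
from ldap3 import Server, Connection, SUBTREE

APP_SETS = {  # hypothetical group-to-application mapping
    "CN=Finance,OU=Groups,DC=corp,DC=example": ["Excel", "SAP GUI"],
    "CN=Engineering,OU=Groups,DC=corp,DC=example": ["Visual Studio"],
}

# Bind to a domain controller with a service account.
conn = Connection(Server("dc01.corp.example"),
                  user="CORP\\svc_vdi", password="***", auto_bind=True)

# Look up the user's group memberships in the directory.
conn.search("DC=corp,DC=example",
            "(sAMAccountName=jsmith)",
            search_scope=SUBTREE,
            attributes=["memberOf"])

groups = conn.entries[0].memberOf.values if conn.entries else []
apps = [app for g in groups for app in APP_SETS.get(g, [])]
print(apps)  # applications to provision for this user's desktop
```

The point is that entitlement lives in a directory the enterprise already maintains, so no parallel user database is needed for old or new end-points.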

VirtualStorm has delivered a terrific solution for enterprises faced with existing VDI implementation challenges, or for those gun-shy about beginning to leverage these empowering virtualization techniques. They’ve correctly identified several of the key technical bottlenecks in VDI and packaged intelligent technical solutions in a one-of-a-kind central management interface that supports multiple hypervisors, multiple end-points, and both streaming and redirection methodologies.

If you’re considering VDI or currently having a challenge with it, we highly recommend piloting their software solution. If you’re already working with Symantec’s Workspace Virtualization and application virtualization technology base, you would be remiss not to overlay it with this well-designed and complementary feature-set.

Link to Optimizing VDI over the WAN

Here’s the third and last online webinar; for this one, Michael teamed up with Quest and the Yankee Group.

Deploying Desktop Virtualization. Webinar with Yankee Group, Quest Software and Expand Networks

Featured Speakers:

  • Phil Hochmuth, Senior Analyst, Yankee Group
  • Paul Ghostine, Vice President and General Manager of the Desktop Virtualization Group, Quest Software
  • Michael Cucchi, Sr. Director of Product Marketing, Expand Networks

Virtualized Infrastructure in a box!

Our engineers are just completing a pre-packaged virtualization infrastructure solution for the small-to-medium-sized business. Rack-mountable or free-standing, the systems support complete redundancy and come with three levels of pre-customization: Turn-Key (fully configured), Built (fully installed and ready for configuration), and Uninstalled.

The environment supports VMware vSphere and Citrix Xen environments and is a perfect fit for small-to-medium VDI implementations along with smaller-scale server requirements.

Experience the flexibility, ease of management, and increased scalability of a virtual infrastructure today!

ATG Founder Michael Cucchi lectures at VMUG Maine, Thursday, July 23rd, 2009

Catch Michael Cucchi speaking at the Northeastern VMware Users Group meeting this July 23rd in Brunswick, Maine!

Michael will be covering Virtual Desktop Infrastructures, Traditional Server Based Computing, and Licensing In A Virtual Environment.

Topics Include:

  • What is VDI?
  • How to determine when to utilize VDI
  • When VDI is not a good fit
  • Licensing efficiencies in virtual environments
  • End-point Assessment Strategies