Enabling “the cloud” and the Cloud Data Management Interface (CDMI) standard

Altitude Technology Group prides itself on providing “a higher perspective” on technology trends. On the macro level, to us that means observing technology trends by decade rather than by month. Observing the market over the past 10 years, it’s obvious that we are completing a massive consolidation phase. More relevantly, over the past five years maturing burst-innovation technologies have generated new market momentum, and the big fish have circled and swallowed most of the front-runners in an attempt to grow slowing revenues, penetrate new markets, and capture a full-stack “soup to nuts” solution for end-users.

The bad news is that when this happens, those leading products’ performance, support, and development generally crawl to a stop while the corporations and suits figure out who to fire, what to keep, and what to throw out. At best, a few years later you end up with a duct-taped and poorly integrated implementation that has lost all of its unique features and value.

Our current market, however, is even more out of whack. The burst “market opportunities” of what was perceived as “VDI” and “cloud” computing didn’t actually exist. Vendors rushed out to create companies servicing them, ignoring the fact that the end-user market hadn’t digested the technologies, hadn’t matured enough to adopt them, and in most cases hadn’t even defined the terms themselves.

Over the past weeks we’ve been deep-diving into a number of soon-to-be-released cloud storage, computing, and virtualization products.

Who’s running “VDI” in the “CLOUD”?
Who’s leveraging a “fully dynamic, resource-responsible, virtualized, on-demand computing environment”? Nobody. And at the heart of it, what’s the huge difference between VDI and more traditional methods of Server Based Computing? Not all that much. Maybe 5% of the large enterprises in the world are pushing the cutting edge in these areas. Why? Because for most of the world, those are marketing terms, in most cases pushing products that should be in beta or a first-version iteration at best. But they’re effective marketing engines. They’re here before their time, and they’re imploring technologists everywhere to get with it or fall into uselessness and unemployment.

While VDI solutions and cloud services certainly exist, the great divide between adoption and rejection still resides in the nitty-gritty. Can an enterprise currently reliant on traditional computing seamlessly leverage these next-generation technologies? Afraid not. In fact, to crowbar, slap on, and crazy-glue a solution together today would be near insanity for most production shops; hence the perpetual science projects and lack of true traction for these obviously powerful infrastructure tools.

The year 2010 – “Enter the Enablers”

The truth is there is a market opportunity, because these technologies do hold genuine merit. The problem is the gap between the infrastructure of today and the ability to leverage the infrastructure of tomorrow. Over the past 18 months, a flurry of new start-ups and repositioned software, appliance, storage, and hardware products have all targeted the enablement of virtualization and cloud technologies.

This is a good thing for end-users. The science project is being completed by these new entrants, providing real avenues to begin leveraging these next-generation technologies.

Over the next decade this will be delivered in two formats:

“The Cloud”

– The Swiss Army knife:
A slew of product solutions will hit the market over the coming months providing heterogeneous support for cloud service provider products. Get your data to these devices and they will take care of the rest. Integration with SaaS and cloud providers is delivered through baked-in APIs, leaving the end-user to configure once, deposit data, VMs, and applications, and forget. Or so they say. We’re tracking no fewer than six impressive vendor technologies in this space, and we’ll introduce each as they hit the market and begin to prove out their value propositions. (For example, today Google assigns a team of developers to any enterprise Google Apps customer. This isn’t just to migrate customers off of existing “trusted” traditional computing; it’s also to provide added support and credibility to the cloud offering.)

-The standardized approach
With full acceptance that end-users must be led to water, standards bodies and protocol definitions are being built to enable leveraging the cloud and cloud providers. We’re paying close attention to the Cloud Data Management Interface (CDMI) standard from SNIA and similar protocol standardization efforts. Vendors that support these established protocols will have an early leg-up on cloud infrastructure delivery. (For example, within 12-18 months Google and Amazon will simply have to provide support for emerging standards; the rest will be plug-and-play vendor product support. This will provide a solid foundation for a more intense and efficient migration to cloud technologies where applicable.) Furthermore, this will lead to the cloud being leveraged independently of hypervisor-enabled or hypervisor-supported environments. Any IT function could technically interface with these new standards to seamlessly leverage an external and/or distributed flexible resource (the cloud).

From SNIA.org

“The Cloud Data Management Interface defines the functional interface that applications will use to create, retrieve, update and delete data elements from the Cloud. As part of this interface the client will be able to discover the capabilities of the cloud storage offering and use this interface to manage containers and the data that is placed in them. In addition, metadata can be set on containers and their contained data elements through this interface.

This interface is also used by administrative and management applications to manage containers, accounts, security access and monitoring/billing information, even for storage that is accessible by other protocols. The capabilities of the underlying storage and data services are exposed so that clients can understand the offering.”
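
To make the CDMI model a bit more concrete, here is a minimal sketch of what talking to a CDMI-style RESTful endpoint could look like from client code. The endpoint URL, credentials, and container name are hypothetical, and the headers and content types reflect our reading of the SNIA draft specification, so verify against the ratified standard before building on it.

```python
# Minimal, hypothetical sketch of a CDMI-style client interaction.
# Endpoint, credentials, and container names are placeholders; headers and
# content types follow our reading of the SNIA CDMI draft specification.
import json
import requests

CDMI_ENDPOINT = "https://cloud.example.com/cdmi"   # hypothetical provider endpoint
AUTH = ("demo_user", "demo_password")              # placeholder credentials
HEADERS = {"X-CDMI-Specification-Version": "1.0"}

# 1. Create a container (roughly analogous to a folder or bucket).
resp = requests.put(
    f"{CDMI_ENDPOINT}/backups/",
    auth=AUTH,
    headers={**HEADERS, "Content-Type": "application/cdmi-container"},
    data=json.dumps({"metadata": {"department": "finance"}}),
)
resp.raise_for_status()

# 2. Store a data object in the container, with its own metadata.
resp = requests.put(
    f"{CDMI_ENDPOINT}/backups/report.txt",
    auth=AUTH,
    headers={**HEADERS, "Content-Type": "application/cdmi-object"},
    data=json.dumps({
        "mimetype": "text/plain",
        "metadata": {"retention_class": "7-years"},
        "value": "quarterly numbers go here",
    }),
)
resp.raise_for_status()

# 3. Discover what the offering supports (capabilities discovery).
caps = requests.get(
    f"{CDMI_ENDPOINT}/cdmi_capabilities/",
    auth=AUTH,
    headers=HEADERS,
).json()
print(caps)
```

The point of the standard is that this same handful of verbs and content types should work against any conforming provider, which is exactly the plug-and-play interchangeability described above.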

“VDI”

-The Swiss Army knife:
A group of “end-point management” technologies will release heterogeneous support for flexible end-point computing delivery. Most “VDI” opportunities today, when handled correctly, become a mixture of heterogeneous vendor technologies; building successful, true VDI implementations is still somewhat of an art, requiring point solutions working in conjunction. Most enterprises find that 50-70% of “what VDI means” to them is end-point management, user management, application management, and data management: abstracting the components that combine to deliver end-point computing (profile, OS, application) into independently manageable parts. Near-term solutions will wrap all of that up with a bow and provide a unified interface to manage the multiple components delivering the solution; a rough sketch of that layered model follows below.
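
As a purely illustrative sketch (our own toy model, not any particular vendor’s), the payoff of that abstraction is that a delivered desktop becomes the composition of independently managed layers, each of which can be versioned, patched, and assigned on its own:

```python
# Illustrative only: a toy model of an end-point composed from
# independently managed layers (OS, applications, user profile).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Layer:
    name: str          # e.g. "Windows 7 base image", "Office suite", "Roaming profile"
    kind: str          # "os" | "application" | "profile" | "data"
    version: str       # patched and updated independently of the other layers

@dataclass
class Desktop:
    user: str
    endpoint: str      # "thin", "thick", "laptop", "mobile", ...
    layers: List[Layer] = field(default_factory=list)

    def assemble(self) -> str:
        """Compose the delivered desktop from its independently managed parts."""
        parts = ", ".join(f"{l.name} ({l.kind} v{l.version})" for l in self.layers)
        return f"{self.user}@{self.endpoint}: {parts}"

desktop = Desktop(
    user="j.smith",
    endpoint="thin",
    layers=[
        Layer("Windows 7 base image", "os", "1.4"),
        Layer("Office suite", "application", "2010-sp1"),
        Layer("Roaming profile", "profile", "2010-01-15"),
    ],
)
print(desktop.assemble())
```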

-The standardized approach
In a few years, once the components of delivering end-point IT productivity are abstracted, they can truly be delivered independently through standard delivery mechanisms. Thin, thick, laptop, and mobile end-points will be centrally managed AND provisioned, leveraging the correct delivery stack where appropriate. The key is unified management, unified provisioning, and efficiencies of scale. Newly ratified protocols fit to deliver this new content over existing networks will quickly gain market and product support, extending this delivery flexibility to the entire enterprise.

Benefits:
Here’s the good news. This “enablement” phase in the market will serve as a mini-innovation phase, finally delivering on multiple promises of technology value to the market in tangible, production-ready implementations.

Short Term:
End users will leverage these technologies where applicable. Net-new buildouts that fit the requirements can and should be based on these soon-to-be-standard delivery models. Over-arching management suites will be leveraged to enable consolidated management of heterogeneous environments. Server-based computing delivery will increase as end-point productivity is defined and supported through traditional (SBC) and next-generation delivery (VDI, application virtualization).

Long Term:
Early adopters will enjoy mainstream support for their environments, while the hesitant and risk-averse will gain built-in support through traditional products maturing to support these new standards and associated delivery protocols (unified end-point flexibility).


“VirtualStorm” aims to solve density, scaling, and management issues of large scale VDI implementations.

New agent software combined with Symantec Workspace Virtualization reduces VDI memory and disk footprints, addresses I/O bottlenecks, and centralizes management.

Scalability, cost, and management are the three major roadblocks to mainstream adoption of VDI in the enterprise. Everyone is sold on the idea, sold on the paradigm shift, and sold on the concept, but when it comes to medium-to-large-scale production deployments (+/-1,000 clients), you can count the functional installations worldwide on two hands.

We recently had the opportunity to deep dive into a new VDI enabler named VirtualStorm that is aiming to change that, here and now.

The underlying technical challenges of VDI are the same technical challenges of most significant virtualized environments. Aggregate I/O as we scale out limits the maximum densities we can achieve while still providing responsible and predictable performance to the virtual hosts. In VDI, the problem is complicated further by the massive amounts of common OS and unique user data (storage requirements), by management challenges, and, probably most daunting, by a paradigm shift that in most cases involves a completely separate environment, separate design, and separate (usually multiple) management interfaces. In most cases this duality means your traditional computing users can’t leverage ANY of the benefits of next-generation virtualization, and it becomes all or nothing.

Compound those technical issues with a frantic and noisy VDI space filled with vendors clamoring for your attention and claiming to have the “complete” VDI “stack solution” for the future, and consumers just ain’t buying! And rightly so; at times like this it’s usually better to duck and cover, emerging with budget dollars after the smoke clears and at least a short list of vendors remains with duct-taped, but functional, implementations.

Superior Data Solutions (a young company known for finding and evangelizing solutions before they are “mainstream”) just completed a deep-dive demonstration with us of their newest product. They’ve partnered their I/O expertise with a company in the Netherlands (DinamiQs) that built an agent to simplify and solve the issues of large-scale VDI. The result is an extremely smart VDI package that hits all the key roadblocks and delivers what might just be the silver bullet for large VDI adoption challenges. Dubbed “VirtualStorm”, this VDI software agent claims to deliver on scalability, drastic reductions in storage and servers, and central management of ALL your end-points – not just your shiny new VDI users. To achieve this they had to work around I/O roadblocks, enable and centralize industrial application virtualization, and find a way to address hardware scaling and cost requirements, all elegantly enough to support the old, the current, and the new end-point technologies.

As the VirtualStorm team observed the challenges discussed above, they identified key ways to make a difference. We’ll briefly touch on their innovative approach and report more as we observe the technology in production in the field.

1. Density: VirtualStorm claims at minimum 3x the VM density per server compared to any other VDI solution. They recommend 150-225 VMs per dual-socket, quad-core server with 64GB of RAM, under a moderate-to-heavy load (40-60 processes running per VM, with all 150-225 VMs running concurrently). How do they do it? First, they use significantly less RAM per VM (only 384MB); second, they reduce CPU utilization by removing the overhead created by serving applications over the network interface. The I/O requirements of highly virtualized environments hammer the host CPU because of network processing demands. In traditional, scalable environments this I/O was disk-based, and VirtualStorm returns it to disk by offloading the processing to the more-than-capable Fibre Channel interfaces and off the host CPU. VirtualStorm leverages a proprietary I/O driver that allows an application to be directed instantaneously from a fast disk repository to the VM over Fibre Channel. They plug into a Symantec Workspace Virtualization repository and enhance the existing redirection functionality, effectively routing the host OS I/O directly to the application store via Fibre Channel instead of streaming it over the network path. Moving traffic off the NIC reduces CPU overhead; that reduction, combined with the reduced RAM per VM, allows for 3x the number of VMs per server, as the rough arithmetic below illustrates. With consistent hardware and load, server CPU utilization dives from the high 90s down to below 30 percent, leaving the host hardware to scale out density with much more predictability. Very impressive!
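
As a rough back-of-the-envelope check on those density claims (our own arithmetic, not vendor-published math), the per-VM memory reduction alone accounts for most of the headroom:

```python
# Back-of-the-envelope VM density estimate based on the figures quoted above.
# The hypervisor overhead reservation is our own assumption, not a vendor figure.
HOST_RAM_GB = 64
HYPERVISOR_OVERHEAD_GB = 4          # assumed reservation for the hypervisor itself
TRADITIONAL_VM_RAM_MB = 1536        # ~1.5GB per desktop VM, typical at the time
VIRTUALSTORM_VM_RAM_MB = 384        # per-VM footprint claimed by VirtualStorm

usable_mb = (HOST_RAM_GB - HYPERVISOR_OVERHEAD_GB) * 1024

traditional_density = usable_mb // TRADITIONAL_VM_RAM_MB
virtualstorm_density = usable_mb // VIRTUALSTORM_VM_RAM_MB

print(f"Traditional footprint:  ~{traditional_density} VMs per host")   # ~40
print(f"VirtualStorm footprint: ~{virtualstorm_density} VMs per host")  # ~160
print(f"Memory-only improvement: ~{virtualstorm_density / traditional_density:.1f}x")
```

Memory alone suggests roughly 160 VMs per 64GB host, squarely inside the quoted 150-225 range; in practice CPU and aggregate I/O, not RAM, become the limiting factors, which is exactly why the Fibre Channel redirection matters.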

2. Hardware Requirements: Cost runs rampant due to hosted OS memory requirements, image size requirements, and user-profile sprawl. VirtualStorm minimizes memory and disk requirements per user through efficient design, and common data across the OS, users, and applications is centralized. To achieve this, VirtualStorm delivers several technical breakthroughs. First, as mentioned above, a memory reduction per hosted VM from 1.5GB down to 384MB; second, a static host image size of 1.2GB that centralizes and abstracts all common OS data, plus a small page-swap area. VirtualStorm’s “MES” (Memory Enhancement Stack) allows you to plan on a fixed 2.2GB image regardless of how large the image actually is. The user thinks everything is local on their “D” drive, but it is not; VirtualStorm simply points the user to a seamless central repository of applications, again leveraging Symantec Workspace Virtualization. This last piece is so powerful because it can be leveraged across all existing end-points as well, providing immediate renewed savings and ROI on your existing environment. VirtualStorm has done a terrific job of isolating the static and common data that makes up an end-point. Abstracted from the user experience, VirtualStorm delivers only unique data to the user, while common OS, application, and profile data is served from a central location, again drastically reducing hardware sprawl and increasing efficiency and scalability across the enterprise. The quick storage comparison below shows why the fixed image size matters at scale.
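
To put that fixed 2.2GB planning figure in perspective, here is a simple comparison against a hypothetical traditional full-clone desktop image. The 30GB per-desktop baseline and 500GB central repository are our own illustrative assumptions, not numbers from VirtualStorm or Symantec:

```python
# Illustrative storage comparison for a 1,000-seat VDI deployment.
# The traditional image size and shared repository size are assumed baselines.
SEATS = 1000
TRADITIONAL_IMAGE_GB = 30          # assumed per-desktop full-clone footprint
VIRTUALSTORM_IMAGE_GB = 2.2        # fixed per-desktop planning figure quoted above
SHARED_REPOSITORY_GB = 500         # assumed central OS/application store

traditional_total = SEATS * TRADITIONAL_IMAGE_GB
virtualstorm_total = SEATS * VIRTUALSTORM_IMAGE_GB + SHARED_REPOSITORY_GB

print(f"Traditional full clones: ~{traditional_total / 1024:.1f} TB")   # ~29.3 TB
print(f"VirtualStorm model:      ~{virtualstorm_total / 1024:.1f} TB")  # ~2.6 TB
```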

3. Management: One management interface for historic, current, and future end-point users and computing would speed adoption of VDI and application virtualization by reducing TCO, removing migration pain, and providing a plausible end-point roadmap. VirtualStorm integrates directly into Active Directory to provide any AD end-point or user with application and infrastructure resources. One interface provides all end-point and application access, enablement, patching, and deployment. This is no small task when we consider that the tier-one VDI stacks still require three or four interfaces to do the same!

VirtualStorm has delivered a terrific solution for enterprises faced with existing VDI implementation challenges, or those gun-shy about beginning to leverage these empowering virtualization techniques. They’ve correctly identified several of the key technical bottlenecks to VDI and packaged intelligent technical solutions in a one-of-a-kind central management interface that supports multiple hypervisors, multiple end-points, and both streaming and redirection methodologies.

If you’re considering VDI or currently having a challenge with it, we would highly recommend piloting their software solution. If you’re already working with Symantec’s Workspace Virtualization and application virtualization technology base, you would be remiss not to overlay it with this well-designed and complementary feature set.

WAN Optimization Update – Riverbed Positioning and Rumors of an Expand Buyout?

See here for an article at “The Globes” (an Israeli business analyst site) discussing talks between Riverbed and Expand Networks. It seems, according to The Globes, that a potential buyout is looming. If it came to fruition, it would represent still further consolidation of the WAN optimization marketplace. In honor of the rumors, I figured I’d share some thoughts I wrote about Riverbed and add some assumptions on the potential Expand acquisition. (Please note: any commentary on a Riverbed/Expand acquisition is based entirely on independent opinion and hypothesis. Our reposting of the above article is not ATG endorsing or confirming that the article is accurate.)

Riverbed’s unique technical offering, strategic acquisitions, and product roadmap vision will all continue to contribute to their continued success. A strong ROI justification, with payback regularly in under six months, has assisted Riverbed’s success as corporations adjusted to the economic environment of the past months.

The Technology:

Riverbed’s technology offers a market-leading wide-area acceleration implementation, and their solution leads in the majority of WAN optimization opportunities worldwide. Specifically, Riverbed has enjoyed broad success in the design, manufacturing, law, and Fortune 500 market segments, delivering extremely impressive compression results, especially on large data moves and “enterprise applications”.

Riverbed’s solution was the first to utilize application protocols to accelerate end-user performance in branch offices. In addition to traditional network caching and acceleration, Riverbed pioneered application-level acceleration. Operating at both the network layer and the application layer created new performance gains and opened new potential for delivering IT to distributed organizations globally. To this day, that breakthrough leads them to outperform most other solutions on inefficient but business-critical applications like Exchange, Office, and MS file sharing (CIFS).
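
To give a flavor of what network-level data reduction means in practice, here is a toy illustration of segment-based byte caching in general, not Riverbed’s actual proprietary algorithm: repeated chunks of traffic are replaced with short references that the far-side appliance expands from its own cache.

```python
# Toy illustration of segment-based byte caching for WAN data reduction.
# Generic sketch of the technique only, not any vendor's implementation.
CHUNK_SIZE = 64  # bytes; real systems use content-defined, variable-size chunks

def compress(data: bytes, cache: dict) -> list:
    """Replace chunks already seen by both sides with short references."""
    out = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hash(chunk)
        if key in cache:
            out.append(("ref", key))          # send a tiny reference
        else:
            cache[key] = chunk
            out.append(("raw", chunk))        # send the chunk once
    return out

def decompress(tokens: list, cache: dict) -> bytes:
    """Rebuild the original stream on the far side from refs and raw chunks."""
    result = b""
    for kind, value in tokens:
        if kind == "raw":
            cache[hash(value)] = value
            result += value
        else:
            result += cache[value]
    return result

sender_cache, receiver_cache = {}, {}
payload = b"the same office document header " * 20   # highly repetitive traffic
tokens = compress(payload, sender_cache)
assert decompress(tokens, receiver_cache) == payload
raw_count = sum(1 for kind, _ in tokens if kind == "raw")
print(f"{len(tokens)} chunks sent, only {raw_count} as raw data")
```

Layering protocol-specific optimizations (fewer CIFS or MAPI round trips, for example) on top of this kind of byte-level reduction is what the combined network-plus-application approach refers to.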

Their technology has been hampered and under-performing in only a few key markets, due to design choices and missing features: 1) Riverbed’s technology is based on hard-drive caches, which adds some latency to packets as they pass through the devices. In most cases this effect is more than compensated for by its high-performance compression, but it severely hampers their ability to improve latency-sensitive and “interactive” application traffic (VoIP, Citrix/SBC, and VDI, to name a few). 2) Lack of industrial-strength QoS (discussed below in the Mazu acquisition details). 3) Lack of support for a standardized TCP acceleration protocol (partially remedied by a tacked-on OEM of Global Protocol’s SCPS implementation). The last of these has severely impacted Riverbed’s ability to penetrate the public sector, satellite, and more strict government networks.

Key Acquisitions and OEM Agreements:

In the past two quarters Riverbed has made several newsworthy announcements that validate their vision and execution over the past five years. On January 20th, 2009, Riverbed announced it had acquired Mazu Networks. Mazu Networks, headquartered in Cambridge, MA, USA, developed a software product dubbed the “Mazu Profiler”, which provides application visibility, performance management, improved threat and compliance management, and CMDB discovery. The product will help address Riverbed’s lack of Quality of Service functionality and also provide some value-add to service providers by reporting usage, performance issues, and security threats in alerts and detailed reports. This functionality will give Riverbed more competitive leverage against the Blue Coat and Cisco product lines, which are now challenging Riverbed’s market presence. It will also fill some of the gaps in their QoS implementation, but not all.

On January 26th, 2009, Riverbed subsequently announced integration into the HP ProCurve router product line, representing their first router-based functionality and directly competing with the combined Cisco solution. Riverbed has stated general availability of the blade-hosted offering as the second half of 2009. Details are still slim on this integration; we’re unclear as to its scalability, feature set, and management specifics, and will update the field as more is learned. Riverbed has had a long-term relationship with HP, beginning years ago when they hosted their appliance on HP hardware.

Lastly, as pointed out above, Riverbed has also recently announced the ability to support SCPS-based TCP acceleration on their WAN optimization products through the OEM of Global Protocol’s Skipware. This protocol support will directly assist Riverbed in penetrating the government market segment, where they have been historically weak.

An acquisition of Expand would be quite strategic for Riverbed when you add it all up. First of all, Expand’s technology works at the IP layer and would open UDP traffic optimization to Riverbed (a TCP-based device), should they so choose. It also delivers a very functional and proven memory-based caching implementation along with industrial-strength QoS, opening Riverbed to the interactive and latency-sensitive markets, which include the ever-growing VDI frenzy, P2P protocols, and eventually cloud traffic types. Along with its HP relationship, Riverbed would gain a strategic position with 3Com and China’s H3C, both integrated OEM relationships for Expand. Expand’s definitive area of dominance in government and satellite networks would be a huge win for Riverbed, which has failed to penetrate both for nearly half a decade. Lastly, Expand acquired “Netpriva”, a little-known but very impressive and well-designed client-based optimization solution. While we haven’t worked with the Expand-integrated and redesigned Netpriva product, we were impressed with Netpriva’s original offering and are excited to see it hit the market. This additional client functionality may also help bolster Riverbed’s own client offering, which saw less-than-stellar market adoption. Of course, they could just be taking out a pesky competitor that has carved out a nice niche of late in a few key markets!

Only time will tell, but as we discover any cold, hard facts about an acquisition, we’ll post updates.

Why Compellent is holding strong among the storage giants

ATG was impressed early on with Compellent’s storage solutions. Their long list of included features and more digestible price tag caught our attention several years back. But as we flip our calendars to 2010, we’re impressed at how they’ve IPO’d and continued to perform, enhance, and defend their growing installation base.

Why Compellent? Compellent represents one of the more stable and reliable block-level virtualized storage arrays available. While it’s not everything to everyone, it certainly does what it says and does it extremely well.

Let’s take a closer look at some of their strengths.

Cost Efficiency:
Compellent has focused on cost efficiency in several areas. The product supports three tiers of traditional hard drives and, with a recent announcement, high-performing memory-based (solid-state) drives. The product thin-provisions storage, enabling customers to assign virtually any amount of storage to users while only requiring actual hard drives to be provisioned as physically needed (no more huge volumes sitting barely utilized). A quick illustration of the difference follows below.
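
As a simple illustration of why thin provisioning changes the cost picture (generic arithmetic, not Compellent-specific numbers), compare the capacity you must buy up front under thick versus thin provisioning:

```python
# Generic thin-provisioning illustration; volume sizes are hypothetical.
volumes = [
    {"name": "exchange", "provisioned_gb": 2000, "written_gb": 450},
    {"name": "file_share", "provisioned_gb": 4000, "written_gb": 900},
    {"name": "vdi_pool", "provisioned_gb": 3000, "written_gb": 600},
]

thick_purchase = sum(v["provisioned_gb"] for v in volumes)   # buy it all up front
thin_purchase = sum(v["written_gb"] for v in volumes)        # buy only what's written

print(f"Thick provisioning needs ~{thick_purchase} GB of physical disk today")
print(f"Thin provisioning needs  ~{thin_purchase} GB, adding spindles as data grows")
```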

Performance Efficiency:
Compellent’s unique ability to migrate data at the block level across multiple tiers of storage, and across locations within disk platters, is key to their success. This automatically tailors storage performance to customer requirements without customer interaction: multiple tiers of differently performing and differently priced disk are automatically leveraged according to how the data is used, as the toy example below suggests. This frees customers to buy less high-cost disk without impacting performance, and provides visibility into storage scaling, leading to more informed and efficient upgrades.
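
Conceptually, automated block-level tiering amounts to tracking how recently and how often each block is touched and promoting or demoting it accordingly. The tier names and thresholds below are our own illustrative choices, not Compellent’s actual Data Progression policy:

```python
# Toy sketch of automated block tiering by access frequency; thresholds are illustrative.
def choose_tier(reads_last_week: int) -> str:
    """Place hot blocks on fast, expensive disk and cold blocks on cheap disk."""
    if reads_last_week > 100:
        return "ssd"        # fastest, most expensive tier
    if reads_last_week > 10:
        return "fc_15k"     # mid-tier Fibre Channel spindles
    return "sata"           # slowest, cheapest tier

blocks = {"db_index_blk": 540, "mail_archive_blk": 35, "old_backup_blk": 1}
for name, reads in blocks.items():
    print(f"{name}: {reads} reads/week -> {choose_tier(reads)}")
```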

Scaling Efficiency:
Compellent’s flexible design enables customers to build storage arrays to fit their needs. Mixed drives, mixed connectivity, high-end array features, and an easy-to-use management interface make Compellent an attractive choice against its more monolithic, inflexible, model-based competition.

Roadmap Vision:
Compellent is planning to announce strong support for VMware infrastructures over the next six to twelve months. Deep integration with VMware should provide some significant and unique benefits, and barriers to entry, in the quickly commoditizing virtualization markets. Their ability to support the growing “cloud” computing trend is extremely important to their continued differentiation and success.

With a long feature-list and now hundreds of happy installations around the globe, Compellent has done a nice job of carving out a spot in the crowded storage market.

Link to Optimizing VDI over the WAN

Here’s the third and last online webinar; for this one, Michael teamed up with Quest and the Yankee Group.

Deploying Desktop Virtualization. Webinar with Yankee Group, Quest Software and Expand Networks

Featured Speakers:

  • Phil Hochmuth, Senior Analyst, Yankee Group
  • Paul Ghostine, Vice President and General Manager of the Desktop Virtualization Group, Quest Software
  • Michael Cucchi, Sr. Director of Product Marketing, Expand Networks

Virtualized Infrastructure in a box!

Our engineers are just completing a pre-packaged virtualization infrastructure solution for small-to-medium-sized businesses. Rack-mountable or free-standing, the systems support complete redundancy and come with three levels of pre-customization: Turn-Key (fully configured), Built (fully installed and ready for configuration), and Uninstalled.

The environment supports VMware vSphere and Citrix Xen environments, and is a perfect fit for small-to-medium VDI implementations along with smaller-scale server requirements.

Experience the flexibility, ease of management, and increased scalability of a virtual infrastructure today!

ATG Founder Michael Cucchi lectures at VMUG Maine Thursday July 23rd, 2009

Catch Michael Cucchi speaking at the northeastern VMware users’ group meeting this July 23rd in Brunswick, Maine!

Michael will be covering Virtual Desktop Infrastructures, Traditional Server Based Computing, and Licensing In A Virtual Environment.

Topics Include:

  • What is VDI?
  • How to determine when to utilize VDI
  • When VDI is not a good fit
  • Licensing efficiencies in virtual environments
  • End-point Assessment Strategies