A year ago I wrote a post on The Evolution of Video Conferencing, from the AT&T Labs’ Picturephone built in 1956 to the latest and greatest endpoints that we have today at our disposal. If you’re reading my blog, you must be well aware of all the great technologies introduced to the video conferencing market in the last few years – high definition video, high definition audio, multiple displays, life-size images, multi-directional microphones, even 3D.
But if you are a video conferencing user, you probably know that there’s one thing that has stayed almost the same throughout most of the past decade; some would say it has even regressed. What I’m talking about is the way we collaborate, or share data, in a video conference.
A few months ago Sasha Ruditsky wrote a post here about data collaboration in video conferencing. Sasha wrote:
“In its simplest form data collaboration is the ability of a conference participant to present content to the rest of the conference participants. More advanced form of the collaboration involves the ability… to perform different actions on the shared content…”
Data Collaboration Abilities – Advances or Regression?
If you compare the amazing evolution (some would say revolution) of the video conferencing business over the past 20 years or so with the advances (some would say regression) in the data collaboration abilities of these endpoints, the contrast is quite striking:
The T.120 standard, published by the ITU-T between 1993 and 1995, tried to create a self-sufficient set of data collaboration tools for a multi-party conference. It offered various applications on top of it, such as the ability to share still images and annotate on top of them, multi-party application sharing, and more.
H.239, which has become the de facto standard for data collaboration, at least in the H.323 world, came a decade later and was designed solely to implement the “dual video” functionality: data on one screen, video on the other. And the data channel was, and still is, just a video rendering of the data (be it a shared desktop or a PowerPoint presentation), which you can share with the other participants.
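To make the “dual video” idea concrete: H.239 itself negotiates the second channel over H.245 signaling in H.323, but the same concept exists in the SIP world, where the presentation stream is labelled with the SDP `content` attribute defined in RFC 4796. A simplified offer (addresses and payload types are illustrative only) carrying one camera stream and one presentation stream might look like:

```
v=0
o=- 0 0 IN IP4 192.0.2.1
s=Dual video call
c=IN IP4 192.0.2.1
t=0 0
m=video 49170 RTP/AVP 96
a=rtpmap:96 H264/90000
a=content:main
m=video 49172 RTP/AVP 97
a=rtpmap:97 H264/90000
a=content:slides
```

Note that the `content:slides` stream is still just encoded video of the shared screen; nothing in the session carries the underlying document itself, which is exactly the limitation discussed next.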
So What Are We Missing?
As Sasha wrote, both T.120 and H.239 are based on the concept of distributing an image from one endpoint to the rest of the conference participants. In this process, which is quite simple and effective, valuable abilities are lost. For instance:
- The metadata of the actual content is not available for the recipients.
- The recipients cannot store the data locally.
- The recipients cannot browse the data, going back, for instance, when something is unclear or forgotten.
- Because the data is sent as real-time video, if you arrive late to the conference or suffer network problems, some of the data may simply be lost.
This means that, as a video conferencing user, even though I am able to see what my colleagues are presenting, that’s about all I can do. The experience is totally passive. Other than watching the beautiful slides, there’s nothing much one can do with the data channel. And it’s really a shame, because it looks really great…
Dual Video on the SCOPIA Desktop.
Is There A Solution?
There are alternatives to H.239 today that allow for better data collaboration. The main problem with these is that they require a real revolution: changing the entire infrastructure and dropping support for existing deployments.
The video conferencing market is standardized by nature. And with the large investments in video conferencing that most companies are making these days, it seems a bit unrealistic that they would drop everything and bet on a new horse.
And who can blame them? A real solution to the problem should be an evolution: build, on top of the existing H.239 data channel, a mechanism that offers the abilities that were left out and forgotten. But do it in a standard way, keeping the solution interoperable and allowing everyone to enjoy it.
Am I dreaming? Not at all. You just wait and see…