[This is another guest post by Yotam Roch, who manages QA for the Technology Business Unit. In his previous post, he shared how we test the VC240's remote control. This time he shares his experience testing video codecs.]
In regular day-to-day use, most video conferencing endpoints provide a “talking heads” experience: someone speaking to the camera, which is focused mainly on the head and shoulders, usually with only slight movements within the frame. Despite that, and as part of the intense test plan we have for our endpoints (the BEEHD Voice and Video Client Frameworks and the SCOPIA VC240), I was looking for a way to put them under stress and test their “stability”. More precisely, I don’t mean stressing the whole application, but rather the video codec itself.
Stressing the video codec of an endpoint means that the input from the video source should feature content with a lot of movement, for extended periods. Over the past few years, we’ve built a set of methods to accomplish this and test the different aspects of the codec.
The “Hand Waving”
Point the camera at yourself and make long movements with your hands. The problem is that after about two minutes you get tired and stop.
(BTW – I’ve seen our media experts perform the same ritual of waving and clapping their hands in different directions every time they step in front of a camera. The difference is that within a few seconds they know exactly what the problem is – in the motion vectors, the quantization factor, the rate control, or whatever.)
Me, in a call to an MCU, making exhausting movements with my hands (before I got tired…)
The “Watch TV”
Point the camera at a PC or TV monitor that is playing a movie. This option is quite poor because of the low quality of video captured by a camera pointed at a screen.
My webcam is watching a nature movie on a PC
The “Street Traffic”
Place the camera next to a window and point it at a street with traffic. The movement of cars in straight lines, in opposite directions, and at relatively constant speed can put the endpoint under some stress and reveal frame-rate and artifact-related problems.
My webcam and my street view
The “Video Source”
This is the option that can cover all the others. The idea is to use a video source that is handled directly by the endpoint. This way you can use any clip – from “talking heads” footage to “action” movies, a clip of someone waving their hand, or a recording of street traffic. You can prepare a playlist that contains a single clip or several clips, of the same type or a mix of types, and play it in a loop. This can be done in several ways, depending on the endpoint type:
- Standalone endpoints such as room systems, usually have an HDMI or RGB input which can be fed with various video content from a media streamer.
- Software clients that use devices and peripherals from the hosting PC can use a “virtual webcam” such as manyCam, VCam, or VirtualCamera. These programs emulate a webcam and can be fed with different inputs: video clips, DVD video, web streams, etc.
Our XT1000 unit being fed with video content from a streamer
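The playlist idea above is easy to automate. Here is a minimal sketch in Python that builds a looped M3U playlist mixing the content types mentioned in this post; the clip filenames and the loop count are hypothetical placeholders – substitute clips from your own test library before feeding the playlist to a streamer or virtual-webcam program.

```python
# Sketch: build a looped playlist of mixed-content test clips for a media
# streamer or virtual webcam. The filenames and loop count below are
# hypothetical examples, not part of any real test suite.

def build_playlist(clips, loops):
    """Return M3U playlist lines repeating the given clips `loops` times."""
    lines = ["#EXTM3U"]
    for _ in range(loops):
        lines.extend(clips)
    return lines

clips = [
    "talking_heads.mp4",   # low-motion baseline
    "hand_waving.mp4",     # bursty, irregular motion
    "street_traffic.mp4",  # steady directional motion
    "action_scene.mp4",    # high-motion stress content
]

playlist = build_playlist(clips, loops=100)
with open("codec_stress.m3u", "w") as f:
    f.write("\n".join(playlist) + "\n")
```

Alternating low-motion and high-motion clips in one loop is useful on purpose: the transitions force the rate control to adapt repeatedly, which is exactly the kind of stress a static scene never produces.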
Testing endpoint stability is based on the same idea of using continuously moving video – for example, by running a call for a long period of time. During a long test it is good to have the endpoint working hard, not just capturing images of a static, dark room (although there are some tests where static content is preferred, as it will reveal video drift or video artifacts).
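During such a long run you typically want an automated check that flags frame-rate drops, rather than watching the screen for hours. The sketch below is one hypothetical way to do it in Python: it takes received-frame timestamps (however your harness collects them) and reports windows whose average fps falls below a target; the 30 fps target, 10% tolerance, and window size are made-up test parameters.

```python
# Sketch: flag frame-rate drops during a long stability run from a list of
# received-frame timestamps (seconds). The target fps, tolerance, and
# window size are hypothetical test parameters.

def check_frame_rate(timestamps, target_fps=30.0, tolerance=0.10, window=30):
    """Return start indices of windows whose average fps is below target."""
    bad_windows = []
    min_fps = target_fps * (1.0 - tolerance)
    for start in range(0, len(timestamps) - window, window):
        span = timestamps[start + window - 1] - timestamps[start]
        fps = (window - 1) / span if span > 0 else 0.0
        if fps < min_fps:
            bad_windows.append(start)
    return bad_windows

# Synthetic example: steady 30 fps for 2 seconds, then a drop to 10 fps.
ts = [i / 30.0 for i in range(60)]            # healthy stretch
ts += [2.0 + i / 10.0 for i in range(60)]     # frame rate collapses
print(check_frame_rate(ts))                   # reports the bad window
```

Checking per-window averages rather than a single overall average matters here: a short stall in the middle of an hours-long call would otherwise be averaged away and go unnoticed.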
The best approach is, of course, to combine them all. This is what we do on a daily basis to supply our customers with the most robust endpoints possible for any video type.