Video Latency Explained

What is video latency? The short answer: it’s the time difference, or delay, between the moment an event is captured by a video source and the moment it’s actually displayed on your screen or monitor. With that in mind, let’s compare a standard television broadcast with what you are looking for in a security and surveillance environment. When you’re watching a ‘live’ event on TV, for example, the latency between what is happening on stage at a concert, or on the field at a sports event, and what you see is anywhere between 7 and 30 seconds. Seven seconds is generally the minimum latency a television broadcaster will intentionally add to allow censors to ‘fix’ anything that might go wrong. The balance of any observed latency is dependent on the technologies used, the video signal or resolution, and so on. A High Definition (1080p) or 4K video signal, for example, is a fairly large and complex signal, so it has to be compressed before being sent to you and then decompressed on your end before it can be displayed. All that processing has a cost, and that cost is usually seconds added on top of the original 7-second minimum.

That being said, this is where IONODES’ expertise comes into play. You can now better understand why latency, or the lack thereof, is a priority in security and surveillance applications. Low latency means what you see on your monitor is what is actually happening at that instant, or as close to it as possible. It’s important to understand that unless you are looking directly at a scene with your own eyes, there will always be some latency whenever any type of technology is involved. The goal is to keep that latency as close to non-existent as possible in security and surveillance applications.

As a general rule of thumb, video latency in the security and surveillance industry is measured in milliseconds (ms), that is, thousandths of a second. At this scale, the delay is barely noticeable to the naked eye when comparing the monitor with the actual live scene. A good target in the surveillance industry is a maximum of 250 ms, especially for a PTZ (pan, tilt, zoom) camera, where you are also controlling the camera’s movements remotely.

Although latency can be introduced by any component of the system, that is to say the camera’s image processing, the network, and the receiver or decoder, overall latency is generally measured end to end, often called glass to glass. This means the entire system, from capture to display, is measured as a whole. A very simple way to measure this latency is to aim a video camera at a high-resolution numeric timer (in ms) displayed on a laptop, tablet or smartphone, and position the display monitor close enough that both fit within the same picture frame. Snap a photo of this setup and you can then compare the ‘live’ timer on the laptop, tablet or smartphone with the timer shown on the video monitor within the same photo to calculate the delay.
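The calculation from that photo can be sketched in a few lines. This is a minimal illustration, not an IONODES tool: the two readings below are hypothetical values you would transcribe from a single photo, one from the live timer and one from the same timer as rendered on the surveillance monitor.

```python
def glass_to_glass_latency_ms(live_timer_ms: int, monitor_timer_ms: int) -> int:
    """Glass-to-glass latency: the live reading minus the (older) reading
    shown on the monitor, both transcribed from the same photo."""
    latency = live_timer_ms - monitor_timer_ms
    if latency < 0:
        # The monitor should always lag behind the live timer; a negative
        # result means one of the readings was transcribed incorrectly.
        raise ValueError("monitor reading is newer than live reading")
    return latency

# Hypothetical example: the live timer read 12480 ms when the photo was
# taken, while the monitor still showed 12295 ms.
latency = glass_to_glass_latency_ms(12480, 12295)
print(f"{latency} ms")  # → 185 ms, under the 250 ms surveillance target
```

Repeating the photo a few times and averaging the results helps smooth out variation from the monitor’s refresh cycle and the timer’s own update rate.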