GoodVision Video Insights provides greater than 95% accuracy of collected traffic data. This applies to all types of vehicles, bicycles and pedestrians; they can all be in the scene together, it makes no difference. All you have to do to achieve it is provide a quality video input. There are certain aspects of videos which affect the quality of video analytics. So how do you make sure that your videos have sufficient quality to be processed by video analytics software? Is there a way to maximize the quality of your results? Yes there is, and this guideline aims to help you once and for all.

Criterion 1: Camera View

1. Distance from monitored objects

Make sure that the dimensions of the vehicles to be monitored are at least 4-5% of the scene size. That means a vehicle length of around 60 pixels on a 1280 x 720 px scene. Smaller objects might be harder to detect in some cases (this is also affected by other factors such as the lens or blur). Also make sure that an object does not cover a substantial portion (more than 30%) of the scene, or it may be rejected as a false detection. How to solve this? Zoom, crop or get closer.
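If you want to sanity-check this quickly, here is a minimal sketch of the arithmetic. The measured vehicle length is a made-up example and the 4% threshold simply mirrors the recommendation above; neither is a GoodVision parameter.

```python
# Rough check that a vehicle is large enough in the frame to be detected
# reliably. The measured length below is an illustrative value.

FRAME_WIDTH_PX = 1280       # scene width in pixels
MIN_FRACTION = 0.04         # ~4-5% of the scene size recommended

vehicle_length_px = 55      # length of a car measured in your footage

required_px = FRAME_WIDTH_PX * MIN_FRACTION
if vehicle_length_px >= required_px:
    print(f"OK: {vehicle_length_px} px >= {required_px:.0f} px")
else:
    print(f"Too small: {vehicle_length_px} px < {required_px:.0f} px "
          "- zoom, crop or get closer.")
```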

2. Camera height

Camera height itself does not limit the system's detection ability, only your visual experience. The problem arises when the camera is positioned too low, so that objects in the front cover the objects behind them, which are then not detected, not tracked and therefore not collected. It is really simple to fix this issue and achieve great results. If your camera is equipped with a standard 2.8 mm lens, place it between 5 and 35 meters above the ground, depending on how broad a space you want to cover and which traffic participants you want to monitor, e.g. pedestrians.
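To get a feel for why height matters, here is a minimal sketch under a simplified flat-ground, pinhole-camera assumption; the vehicle height, distance and camera heights are made-up illustrative values, not GoodVision requirements.

```python
# Simplified flat-ground model of occlusion: how much road behind a vehicle
# is hidden from the camera, depending on camera height. Illustrative values.

def occluded_length(camera_height_m, vehicle_height_m, vehicle_distance_m):
    """Length of ground hidden behind a vehicle (meters), pinhole model."""
    if camera_height_m <= vehicle_height_m:
        return float("inf")  # camera below the vehicle top: everything behind is hidden
    return vehicle_distance_m * vehicle_height_m / (camera_height_m - vehicle_height_m)

# A 2 m tall van standing 20 m from the camera mast:
for h in (3, 5, 10, 20):
    print(f"camera at {h:>2} m -> {occluded_length(h, 2.0, 20.0):5.1f} m of road hidden")
```

The higher the camera, the shorter the hidden strip behind each vehicle, which is exactly why a low camera causes objects in the back to be lost.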

3. Camera tilt

Camera tilt doesn’t affect the detection ability of GoodVision Video Insights. The system is trained for and versatile across various conditions. You can tilt the camera anywhere from a horizontal view up to a straight-down bird's-eye view if needed.

4. Obstacles

Obstacles are tricky. Some of them seriously affect the detection ability (trees, other cars, buildings, bridges, big traffic signs, ...) while others might not (thin poles, standard traffic signs, wires, ...). The system loses an object while it is covered by an obstacle, and when it appears again it is often considered a new object, which causes the object's trajectory to be split.

Criterion 2: Lens Quality and Light Conditions

1. Lens quality and distortion

  • A poor lens, or one that is dirty, scratched or smudged: causes a blurry image, removes object contours or deforms them
  • Raindrops on the lens: distort the image or cause light reflections, acting like a physical obstacle
  • Barrel distortion: deforms and bends the objects, making them look nothing like what the system is trained for (a correction sketch follows this list)
  • Frontal light causing flare or reflection: washes out the image and decreases object clarity
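Barrel distortion can often be reduced in pre-processing before the video is analyzed. The following is a rough sketch using OpenCV's undistort; the camera matrix and distortion coefficients are placeholders that would in practice come from calibrating your specific camera and lens, and this is not a built-in GoodVision step.

```python
# Rough sketch: correcting barrel distortion with OpenCV before analysis.
# The calibration values below are placeholders, not real measurements.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")              # any still frame from your footage
h, w = frame.shape[:2]

camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float32)
dist_coeffs = np.array([-0.3, 0.1, 0, 0, 0], dtype=np.float32)  # k1, k2, p1, p2, k3

undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
cv2.imwrite("frame_undistorted.jpg", undistorted)
```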

2. Scene lighting

Scene lighting plays an important role in video analytics as well; however, GoodVision Video Insights is trained to recognize objects even in the dark. The only condition is that the objects must be at least a little bit illuminated, so that they are visible in the image with the naked eye.
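If you are not sure whether your night footage passes the naked-eye test, a rough brightness check like the sketch below can help. The file name, number of sampled frames and threshold are arbitrary examples; visibility of the objects themselves remains the real criterion.

```python
# Sample a few frames from a night-time clip and report their mean brightness.
# The threshold is an arbitrary example, not a GoodVision requirement.
import cv2

cap = cv2.VideoCapture("night_clip.mp4")
brightness = []
ok, frame = cap.read()
while ok and len(brightness) < 50:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    brightness.append(gray.mean())
    ok, frame = cap.read()
cap.release()

avg = sum(brightness) / len(brightness) if brightness else 0.0
print(f"Average brightness: {avg:.1f}/255")
if avg < 20:
    print("Footage is very dark - check whether objects are visible by eye.")
```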

Criterion 3: Video Resolution

1. Resolution

The equation here is simple: “SHIT IN -> SHIT OUT” (sorry for the language). The more image data (pixels) you provide to the system, the better it recognizes the objects in it. GoodVision Video Insights is trained to deliver the best accuracy on 1280 x 720 px (HD) and 1920 x 1080 px (Full HD) resolutions. These are considered the ideal resolutions.
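A quick way to verify what you are actually about to upload is to read the video's properties, for example with OpenCV; the file name is a placeholder and the checks simply mirror the recommendations above.

```python
# Read the basic properties of a video file and compare them against the
# recommended resolutions.
import cv2

cap = cv2.VideoCapture("traffic_video.mp4")
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()

print(f"{width}x{height} @ {fps:.1f} fps")
if (width, height) in [(1280, 720), (1920, 1080)]:
    print("Resolution matches the recommended HD / Full HD formats.")
elif height < 720:
    print("Resolution is below 720p - expect reduced detection accuracy.")
```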

2. Object clarity

Generally, GoodVision can handle lower resolutions all the way down to VGA. However, lower resolutions often go hand in hand with low-quality optics and low bitrate, causing object contours to lose their crispness (become blurry) or no longer resemble the real-world object. Set a resolution which displays the objects' contours clearly.
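One common rule of thumb for judging contour crispness is the variance of the Laplacian of a frame: higher values mean sharper edges. The sketch below uses it purely as an illustration, with a placeholder file name and an arbitrary threshold that you would calibrate on your own sharp and blurry clips.

```python
# Blur check using the variance-of-Laplacian rule of thumb.
import cv2

frame = cv2.imread("sample_frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

print(f"Sharpness score: {sharpness:.1f}")
if sharpness < 100:   # arbitrary example threshold
    print("Contours look soft - check the optics, bitrate and resolution.")
```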

Criterion 4: Video Framerate (FPS) and Shutter Speed

1. FPS affects object tracking

The framerate of a video defines the fluency of the objects' motion in it and greatly affects the tracking ability of the video analytics system. Tracking means preserving the identity of an object across the frames in which it is detected. Tracking has a crucial impact on obtaining nice, solid object trajectories, e.g. for origin-destination counting of traffic. Moreover, the higher the speed of the objects in the video, the worse the impact of a low FPS on their tracking.
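To see how speed and FPS interact, consider how far an object moves between two consecutive frames. The sketch below illustrates this with a made-up pixels-per-meter scale, which in reality depends entirely on your camera view.

```python
# How far an object moves between consecutive frames at a given speed and FPS.
# The pixels-per-meter scale is a made-up example.

def displacement_per_frame(speed_kmh, fps, px_per_meter=10):
    meters_per_frame = (speed_kmh / 3.6) / fps
    return meters_per_frame * px_per_meter

for fps in (30, 15, 10, 5):
    jump = displacement_per_frame(50, fps)  # a car at 50 km/h
    print(f"{fps:>2} fps -> ~{jump:4.1f} px jump between frames")
```

At 5 fps the same car jumps roughly six times farther per frame than at 30 fps, which is the "jumping" behavior described in the next point.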

2. Ideal framerate

The ideal FPS for GoodVision Video Insights, which works well in most scenarios, is between 10 and 30 frames per second. The higher the better; however, an FPS above 30 frames per second does not have any visible impact on tracking quality. An FPS lower than 10 frames per second causes tracking problems, especially in crowded scenes and with fast-moving objects, which literally “jump” from place to place across the scene.

3. Shutter speed

Camera shutter speed affects the clarity of moving objects' contours, especially in low-light conditions and close to the camera. Some cameras switch to longer shutter speeds in order to keep the same overall brightness of the scene during the night. Try to avoid this and preserve the clarity of the objects instead. GoodVision Video Insights is trained to recognize objects in the dark, but if the objects are too smudged and completely lack contours, it is extremely hard to detect them.
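As a rough illustration, you can estimate how much smear a given shutter time produces, since the object keeps moving while the shutter is open. The pixels-per-meter scale is again a made-up example that depends on your camera view.

```python
# Rough estimate of motion blur: distance travelled while the shutter is open.
# The pixels-per-meter scale is a made-up example.

def blur_px(speed_kmh, shutter_s, px_per_meter=10):
    return (speed_kmh / 3.6) * shutter_s * px_per_meter

for shutter in (1 / 500, 1 / 100, 1 / 30):
    smear = blur_px(50, shutter)  # a car at 50 km/h
    print(f"1/{round(1 / shutter):>3} s shutter -> ~{smear:4.1f} px of smear")
```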

And if all conditions are met…

As you can see, it’s actually easy to prepare suitable video footage for GoodVision. All of the described conditions are reasonable. To summarize it all, I would say: “What is not visible cannot be seen.” In other words, if the conditions above are fulfilled, you can expect close to 100% traffic data collection accuracy from GoodVision Video Insights. So it is also in your hands: modern technology is here for everyone, accessible as never before, so use it to the last drop. And if everything goes well, the system will reward you with amazing performance.
