Webinar n.4 - How to achieve accuracy with automated visual analytics? 📍
Written by Valeriya Murzyna

How to achieve accuracy with automated visual analytics

One of the most frequently discussed terms in traffic surveying, and especially in traffic data collection, is accuracy. Every party in the transport planning industry, from traffic surveyors to traffic modelers and traffic managers, requires highly accurate and workable data.

Since visual analytics is becoming a frequently used tool for automated traffic data analysis, the question of data accuracy, and how to achieve it, is becoming one of the main topics that requires an adequate explanation.

What does “accuracy” mean, and how is it computed in the world of traffic data collection with visual analytics? How can you reliably achieve the desired data accuracy within a project? What should you watch out for before filming the traffic recordings to be analyzed? These are the critical questions this webinar covers.
​
In the webinar, you’ll learn about:

- The processes behind automated traffic data collection, and how GoodVision Video Insights works

- The difference between achieving accuracy with visual analytics and with manual traffic counting

- The key factors that have an impact on visual analytics outcomes

- How to read your traffic data in GoodVision’s visuals and how to filter your data accurately

Q #1:

Sometimes camera angles are limited by the environment. Can the AI handle objects that are lost from view for a short time (for example, a car hidden behind a van for a second) without counting them incorrectly?

A #1:

These are common situations our system can deal with, for example when a vehicle passes behind a pole or is covered by another vehicle for a short time. This is why a solid frames-per-second value is recommended. That said, you should develop quality standards in your organization for collecting traffic footage in which vehicles do not overlap, so that the video qualifies for high-quality processing.

Q #2:

Our cameras record at a lower resolution than Full HD. Will GoodVision work with such footage, or will our videos be refused?

A #2:

What matters most is the resolution of the objects you want to monitor. You can therefore achieve proper results even on a scene whose resolution is lower than the recommended minimum of 1280x720, as long as the monitored vehicles are large enough in the image. However, almost all cameras today can switch to a higher resolution. GoodVision can provide the declared accuracy level and detect objects of 30 px in size with standard processing, and as small as 20 px with drone HD processing on ultra-HD footage.
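
As a rough illustration of that rule of thumb, here is a minimal back-of-envelope sketch that estimates how many pixels a vehicle occupies in the frame and compares it with the 30 px / 20 px figures above. The function names, the example numbers, and the way the fraction of the frame is estimated are illustrative assumptions for this article, not part of GoodVision Video Insights.

```python
# Back-of-envelope check of on-screen object size, assuming you can estimate
# what fraction of the frame width a typical vehicle spans (for example by
# inspecting a single exported frame). Thresholds follow the figures quoted
# above: ~30 px for standard processing, ~20 px for drone HD processing.

def estimated_object_pixels(frame_width_px: int, vehicle_fraction_of_width: float) -> float:
    """Approximate on-screen width of a vehicle in pixels."""
    return frame_width_px * vehicle_fraction_of_width

def meets_minimum_size(object_px: float, drone_hd: bool = False) -> bool:
    """Compare against the minimum object size mentioned in the webinar."""
    return object_px >= (20 if drone_hd else 30)

# Example: 1280x720 footage where a car spans roughly 3% of the frame width.
px = estimated_object_pixels(1280, 0.03)      # roughly 38 px
print(px, meets_minimum_size(px))             # passes the 30 px threshold
```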

Q #3:

Are you considering the possibility of manually adjusting the number of frames for which an object may be lost?

A #3:

It is difficult to state a specific number, because it depends on multiple scene conditions and traffic flow characteristics. However, GoodVision’s algorithms deal well with these common short losses on quality input, so the trajectories are connected.

You should set up your cameras to record at more than 10 FPS; that will do a good job. Artificially increasing the FPS of an already recorded video does not add any valuable information for processing. GoodVision does not change the FPS of your videos in any way; the files are confidential and are deleted by default immediately after processing.
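
If you want a quick pre-upload sanity check of a recording against the guidance in this article (at least 1280x720 and more than 10 FPS), a minimal sketch using OpenCV could look like the one below. The file name is hypothetical and this is not a GoodVision tool; it simply reads the video metadata and prints warnings.

```python
# Minimal pre-upload check of a recording against the recommendations in
# this article: resolution of at least 1280x720 and more than 10 FPS.
# Requires the opencv-python package.
import cv2

def check_footage(path: str) -> None:
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"Cannot open video: {path}")
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()

    print(f"{width}x{height} @ {fps:.1f} FPS")
    if width < 1280 or height < 720:
        print("Warning: below the recommended minimum resolution of 1280x720.")
    if fps <= 10:
        print("Warning: frame rate should be higher than 10 FPS (ideally around 25 FPS).")

check_footage("intersection_cam1.mp4")  # hypothetical file name
```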

Q #4:

What I meant is not the FPS of the footage, but the number of frames for which an object can be lost before the AI counts it as another object.

A #4:

As noted in the previous answer, it is difficult to state a specific number, because it depends on multiple scene conditions and traffic flow characteristics. GoodVision’s algorithms deal well with these common short losses on quality input, so the trajectories are connected.

Q #5:

Is 95% accuracy guaranteed for pedestrians and cyclists as well? Or only for motorized vehicles?

A #5:

For bicycles in particular, the recommendations on camera input are more specific than for regular vehicle traffic: the scene should capture bikes from a position where they are properly visible, and then the declared accuracy also applies to these object types, pedestrians included. This is why we provide our clients with special guidelines before their bicycle studies. Feel free to reach out to us and test your bicycle footage on GoodVision Video Insights.

Q #6:

Is there ongoing development to make it possible to analyze OD data at large intersections with several cameras?

A #6:

Yes, this analysis is possible with our ANPR module, one of the recent additions to our portfolio. ANPR studies are processed using the same user procedure as regular traffic footage. The requirements on camera angle and position are different for ANPR processing, but since this technology is well known on the market, users are familiar with them. Licence plates are extracted from vehicles across multiple lanes at once, including multiple directions facing the camera. As a result, the reports contain the licence plate information, allowing you to match vehicles between different cameras and to obtain travel time information for these individual vehicles across the monitored network.

Q #7:

Is there a maximum height for use with drones?

A #7:

When drone HD processing is used, the maximum recommended distance from the drone to the monitored vehicles is 250 m, and the footage resolution should be 4K. At lower heights (e.g. 75 m), the footage can be of lower resolution (e.g. 2K). If you want to apply standard processing to drone footage, the maximum recommended distance from the drone to the vehicles is about 50 metres.
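
For quick reference, the distance and resolution figures from this answer can be summarised in a small helper like the sketch below. The break points are the numbers quoted above; the function name and structure are illustrative assumptions, not part of any GoodVision tool.

```python
# Rough guide to drone footage setup, encoding the figures quoted above:
# drone HD processing up to ~250 m with 4K footage, ~75 m with 2K footage,
# and standard processing up to ~50 m from the vehicles.

def recommended_drone_setup(distance_m: float) -> str:
    if distance_m <= 50:
        return "standard or drone HD processing"
    if distance_m <= 75:
        return "drone HD processing, at least 2K footage"
    if distance_m <= 250:
        return "drone HD processing, 4K footage"
    return "beyond the recommended maximum distance of 250 m"

print(recommended_drone_setup(120))  # drone HD processing, 4K footage
```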

Q #8:

Can GoodVision staff validate if the filters are placed correctly on request?

A #8:

Yes, GoodVision offers subscribers an assistance service and quality assurance on their projects. This includes guidance on how to place the filters correctly.

Q #9:

Are you planning integration with other pedestrian microsimulation software?

A #9:

Yes. Currently, the outputs of GoodVision Video Insights analyses can be used as inputs for pedestrian simulation, for example in PTV Vissim and VisWalk. Pedestrian data are treated and can be analyzed in the same way as vehicular traffic. We’d love to hear your ideas and requirements for pedestrian modeling.

Q #10:

What if vehicles or pedestrians are moving in groups? Can your system recognize the individual objects in the group?

A #10:

Yes. GoodVision’s artificial intelligence engine detects and recognizes traffic participants by their appearance in the scene. Even if they are moving in groups (a congested scene with overlaps), it is enough to spot part of an object to recognize and track it.

Q #11:

How does the camera quality affect the accuracy of other traffic parameters like speed or travel time?

A #11:

In short, camera quality affects the accuracy of the counts as well as other traffic parameters. The accuracy of travel parameters like speed and travel time is mainly affected by the frame rate of the video, so we recommend keeping this parameter high, ideally around 25 FPS: you should provide as much image information, as frequently as possible, to minimize the deviation. If you follow GoodVision’s camera quality requirements, the accuracy of your analyses won’t be negatively affected.

Q #12:

How does data verification work? Is it a paid service?

A #12:

We understand that, before you get used to having the analytics fully in your hands, you need some assurance that your data is good and that your filter placement is correct. That is why GoodVision offers a data verification service on your projects. Data verification is an additional service which you can request at any time during your data analysis. Your original video footage must be stored in the GoodVision Vault storage in order to be verified. After successful video processing, and after defining all the traffic movements you want to analyze or count traffic on, you can select the additional service Data Verification. Data verification can be ordered for the whole survey or just for a specific part of it. Verification is a paid service, and it consumes credits according to the size of the verified interval.

After that, trained GoodVision staff will verify your data and your scene settings and get back to you with information about the data accuracy and a properly revised filter setting for your scene. If your video scene is below the recommended minimum requirements, you can still order data verification and you can still obtain very accurate results. If the accuracy is below the declared 95% and the footage does not match the quality requirements, we will provide you with the measured accuracy level and the filter setting for your scene. If you want to verify the data on your survey, feel free to reach out to us directly from GoodVision Video Insights via live chat, and our staff will provide you with all the necessary details.
