Almost every company in the transportation industry tackles the challenge of efficient traffic data collection, and practically all traffic data suppliers have prior experience with manual traffic counting or tubes in their surveys. But is this enough to satisfy the needs of traffic model calibration in the modern age?

This webinar will show you how to quickly start conducting traffic surveys using traffic video analytics with the GoodVision Video Insights platform.

Today it is evident that automated methods of collecting traffic data using artificial intelligence are in the spotlight for transport planners and municipalities. What does this mean for your current operations, and what does it look like when you decide to bring smart technology into your traffic survey projects?

Q&A:

Q #1:

Are there minimum requirements for camera quality?

A #1:

Camera resolution depends on your use case; it is the size of the target object that matters. For example, you can get great accuracy on a simple road with 480p footage, but you will need 720p resolution to capture a larger intersection with vehicles in the background. In general, we recommend higher-resolution cameras. Another important parameter is the frame rate: the minimum requirement is 5 FPS, and 15 FPS or more is ideal.

Q #2:

What is the minimum camera height to get the best results? What is the visibility range of software per camera?

A #2:

There is no minimum requirement for the height; GoodVision video analytics accepts any position, tilt, or height of your camera. However, there is one rule of thumb: the objects must be visible to the human eye. Place your cameras accordingly and use your common sense. If you are not sure, that's where the GoodVision Support team jumps in and can help you sort it out. We recommend placing cameras at 5 meters or higher.

Q #3:

What about the detection accuracy of traffic counts or speed measurement accuracy?

A #3:

The accuracy of traffic counts ranges from 95% to 100%. Speed calculation accuracy depends on how precisely you measure the distance, but GoodVision provides 90% to 100% accuracy.

Q #4:

Have you ever evaluated the false detection rate? For example, clouds, leaves, birds, or shadows could cause false detections.

A #4:

Absolutely, we evaluate the detection performance of our algorithms constantly. The technology in GoodVision Video Insights is top-notch: immune to shadows, darkness, camera movement, birds, leaves, and similar distractions. Objects are detected by their actual appearance, not by motion or size.

Q #5:

Is it possible to get an analysis when there are some trees in the scene view, as long as they are not too dense?

A #5:

Yes, it is possible; it is all about optimal filter placement. If an object's contours are visible, it can be detected. We can help with that if needed and look at your case. Contact support@goodvisionlive.com

Q #6:

Can you detect objects at night, in difficult weather conditions, etc.?

A #6:

We detect objects in different weather conditions and at night; the system is also trained for these conditions, e.g. darkness. If an object is recognizable by the naked eye, the system will detect it. If the lens is covered so that objects are not visible, the system cannot detect them.

Q #7:

Do we have to place two lines for the speed measurement?

A #7:

Yes, you can measure speed on any segment between two lines placed anywhere in the scene. However, GoodVision will soon provide fully automated speed estimation based on geo-referenced data in the app.
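Measuring speed over a segment boils down to dividing the known ground distance between the two lines by the difference of the line-crossing timestamps. A minimal sketch of that arithmetic (the function name and units are illustrative, not GoodVision's API):

```python
def segment_speed_kmh(distance_m: float, t_enter: float, t_exit: float) -> float:
    """Estimate speed from two line-crossing timestamps.

    distance_m: measured ground distance between the two lines (metres)
    t_enter, t_exit: timestamps (seconds) when the vehicle crossed each line
    """
    dt = t_exit - t_enter
    if dt <= 0:
        raise ValueError("exit timestamp must be after the entry timestamp")
    return distance_m / dt * 3.6  # convert m/s to km/h

# A vehicle covering 50 m in 3 s travels at 60 km/h.
print(round(segment_speed_kmh(50.0, 10.0, 13.0), 1))  # 60.0
```

This also shows why the distance measurement matters for accuracy: any error in `distance_m` propagates directly into the speed estimate.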

Q #8:

In terms of the speed analysis, does a longer line to line distance improve accuracy?

A #8:

The line-crossing timestamps in GoodVision are accurate, so the speed accuracy is stable: the distance between the lines does not affect it. Check out an article on speed estimation here.

Q #9:

Does it obtain the data from snapshots taken at an interval that we determine?

A #9:

GoodVision extracts the data from your whole video, but during the analysis you can select just the interval you need and obtain the data/results from that interval only. The same applies to reports.

Q #10:

Can GoodVision analyze multiple intersections at once? If yes, how many?

A #10:

If you need to analyze multiple intersections in GoodVision and connect them into one macro model, it is currently possible with drone video footage: you can use multiple drones recording simultaneously and then connect the footage in GoodVision. For fixed cameras, this becomes a custom project. Contact us at support@goodvisionlive.com if you have this inquiry.

Q #11:

Can GoodVision help us organize large events like the FIFA World Cup?

A #11:

If you want to monitor the movements of pedestrians, you can use GoodVision's system for it.

Q #12:

Can you measure post-encroachment time (PET) or time-to-collision (TTC) with this?

A #12:

This is possible after georeferencing your scene.
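For context, PET itself is a simple quantity once the trajectories are georeferenced: it is the time gap between the first road user leaving the conflict area and the second one entering it. A minimal illustration (the function name and the timestamps are hypothetical, not GoodVision's API):

```python
def post_encroachment_time(t_first_exit: float, t_second_entry: float) -> float:
    """Post-encroachment time (PET): the gap in seconds between the first
    road user leaving the conflict area and the second one entering it.
    Smaller PET values indicate a more severe near-miss."""
    return t_second_entry - t_first_exit

# First vehicle clears the conflict zone at t=12.4 s, the second reaches
# it at t=13.1 s: a PET of 0.7 s, i.e. a close call.
print(round(post_encroachment_time(12.4, 13.1), 1))  # 0.7
```

TTC works analogously but requires projecting both georeferenced trajectories forward in time, which is why georeferencing the scene is the prerequisite.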

Q #13:

How does it detect different vehicle types, such as 3-wheelers?

A #13:

The system recognizes 8 classes of vehicles, including 2 types of trucks, OGV1 and OGV2, according to the COBA scheme. Here is the classification guide. Custom development of additional classes is ongoing, including 3-wheelers, rickshaws, and many others. If a class you need is missing, let us know at support@goodvisionlive.com

Q #14:

Can you detect conflicts between objects (e.g. near misses)?

A #14:

We can provide this service. Some collision and conflict features are automated in the platform; others are offered as an additional service after GoodVision's processing. This service is very precise and detailed. Contact us if you have such a need.

Q #15:

Do you provide ANPR service?

A #15:

Yes, number plate recognition is available and can be used to enrich the trajectories.

Q #16:

How long does it take to process a 24-hour video, for example?

A #16:

The processing time does not depend on the length of the video itself: whether you upload 10 minutes, 24 hours, or 1,000 hours of footage, it takes 1-2 hours to process with GoodVision.

Q #17:

Can I upload several files at once?

A #17:

You can upload several hundred video files at once; the upload manager in the application handles that for you automatically. The process is similar to OneDrive or Google Drive upload/synchronisation: if there is a network issue or the PC is shut down during the upload, it continues once the PC is back on, without losing or redoing any uploads.

Q #18:

Is it possible to get live analytics in a general traffic control room for the entire city?

A #18:

Yes. For this purpose, we have a product called GoodVision Live Traffic, which provides real-time traffic monitoring for traffic control and sends events to controllers. Learn more about it here: https://goodvisionlive.com/goodvision-live-traffic/

Get in touch with us at support@goodvisionlive.com if you want to talk about a pilot project.

Q #19:

Can we use GoodVision for real-time traffic models (as a live feed)?

A #19:

Yes, live (real-time) traffic monitoring is provided via GoodVision Live Traffic. It can provide traffic model calibration data on the fly, as you need it. Learn more at: https://goodvisionlive.com/goodvision-live-traffic/

Q #20:

Are there any limitations using time-lapse cameras?

A #20:

Time-lapse cameras are supported; the original frame rate of the videos must be above 3 FPS. If you are using a Brinno camera, this model is supported. During processing, select the timelapse option and the video will be processed and stretched accordingly.
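The "stretching" mentioned above amounts to mapping each time-lapse frame back to real elapsed time using the camera's capture interval. A rough sketch of the idea, assuming a camera that captures one frame per second (this is not GoodVision's internal code):

```python
def real_time_of_frame(frame_index: int, capture_interval_s: float) -> float:
    """Map a time-lapse frame index to real elapsed time.

    A time-lapse camera capturing one frame every `capture_interval_s`
    seconds compresses time: frame N was actually taken
    N * capture_interval_s seconds after recording started.
    """
    return frame_index * capture_interval_s

# With a camera shooting 1 frame per second, frame 3600 of the
# time-lapse corresponds to one hour (3600 s) of real time.
print(real_time_of_frame(3600, 1.0))  # 3600.0
```

All time-based outputs (counts per interval, gaps, speeds) are then reported against this recovered real-time axis rather than the compressed playback time.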

Q #21:

Can you locate speeding vehicles and get a number plate ID?

A #21:

Number plate recognition can be used to enrich the trajectories, provided the number plate is visible. Speeding vehicles can therefore be identified.

Q #22:

Do the thin coloured lines represent the path of an individual vehicle?

A #22:

Yes, the coloured lines represent the trajectories of each individual vehicle, and the colour code represents the vehicle class.

Q #23:

Which attributes are extracted in the object/trajectory detection process?

A #23:

We extract every physical aspect of object movement into the trajectory: position, timestamps, class, colour, etc. From these we can provide you with counts, speed, time gaps, travel time, acceleration, density, saturation flows, etc.
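To make the attribute list concrete, one trajectory record could be modelled roughly like this (the field names and structure are illustrative, not GoodVision's export schema):

```python
from dataclasses import dataclass, field

@dataclass
class TrajectoryPoint:
    timestamp: float  # seconds since the start of the video
    x: float          # position in the scene (pixels or metres)
    y: float

@dataclass
class Trajectory:
    object_id: int
    vehicle_class: str  # e.g. "car", "OGV1", "OGV2"
    colour: str
    points: list[TrajectoryPoint] = field(default_factory=list)

    def travel_time(self) -> float:
        """Time between the first and last observation of the object."""
        return self.points[-1].timestamp - self.points[0].timestamp

# One vehicle observed twice, 4 seconds apart:
t = Trajectory(1, "car", "red",
               [TrajectoryPoint(0.0, 10.0, 5.0), TrajectoryPoint(4.0, 80.0, 5.0)])
print(t.travel_time())  # 4.0
```

Derived metrics such as counts, speeds, gaps, and densities are all computed over collections of records like these.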

Q #24:

Can we generate an origin/destination matrix in the TMC report?

A #24:

Yes, there is a dedicated Origin-Destination matrix report for this, next to the TMC report in the report builder. Check out the different report types in GoodVision.

Q #25:

Can we conveniently add annotations/labels that can be jointly used in queries/reports?

A #25:

Yes, you can. Re-labelling of vehicle classes will also be provided.

Q #26:

What are the restrictions of the trial version?

A #26:

In the trial, you have the full product version available for 30 days, plus free credits from us for video processing. So there are no restrictions; it is not a limited demo. What you get during the trial is what you get on your real projects, unlike with some of our competitors. Contact our support to request a trial at support@goodvisionlive.com.

Q #27:

Do you offer an educational version?

A #27:

Yes, we like to support research teams and universities. Reach out to us at info@goodvisionlive.com and let's have a chat.

Q #28:

How much does a credit cost, and how much data analysis does 1 credit buy?

A #28:

Processing 1 hour of fixed camera footage costs 1 credit; processing 15 minutes of drone footage costs 1 credit. The price starts from 4 EUR per credit and is flat for any type of traffic scene, no matter how complicated it is. The price includes processing, archiving, data storage, and unlimited data analytics according to your platform plan.

Q #29:

In terms of pricing, how many credits would generally be required to analyse a T-junction (6 movements) over a 12-hour continuous period?

A #29:

12 credits: it is always 1 credit per hour of fixed camera footage. It is as simple as that.
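The credit arithmetic from the last two answers can be sketched as a small estimator (the 4 EUR figure is the quoted starting rate; actual pricing depends on your platform plan):

```python
import math

def credits_needed(hours: float, source: str = "fixed") -> int:
    """Credits required under the quoted rules:
    1 credit per hour of fixed-camera footage,
    1 credit per 15 minutes (0.25 h) of drone footage."""
    hours_per_credit = 1.0 if source == "fixed" else 0.25
    return math.ceil(hours / hours_per_credit)

# A T-junction recorded for 12 continuous hours on a fixed camera:
credits = credits_needed(12, "fixed")
print(credits)       # 12
print(credits * 4)   # 48 -> EUR cost at the 4 EUR/credit starting rate

# Two hours of drone footage bills per 15-minute block:
print(credits_needed(2, "drone"))  # 8
```

Note that the number of movements at the junction does not enter the calculation; only the footage duration and source type matter.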

Q #30:

Can we use our own camera feeds?

A #30:

Yes, you can connect live camera feeds and process them in the cloud app, or you can use the live camera feeds with our GoodVision Live Traffic sensors and process them on-site in real time.
