Enhance vehicle safety and assist the driver.
Combination of sensors, cameras, and algorithms to monitor the vehicle's surroundings.
Help reduce the risk of accidents and improve overall road safety.
[9] Observe upcoming bends.
Classify their direction and severity.
Assess risk -> alert the driver or intervene.
[3] Simulated complex roads.
Firefighters navigated with and without the system.
Reported a significant reduction in roll-over risk.
[4] Used GPS and high-detail maps to find the upcoming bend.
Alerted the driver with an advisory speed.
Showed a reduction in risk in a real-world environment.
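Curve speed warning systems of this kind typically derive the advisory speed from the curve radius and a comfortable lateral-acceleration limit. A minimal sketch of that idea (the 0.3 g limit, the alert margin, and the function names are illustrative assumptions, not taken from [4]):

```python
import math

def advisory_speed_mps(curve_radius_m: float, a_lat_max: float = 0.3 * 9.81) -> float:
    """Highest speed keeping lateral acceleration v^2 / r below a_lat_max."""
    return math.sqrt(a_lat_max * curve_radius_m)

def should_alert(current_speed_mps: float, curve_radius_m: float, margin: float = 1.1) -> bool:
    """Alert when the driver is meaningfully above the advisory speed."""
    return current_speed_mps > margin * advisory_speed_mps(curve_radius_m)
```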
Our approach: a vision-only system.
Detect and classify the upcoming bend's direction and sharpness.
Build the system to handle real-world environments.
Manipulating the motion field -> over/under steering. Source: [5]
Gaze on R-VP when negotiating bends. Source: [2].
1. Bend classification via machine learning on RGB sequences.
2. Practicality of human-like motion fields for bend classification.
3. Impact of R-VP focused sequences.
6 hr 45 min of UK road footage recorded between October 2024 and April 2025.
Covering a range of seasons, road types, speeds, and weather conditions.
$GPRMC,[utc_time],[status],[latitude],[ns_indicator],[longitude],[ew_indicator],[speed_knots],[course_deg],[date_ddmmyy],[mag_var],[mag_var_dir]*[checksum]
$GPGGA,[utc_time],[latitude],[ns_indicator],[longitude],[ew_indicator],[fix_quality],[num_sats],[hdop],[altitude],[alt_unit],[geoid_sep],[geoid_unit],[dgps_age],[dgps_station]*[checksum]
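These sentences can be consumed by splitting on commas against the templates above. A minimal Python sketch of extracting speed and course from a $GPRMC record (illustrative only, not the project's parser; the checksum is not verified here):

```python
# Minimal sketch: pull speed and course out of a $GPRMC sentence.
def parse_gprmc(sentence: str) -> dict:
    body, _, checksum = sentence.strip().partition("*")   # checksum kept but not verified
    fields = body.split(",")
    if fields[0] != "$GPRMC" or fields[2] != "A":          # "A" = valid fix
        return {}
    return {
        "utc_time": fields[1],
        "speed_knots": float(fields[7]) if fields[7] else 0.0,
        "course_deg": float(fields[8]) if fields[8] else None,
    }

# Example:
# parse_gprmc("$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A")
# -> {'utc_time': '123519', 'speed_knots': 22.4, 'course_deg': 84.4}
```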
We focus on:
Detecting bends in the dataset.
Labelling them with severity and speed.
Clipping sequences 10, 20, 30, 40, 50, 75, and 100 metres before each bend (see the sketch after the table below).
| Bend | Avg. Angle (°) | Avg. Speed | Start Frame | 10 m Frame | 20 m Frame | 30 m Frame | 40 m Frame | 50 m Frame | 75 m Frame | 100 m Frame |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10.63 | 30.16 | 1945 | 1924 | 1900 | 1882 | 1861 | 1843 | -1 | -1 |
| 2 | -22.31 | 22.13 | 2754 | 2733 | 2709 | 2688 | 2664 | 2646 | 2595 | -1 |
| 3 | 8.93 | 30.1 | 3221 | 3200 | 3179 | 3155 | 3137 | 3116 | 3065 | 3011 |
| 4 | -8.57 | 27.76 | 3419 | 3398 | 3377 | 3356 | 3338 | 3317 | 3266 | -1 |
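The labelling step can be pictured as accumulating heading change from the NMEA course field and then walking backwards along the track to find the frame N metres before the bend. A simplified sketch, assuming per-frame course/speed values interpolated from the NMEA stream, an illustrative angle threshold, and that -1 marks a distance that falls outside the available footage (all of which are assumptions, not the project's exact rules):

```python
KNOT_TO_MPS = 0.514444

def detect_bends(courses_deg, speeds_knots, fps=30, window_s=3.0, min_angle=8.0):
    """Flag frames where the heading change over a short window exceeds min_angle.

    courses_deg / speeds_knots: per-frame values interpolated from NMEA records.
    Returns (frame_index, signed_angle) pairs; the sign encodes left vs. right.
    """
    w = int(window_s * fps)
    bends = []
    for i in range(len(courses_deg) - w):
        delta = (courses_deg[i + w] - courses_deg[i] + 180) % 360 - 180  # wrap to [-180, 180)
        if abs(delta) >= min_angle:
            bends.append((i + w, delta))
    return bends

def frame_at_distance(bend_frame, speeds_knots, target_m, fps=30):
    """Walk backwards from the bend, integrating speed, until target_m has been covered."""
    travelled = 0.0
    for f in range(bend_frame, -1, -1):
        travelled += speeds_knots[f] * KNOT_TO_MPS / fps   # metres travelled in one frame
        if travelled >= target_m:
            return f
    return -1  # the distance marker cannot be reached within the recording
```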
✅ Observed high accuracy.
✅ Consistent labelling (no human bias).
❌ No filtering of false bends: roundabouts or junctions.
❌ Relies on the quality of the NMEA data.
❌ Affected by noise in slow-moving traffic.
Road Vanishing Point (R-VP) Estimation
Illumination / glare
Weather
...
Estimate the R-VP of the road.
Establish an understanding of the scene.
Note the high sensitivity to ego-motion.
✅ Speed and efficiency!
✅ Moderately good approximation.
❌ Highly sensitive to ego-motion.
❌ Highly sensitive to feature quality.
✅ Robust against moderate ego-motion.
✅ Higher stability.
✅ Filters out other vehicles.
❌ Difficulty finding global parameters.
❌ Computational cost.
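For context, one common geometric approach to R-VP estimation (not necessarily the method used in this project) takes the point where lane-like line segments converge. A rough OpenCV sketch, with the Canny/Hough thresholds chosen purely for illustration:

```python
import cv2
import numpy as np

def estimate_rvp(frame_bgr):
    """Estimate the road vanishing point as the least-squares intersection
    of lane-like line segments found by a probabilistic Hough transform."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return None
    A, b = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        if abs(x2 - x1) < 1e-6:          # skip perfectly vertical segments
            continue
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.2:             # skip near-horizontal clutter (bonnet, horizon)
            continue
        # Line through (x1, y1) with this slope: slope * x - y = slope * x1 - y1
        A.append([slope, -1.0])
        b.append(slope * x1 - y1)
    if len(A) < 2:
        return None
    sol = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
    return float(sol[0]), float(sol[1])
```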
29783m 49s (20.68 days)
⬇️ Optimisations + parallel processing
2644m 18s (44.07 hours)
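A sketch of the parallelisation pattern that this kind of speed-up typically relies on (the `process_clip` worker and the clip directory are placeholders, not the project's actual code):

```python
from multiprocessing import Pool
from pathlib import Path

def process_clip(clip_path: str) -> str:
    """Placeholder worker: run the heavy per-clip processing and save the result."""
    ...  # e.g. optical flow, R-VP crops, encoding
    return clip_path

if __name__ == "__main__":
    clips = sorted(str(p) for p in Path("clips").glob("*.mp4"))
    with Pool() as pool:                       # one worker per CPU core by default
        for done in pool.imap_unordered(process_clip, clips):
            print("finished", done)
```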
Input variants generated
Generated samples of each input variant
Train and compare models on these four input variants.
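To make the four variants concrete, the sketch below shows one way a wide RGB frame could be turned into a dense optical-flow image and a narrow, R-VP-centred crop. Farnebäck flow, the HSV rendering, and the 224-pixel crop size are illustrative assumptions, not necessarily the project's settings:

```python
import cv2
import numpy as np

def dense_flow_rgb(prev_gray, curr_gray):
    """Farnebäck dense optical flow rendered as an image (hue = direction, value = magnitude)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*prev_gray.shape, 3), dtype=np.uint8)
    hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)   # OpenCV hue range is 0-180
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)

def narrow_crop(frame, rvp_xy, size=224):
    """Crop a size x size window centred on the estimated R-VP (frame assumed larger than the crop)."""
    h, w = frame.shape[:2]
    x = int(np.clip(rvp_xy[0] - size // 2, 0, w - size))
    y = int(np.clip(rvp_xy[1] - size // 2, 0, h - size))
    return frame[y:y + size, x:x + size]
```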
Accuracy
Class level: Precision, Recall, F1
Weighted F1-Score
Confusion Matrix
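All of these metrics can be produced with scikit-learn. A minimal sketch, assuming integer class labels and an illustrative `evaluate` helper:

```python
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, f1_score)

def evaluate(y_true, y_pred, class_names):
    """Print accuracy, weighted F1, per-class precision/recall/F1, and the confusion matrix."""
    print("Accuracy:", accuracy_score(y_true, y_pred))
    print("Weighted F1:", f1_score(y_true, y_pred, average="weighted"))
    print(classification_report(y_true, y_pred, target_names=class_names))
    print(confusion_matrix(y_true, y_pred))
```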
[7] A (2+1)D convolutional neural network (CNN).
Capture spatial and temporal patterns.
Take advantage of hierarchical features.
Multiple layers generalise by producing increasingly abstract feature patterns.
The final class prediction is output as a dense vector.
[1, 7] Approximates a 3D convolution.
Separates the spatial (2D) and temporal (1D) components.
Fewer weights to train.
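The (2+1)D idea factorises each 3D convolution into a spatial 2D convolution followed by a temporal 1D convolution, along the lines of [1] and the TensorFlow tutorial [7]. A minimal Keras sketch; the filter counts, input shape, and pooling layout here are illustrative assumptions, not the trained models reported below:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv2plus1d(filters, kernel=3):
    """One (2+1)D block: spatial (1, k, k) convolution, then temporal (k, 1, 1) convolution."""
    return tf.keras.Sequential([
        layers.Conv3D(filters, kernel_size=(1, kernel, kernel), padding="same", activation="relu"),
        layers.Conv3D(filters, kernel_size=(kernel, 1, 1), padding="same", activation="relu"),
    ])

# Frames x Height x Width x Channels input; 7 bend classes as a dense output vector.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20, 112, 112, 3)),
    conv2plus1d(16), layers.MaxPooling3D((1, 2, 2)),
    conv2plus1d(32), layers.MaxPooling3D((2, 2, 2)),
    conv2plus1d(64), layers.GlobalAveragePooling3D(),
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```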
We split the dataset into three sets: training, validation, and test [1, 7].
| Model | Accuracy | Loss | Weighted F1-Score |
|---|---|---|---|
| Wide RGB Model (7-class) | 73.78% | 0.8675 | 0.7399 |
| Wide Optical Flow Model (7-class) | 55.55% | 1.1519 | 0.5568 |
| Narrow RGB Model (4-class) | 43.27% | 1.1907 | 0.4480 |
| Narrow Optical Flow Model (4-class) | 50.96% | 1.2072 | 0.4783 |
✅ Yes, we can classify bends with high accuracy (73.78% for Wide RGB).
❌ However, more data is required for robustness.
✅ Wide Optical Flow showed generalisation (55.55% accuracy).
✅ Strong class boundaries for bend direction.
❌ High confusion for bend severity.
❌ RGB outperforms the motion field.
✅ Narrow View Optical Flow performed marginally better than Narrow View RGB.
❌ High confusion and poor generalisation.
❌ Limited by the quality of the R-VP estimation.
❌ Limited dataset due to hardware limitations.
High computational cost.
Not real-time.
Ego-motion introduces additional challenges.
Occlusion of road features.
Separating bends from junctions and roundabouts.
Adaptability to high noise (such as windscreen wipers).
Error propagation through the pipeline.
Automatic bend labelling
Road Vanishing Point (R-VP) estimation
Dense optical flow motion fields
Dataset generation
Deep learning classification models for comparing four different input variants.
Publicly available for future development.
Release of raw dashcam and NMEA records.
Covering various road conditions and illumination.
Handles unstructured and structured environments.
Filters bias by masking other vehicles.
Reduces the effects of ego-motion.
Interprets bends from noisy GPS data.
Incorporate additional sensors to counter ego-motion [8].
Use depth maps through stereo vision.
Explore advanced DNN architectures further.
[1] D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri, "A closer look at spatiotemporal convolutions for action recognition," 2018. https://arxiv.org/abs/1711.11248
[2] F. I. Kandil, A. Rotter, and M. Lappe, "Car drivers attend to different gaze targets when negotiating closed vs. open bends," Journal of Vision, vol. 10, no. 4, Apr. 2010, doi: https://doi.org/10.1167/10.4.24.
[3] P. Simeonov et al., "Evaluation of advanced curve speed warning system to prevent fire truck rollover crashes," Journal of Safety Research, vol. 83, pp. 388–399, 2022.
[4] S. Chowdhury, M. Faizan, and H. M. Imran, "Advanced curve speed warning system using standard GPS technology and road-level mapping information," 2020, pp. 464–472.
[5] C. D. Mole, G. Kountouriotis, J. Billington, and R. M. Wilkie, "Optic flow speed modulates guidance level control: New insights into two-level steering," Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 11, 2016.
[6] S. Raviteja and R. Shanmughasundaram, "Advanced driver assistance system (ADAS)," 2018, pp. 737–740, doi: https://doi.org/10.1109/ICCONS.2018.8663146.
[7] TensorFlow, "Video classification with a 3D convolutional neural network."
[8] B. Guan, Q. Yu, and F. Fraundorfer, "Minimal solutions for the rotational alignment of IMU-camera systems using homography constraints," Computer Vision and Image Understanding, vol. 170, pp. 79–91, 2018.
[9] B. A. Jumaa, A. A. Mousa, and A. A. Mousa, "Advanced driver assistance system (ADAS): A review of systems and technologies," International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), vol. 8, no. 6, 2019.
| Resource | Link |
|---|---|
| Final Dataset | UK-Road-Bend-Classification |
| Trained Models | RGB & Optical Flow Models |
| Raw Dashcam Videos | UK-Road-DashCam |
| Component Testing Dataset | Stereo-Road-Curvature-Dashcam |
| Source Code | GitHub Repo |
| Calibration Files | Camera Calibration |
Song: Ethereal
Composer: Punch Deck
Website:
https://www.youtube.com/channel/UC3M9CX5HWSw25k5QL3FkDEA
License:
Creative Commons (BY 3.0)
https://creativecommons.org/licenses/by/3.0/
Music powered by BreakingCopyright:
https://breakingcopyright.com