Crowd-AI Sensing Based Traffic Analysis for Ho Chi Minh City Planning Simulation
This activity responds to the NSF Dear Colleague Letter Supporting the Transition of Research into Cities through the US-ASEAN (Association of Southeast Asian Nations) Smart Cities Partnership, a collaboration between NSF and the US State Department. Ho Chi Minh City (HCMC), an ASEAN city in Vietnam, is well known for its traffic congestion and high density of vehicles: cars, buses, trucks, and a swarm of motorbikes (7.3 million for more than 8.4 million residents) that overwhelm city streets. Large-scale development projects have exacerbated urban conditions, making congestion more severe, and traffic is also one of the leading contributors to the city's noise and dust pollution. Altogether, traffic congestion poses major barriers to urban quality of life, but the solutions are complex. Traffic in HCMC presents two main problems. First, HCMC, like other dense urban areas, needs significant financial and technical resources to solve its traffic and infrastructure problems. Second, because traffic monitoring is carried out by a limited number of staff watching thousands of camera feeds across multiple screens, there are limits to how many real-time traffic problems personnel can respond to, and how effectively.
The goal of this project is to use visual crowd-AI sensing for an HCMC planning simulator. The project will make use of the city camera system (crowd-AI sensing) for real-time traffic analysis. It seeks to detect “anomaly events” such as traffic violations, traffic jams, and accidents with reduced intervention from monitoring staff, allowing staff, in turn, to better respond to traffic problems as they arise. A city planning simulator will be built on the analyzed traffic data and used to support metropolitan transportation planning. Project findings will not only address specific urban challenges in HCMC and the innovative technical solutions needed to solve them, but will also provide models for use in other contexts, including U.S. cities where traffic, congestion, and urban infrastructure challenges can benefit from AI.
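To illustrate the idea of flagging “anomaly events” from analyzed traffic data, the sketch below screens per-camera vehicle counts for sudden deviations from a rolling baseline. This is a hypothetical, simplified stand-in, not the project's actual method: the real system analyzes raw camera feeds, and the function name, window size, and z-score threshold are all illustrative assumptions.

```python
from statistics import mean, stdev

def flag_anomalies(counts, window=8, z_thresh=3.0):
    """Flag time steps whose vehicle count deviates sharply from the
    recent rolling baseline (a toy proxy for 'anomaly event' detection;
    the project itself works on camera feeds, not precomputed counts)."""
    flags = []
    for i, c in enumerate(counts):
        history = counts[max(0, i - window):i]
        if len(history) < 3:          # not enough baseline yet
            flags.append(False)
            continue
        mu, sigma = mean(history), stdev(history)
        flags.append(sigma > 0 and abs(c - mu) / sigma > z_thresh)
    return flags

# Typical counts per interval, with one sudden jam-like spike at the end
counts = [20, 22, 21, 19, 23, 20, 22, 21, 80]
print(flag_anomalies(counts))
# → only the final spike is flagged
```

In a deployed pipeline, a detector on each feed would produce the counts (or richer events), and flagged intervals would be surfaced to monitoring staff rather than requiring them to watch every screen.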
The project will be validated by professionals in HCMC, who can evaluate its effectiveness at detecting anomaly events with reduced human observation, who will be better able to respond to traffic problems as a result of implementing aspects of the project, and who can make use of the project data for traffic analysis.
- Performance Period: August 2020 - July 2024
- Institution: University of Dayton
- Award Number: 2025234
- Lead PI: Tam Nguyen
- Co-PI: Phu Phung
Project Material
- Image de-photobombing benchmark
- MaskDiff: Modeling Mask Distribution with Diffusion Probabilistic Model for Few-Shot Instance Segmentation
- A Comprehensive Analysis of Object Detectors in Adverse Weather Conditions
- Sketch-to-image synthesis via semantic masks
- S5: Sketch-to-image Synthesis via Scene and Size Sensing
- AI vs. AI: Can AI Detect AI-Generated Images?
- Anomaly Analysis in Images and Videos: A Comprehensive Review
- Multi-Output Career Prediction: Dataset, Method, and Benchmark Suite
- Abstraction-perception preserving cartoon face synthesis
- Revisiting natural user interaction in virtual world
- Image synthesis: a review of methods, datasets, evaluation metrics, and future outlook
- Photobombing Removal Benchmarking
- House Price Prediction via Visual Cues and Estate Attributes
- Public Speaking Simulator with Speech and Audience Feedback
- Chemisim: A Web-based VR Simulator for Chemistry Experiments
- Data-Driven City Traffic Planning Simulation
- Text Query based Traffic Video Event Retrieval with Global-Local Fusion Embedding
- Adaptive multi-vehicle motion counting
- Few-shot object detection via baby learning
- Contextual Guided Segmentation Framework for Semi-supervised Video Instance Segmentation
- Camouflaged Instance Segmentation In-the-Wild: Dataset, Method, and Benchmark Suite
- Mixed reality system for nondestructive evaluation training
- Masked Face Analysis via Multi-Task Deep Learning
- Traffic Video Event Retrieval via Text Query using Vehicle Appearance and Motion Attributes
- Interactive Video Object Mask Annotation
- CamouFinder: Finding Camouflaged Instances in Images
- Parsing Digitized Vietnamese Paper Documents
- MirrorNet: Bio-Inspired Camouflaged Object Segmentation
Dr. Tam Nguyen is an Associate Professor in the Department of Computer Science at the University of Dayton. His research topics include artificial intelligence, computer vision, machine learning, and multimedia content analysis. He has authored and co-authored 100+ research papers with 2,200+ citations. His work has been published in prestigious journals, including the International Journal of Computer Vision (IJCV), IEEE Transactions on Image Processing (T-IP), IEEE Transactions on Neural Networks and Learning Systems (T-NNLS), IEEE Transactions on Multimedia (T-MM), IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT), Neurocomputing (NEUCOM), Computer Vision and Image Understanding (CVIU), Journal of Computer-Aided Civil and Infrastructure Engineering (CACIE), Journal of Virtual Reality, ACM Computing Surveys, and ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM). He has also published in top-tier conferences such as the International Joint Conference on Artificial Intelligence (IJCAI), the AAAI Conference on Artificial Intelligence (AAAI), the European Conference on Computer Vision (ECCV), ACM Multimedia (ACM MM), and the International Symposium on Mixed and Augmented Reality (ISMAR).