Towards Smart and Accessible Transportation Hub - Research Capacity Building and Community Engagement
This SCC-Planning project will design a human-centered service system that uses the existing cyber infrastructure of a transportation hub. Users will be assisted by an "app" for navigating complicated buildings and urban spaces. With this app, the visually impaired, whose numbers exceed 6 million individuals today, will find using public transportation services more convenient and efficient. Significant hurdles must be overcome to realize this design, called the Smart and Accessible Transportation Hub (SAT-Hub), which will extend an individual's mobility, offer greater freedom of movement, and open new possibilities for daily living and community engagement. In addition, the SAT-Hub will provide unique, interdisciplinary training opportunities for students, including under-represented populations in STEM at CUNY City College, CUNY Borough of Manhattan Community College, and Rutgers.
The planning for the SAT-Hub will bring together several technical areas: smart infrastructure, sensing and computing, transportation planning, and accessibility and mobility. The project has three main objectives: (1) design a smart transportation hub framework for experimenting with, debugging, and refining the smart service; (2) strengthen partnerships with community stakeholders by exploring the unique needs of their user communities through regular community engagement; and (3) identify cross-disciplinary, integrative research themes that enable the successful deployment of a solution. The City University of New York and Rutgers University will work to meet these objectives with partners from local government in New York and New Jersey, regional transit agencies, and service institutions for underserved users.
- Performance Period: September 2017 - August 2019
- Lead Institution: CUNY City College
- Award Number: 1737533
- Lead PI: Zhigang Zhu
- Co-PI: William Seiple
- Co-PI: Jie Gong
- Co-PI: Camille Kamga
- Co-PI: Cecilia Kelnhofer-Feeley
- Co-PI: Candace Brakewood
Project Material
- Real-Time 3D Object Detection, Recognition and Presentation Using a Mobile Device for Assistive Navigation
- Improving Building Energy Efficiency through Data Analysis
- An AI-enabled Annotation Platform for Storefront Accessibility and Localization
- Context understanding in computer vision: A survey
- Exploring an Affective and Responsive Virtual Environment to Improve Remote Learning
- Real-time pedestrian pose estimation, tracking and localization for social distancing
- An Integrated Mobile Vision System for Enhancing the Interaction of Blind and Low Vision Users with Their Surroundings
- Precise indoor localization with 3D facility scan data
- MultiCLU: Multi-stage Context Learning and Utilization for Storefront Accessibility Detection and Evaluation
- ARMSAINTS: An AR-based Real-time Mobile System for Assistive Indoor Navigation with Target Segmentation
- SnapshotNet: Self-supervised feature learning for point cloud data segmentation using minimal labeled data
- Real-Time 3D Object Detection and Recognition using a Smartphone
- A route optimization model based on building semantics, human factors, and user constraints to enable personalized travel in complex public facilities
- Impact of Labeling Schemes on Dense Crowd Counting Using Convolutional Neural Networks with Multiscale Upsampling
- Building an Annotated Damage Image Database to Support AI-Assisted Hurricane Impact Analysis
- Monocularly Generated 3D High Level Semantic Model by Integrating Deep Learning Models and Traditional Vision Techniques
- UnityPIC: Unity Point-Cloud Interactive Core
- ASSIST: Assistive Sensor Solutions for Independent and Safe Travel of Blind and Visually Impaired People
- A Snapshot-based Approach for Self-supervised Feature Learning and Weakly-supervised Classification on Point Cloud Data
- ASSIST: Evaluating the usability and performance of an indoor navigation assistant for blind and visually impaired people
- Multimodal Information Integration for Indoor Navigation Using a Smartphone
- SAT-Hub: Smart and Accessible Transportation Hub for Assistive Navigation and Facility Management
- Improving Dense Crowd Counting Convolutional Neural Networks using Inverse k-Nearest Neighbor Maps and Multiscale Upsampling
- Integrating AR and VR for Mobile Remote Collaboration
- Multi-level Scene Modeling and Matching for Smartphone-Based Indoor Localization
- Unsupervised Feature Learning for Point Cloud Understanding by Contrasting and Clustering Using Graph Convolutional Neural Networks
- Generalizing semi-supervised generative adversarial networks to regression using feature contrasting
- Unsupervised Feature Learning for Point Cloud by Contrasting and Clustering with Graph Convolutional Neural Network
- Dense Crowd Counting Convolutional Neural Networks with Minimal Data using Semi-Supervised Dual-Goal Generative Adversarial Networks
- ASSIST: Personalized Indoor Navigation via Multimodal Sensors and High-Level Semantic Information
- Crowd Counting with Minimal Data Using Generative Adversarial Networks for Multiple Target Regression
- A Hybrid Indoor Positioning System for Blind and Visually Impaired Using Bluetooth and Google Tango
- Building Smart and Accessible Transportation Hubs with Internet of Things, Big Data Analytics, and Affective Computing
Herbert G. Kayser Professor of Computer Science | Faculty of Computer Science PhD Program, CUNY Graduate Center | Faculty of M.S. Program in Cognitive Neuroscience, CUNY Graduate Center | Director, City College Visual Computing Laboratory (CCVCL) | Co-Director, Master’s Program in Data Science and Engineering, Grove School of Engineering