Low-Power Computer-Vision-based Pedestrian Counting Research II
- Chen Feng and Paul Rothman, Assistant Professor (Feng), Director of Smart Cities and IoT (Rothman), NYU AI4CE Lab, and NYC Office of Technology and Innovation
- Google Coral team
Many City agencies are involved in the use, planning, and design of public space, but good data on pedestrian flows can be hard to come by. Manual counts require coordination and planning and incur staff costs. Computer-vision (CV) counting technologies are being tested in the city now, but it is already clear that the infrastructure requirements (tapping into electricity and mounting to light poles) will limit broader use of this technology, particularly for shorter-term studies where the infrastructure investment is not worth the time and effort. A low-cost, battery-powered CV sensor can help fill the data gap and allow agencies to use privacy-protected automated counts in short-term deployments with minimal infrastructure requirements.
This is phase 2 of the project, continuing from the previous CUSP capstone project. Phase 2 focuses on (1) scaling up the project with more real-world data collection and analysis, and (2) improving our data visualization dashboard.
Category: Urban Infrastructure
Project Description & Overview
In recent years, many hardware manufacturers have created development boards that support low-power computer vision (LPCV) applications, and a fair amount of academic research has produced low-power models for LPCV. This proposal aims to take advantage of these recent technology advances to develop a battery-operated hardware device that New York City agencies can use to count pedestrians as they move through public space in the city. As an added resource to the proposed R&D, partnering with a technology developer is a possibility.
In terms of requirements, the device should work in outdoor environments, run off a battery for 2-4 weeks (either standalone or with PV), connect to the cloud via LoRaWAN or cellular, detect at least one object type at a time (e.g., pedestrian or cyclist), and send count data to a scalable dashboard.
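As a rough illustration of the telemetry requirement above, the sketch below packs one interval count into a compact binary message suitable for a constrained uplink such as LoRaWAN, whose payloads must stay small. The field layout and names are assumptions for illustration, not a finalized schema for the project:

```python
import struct

# Hypothetical compact payload: device id (uint16), unix timestamp (uint32),
# object class (uint8: 0 = pedestrian, 1 = cyclist), count (uint16).
# Total: 9 bytes, well under typical LoRaWAN payload limits.
PAYLOAD_FMT = ">HIBH"  # big-endian, fixed-width fields

def encode_count(device_id: int, ts: int, obj_class: int, count: int) -> bytes:
    """Pack one interval count into a compact binary payload (edge side)."""
    return struct.pack(PAYLOAD_FMT, device_id, ts, obj_class, count)

def decode_count(payload: bytes) -> dict:
    """Unpack a payload back into a readable record (server/dashboard side)."""
    device_id, ts, obj_class, count = struct.unpack(PAYLOAD_FMT, payload)
    return {"device_id": device_id, "ts": ts, "class": obj_class, "count": count}
```

For example, `encode_count(42, 1700000000, 0, 17)` yields a 9-byte message that the receiving side can decode back into a record; sending aggregated counts rather than video keeps the uplink small and is also what makes the system privacy-protected by design.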
Video and other related data will be collected through this project.
Software programming: Python, HTML, and web development.
Command-line skills (Linux, terminal).
Experience with deep learning frameworks (e.g., PyTorch, TensorFlow).
Data visualization experience to generate a map.
Data structure skills to design how our data should be organized.
Learning Outcomes & Deliverables
(1) A scalable, open-source, web-based data visualization dashboard (software library/tool) that shows pedestrian statistics received from our edge device over time.
(2) Real-world data acquisition using our edge device at a sufficient number of NYC locations (e.g., traffic intersections, public plazas).
(3) Data analysis and visualization of the collected data from (2) to show the utility of the whole system (including both the edge device prototype and the dashboard).
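As a minimal sketch of the analysis step in deliverable (3), the snippet below aggregates per-interval counts into hourly totals per location, the kind of time series a dashboard chart would plot. The record fields (`location`, `ts`, `count`) are illustrative assumptions, not the project's actual schema:

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical records as the dashboard back end might receive them.
records = [
    {"location": "plaza_a", "ts": 1700000000, "count": 12},
    {"location": "plaza_a", "ts": 1700001800, "count": 9},
    {"location": "plaza_a", "ts": 1700003600, "count": 15},
]

def hourly_totals(records):
    """Sum per-interval counts into (location, hour) buckets for a time-series chart."""
    buckets = defaultdict(int)
    for r in records:
        # Truncate each timestamp to the top of its UTC hour.
        hour = datetime.fromtimestamp(r["ts"], tz=timezone.utc).replace(
            minute=0, second=0, microsecond=0)
        buckets[(r["location"], hour.isoformat())] += r["count"]
    return dict(buckets)
```

Grouping by (location, hour) keys like this is one straightforward way to let a map-based dashboard drill from a citywide view down to a single intersection's hourly profile.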