Featured by New York University.
The Sounds of New York City (SONYC) project launches its first citizen science initiative to help NYU researchers train machine listening models.
The Sounds of New York City (SONYC) —a first-of-its-kind research project addressing urban noise pollution—has launched a citizen science initiative to help artificial intelligence (AI) technology understand exactly which sounds are contributing to unhealthy levels of noise in New York City.
SONYC—a National Science Foundation-funded collaboration between New York University (NYU), the City of New York, and Ohio State University—is in its third year of a five-year research agenda and leverages machine listening technology, big data analysis, and citizen science to more effectively monitor, analyze, and mitigate urban noise pollution.
(For an overview of the SONYC project, see this recent contributed article and video from Communications of the ACM and NYU’s 2016 press release.)
The citizen science initiative, recently launched in the Zooniverse citizen science web portal, enlists the help of volunteers to identify and label individual sound sources—such as a jackhammer or an ice cream truck—in 10-second anonymized urban sound recordings transmitted from acoustic sensors positioned in high-noise environments in the city.
With the help of citizen scientists, machine listening models learn to recognize these sounds on their own, assisting researchers in categorizing the 30 years' worth of sound data collected by the sensors over the past two years. This, in turn, facilitates big data analysis that will provide city enforcement officials with a more accurate picture of noise, and its causes, over time. Ultimately, the SONYC team aims to empower city agencies to implement targeted, data-driven noise mitigation interventions.
“It’s impossible for us to sift through this data on our own, but we’ve learned through extensive research how to seamlessly integrate citizen scientists into our systems and, in turn, advance our understanding of how humans can effectively train machine learning models,” said Juan Pablo Bello, lead investigator; director of the Music and Audio Research Lab (MARL) at the NYU Steinhardt School of Culture, Education, and Human Development; and director of NYU’s Center for Urban Science and Progress (CUSP).
“Artificial intelligence needs humans to guide it—much as a child learns by observing its parents—and we can see this training model having widespread applications in other fields. We’re incredibly grateful for the help of our volunteers,” continued Bello.
“Training machines to accurately recognize sounds is a major challenge that can put citizen-researchers at the forefront of machine learning research. This is an opportunity for New York residents—and anyone interested in how sound affects our lives—to contribute to a scientific project that will help improve our sonic environments,” said Oded Nov, an associate professor of technology management and innovation at NYU Tandon.