• Associate Professor Daniel Neill recently gave an invited plenary talk on “Machine Learning and Event Detection for Population Health” at the 2nd Annual Conference on Machine Learning for Science and Engineering. The conference highlighted research advances that apply artificial intelligence methods, new machine learning algorithms designed for science and engineering problems, and the ways these methods drive innovation across fields.
  • Congratulations to Assistant Professor Yury Dvorkin, whose Smart Energy Research (SEARCH) Group proposal has been selected for a grant by the Alfred P. Sloan Foundation! Professor Dvorkin’s project will study how information asymmetry among distribution utilities; aggregators and developers of distributed energy resources; electricity consumers; state regulators; and federal regulators affects the economically efficient and technically feasible roll-out of distributed energy resources. The group will also examine engineering and policy solutions to mitigate the effects of information asymmetry on emerging smart grid policy and regulatory decisions.
  • Congratulations to Industry Assistant Professor Benedetta Piantella, who was selected as a semifinalist in the American Made Solar Challenge! With her collaborator R. David Gibbs, Professor Piantella will receive $100,000 to test her prototype, the Solar SEED, a smart charge controller that offers flexible, scalable energy access and emergency backup power.
  • The Urban Modeling Group hosted 80 participants here at CUSP at the recent Sustainable Urban Subsurface Systems Workshop! The event, which was sponsored by the National Science Foundation, was a huge success and a great opportunity for researchers from across the country to come together around the crucial challenges and opportunities in the urban subsurface.
  • The DCASE Urban Sound Tagging Challenge organized by the Sounds of New York City (SONYC) project has concluded, and the results were posted here. The goal of this challenge was to predict the presence of 23 classes of sounds in recordings from the SONYC sensor network. The data used for training and evaluation is the first set of recordings we have released from the SONYC network. Ten teams submitted 23 systems to the challenge. These submissions and the challenge results will be presented at the DCASE (Detection and Classification of Acoustic Scenes and Events) Workshop that SONYC is hosting here at NYU in October.


  • Associate Professor Daniel Neill had a paper accepted to JMLR, a top machine learning journal.
  • Postdoctoral Researcher Vincent Lostanlen recently published two journal papers:
    • J. Andén, V. Lostanlen and S. Mallat, “Joint Time–Frequency Scattering,” in IEEE Transactions on Signal Processing, vol. 67, no. 14, pp. 3704-3718, July 15, 2019. doi: 10.1109/TSP.2019.2918992. Andén, Lostanlen, and Mallat propose a new mathematical model for sound perception. This model decomposes every sound into a “scattering network” of responses at various frequencies, time scales, and musical intervals. Like a deep neural network, the scattering network is able to recognize musical instruments and urban sounds automatically.
    • Philip Warrick, Vincent Lostanlen, and Masun Nabhan Homsi, “Hybrid Scattering-LSTM Networks for Automated Detection of Sleep Arousals,” in Physiological Measurement, 2019, in press. To diagnose sleep disorders, clinicians measure data from the brain, heart, and lungs. In this context, Warrick, Lostanlen, and Homsi have developed an algorithm that detects interruptions during sleep, known as arousals. This algorithm is the first to combine scattering networks with long short-term memory (LSTM) networks, two existing methods in deep learning.