Learning to detect an animal sound from five examples
Date
2023-08-10
Authors
Nolasco, Ines
Singh, Shubhr
Morfi, Veronica
Lostanlen, Vincent
Strandburg-Peshkin, Ariana
Vidana-Vila, Ester
Gill, Lisa F.
Pamula, Hanna
Whitehead, Helen
Kiskin, Ivan
Jensen, Frants H.
Morford, Joe
Emmerson, Michael G.
Versace, Elisabetta
Grout, Emily
Liu, Haohe
Ghani, Burooj
Stowell, Dan
DOI
10.1016/j.ecoinf.2023.102258
Keywords
Bioacoustics
Deep learning
Event detection
Few-shot learning
Abstract
Automatic detection and classification of animal sounds has many applications in biodiversity monitoring and animal behavior. In the past twenty years, the volume of digitised wildlife sound available has increased massively, and automatic classification through deep learning now shows strong results. However, bioacoustics is not a single task but a vast range of small-scale tasks (such as individual ID, call type, emotional indication) with wide variety in data characteristics, and most bioacoustic tasks do not come with strongly-labelled training data. The standard paradigm of supervised learning, focussed on a single large-scale dataset and/or a generic pre-trained algorithm, is insufficient. In this work we recast bioacoustic sound event detection within the AI framework of few-shot learning. We adapt this framework to sound event detection, such that a system can be given the annotated start/end times of as few as 5 events, and can then detect events in long-duration audio, even when the sound category was not known at the time of algorithm training. We introduce a collection of open datasets designed to strongly test a system's ability to perform few-shot sound event detection, and we present the results of a public contest to address the task. Our analysis shows that prototypical networks are a commonly used strategy and that they perform well when enhanced with adaptations for the general characteristics of animal sounds. However, systems with high time-resolution capabilities perform best in this challenge. We demonstrate that widely varying sound event durations are an important factor in performance, as is non-stationarity, i.e. gradual changes in conditions throughout the duration of a recording. For fine-grained bioacoustic recognition tasks without massive annotated training data, our analysis demonstrates that few-shot sound event detection is a powerful new method, strongly outperforming traditional signal-processing detection methods in the fully automated scenario.
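To make the prototypical-network strategy named in the abstract concrete, the sketch below shows the core idea in Python: the 5 annotated support events are embedded and averaged into a single prototype vector, and each frame of the query recording is flagged as an event when its embedding lies close to that prototype. This is a minimal illustration, not the authors' system; the embed function is a placeholder standing in for a trained neural encoder, and the names few_shot_detect and threshold are hypothetical.

import numpy as np

def embed(frames: np.ndarray) -> np.ndarray:
    """Placeholder embedding: in practice a trained encoder maps audio
    frames into a learned feature space. Here the raw features are
    passed through unchanged so the sketch runs end to end."""
    return frames

def few_shot_detect(support: np.ndarray, queries: np.ndarray,
                    threshold: float) -> np.ndarray:
    """Flag query frames whose Euclidean distance to the prototype
    (the mean embedding of the 5 support examples) is below threshold."""
    prototype = embed(support).mean(axis=0)               # one vector for the class
    dists = np.linalg.norm(embed(queries) - prototype, axis=1)
    return dists < threshold                              # True = event detected

# Toy usage: 5 annotated support events and 100 query frames of 16-dim features.
rng = np.random.default_rng(0)
support = rng.normal(loc=1.0, size=(5, 16))    # the "5 examples"
queries = rng.normal(loc=0.0, size=(100, 16))  # long recording, mostly background
print(few_shot_detect(support, queries, threshold=4.0).sum(), "frames flagged")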
Description
© The Author(s), 2023. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Nolasco, I., Singh, S., Morfi, V., Lostanlen, V., Strandburg-Peshkin, A., Vidaña-Vila, E., Gill, L., Pamula, H., Whitehead, H., Kiskin, I., Jensen, F., Morford, J., Emmerson, M., Versace, E., Grout, E., Liu, H., Ghani, B., & Stowell, D. (2023). Learning to detect an animal sound from five examples. Ecological Informatics, 77, 102258, https://doi.org/10.1016/j.ecoinf.2023.102258.
Citation
Nolasco, I., Singh, S., Morfi, V., Lostanlen, V., Strandburg-Peshkin, A., Vidaña-Vila, E., Gill, L., Pamula, H., Whitehead, H., Kiskin, I., Jensen, F., Morford, J., Emmerson, M., Versace, E., Grout, E., Liu, H., Ghani, B., & Stowell, D. (2023). Learning to detect an animal sound from five examples. Ecological Informatics, 77, 102258.