Keynote speakers are presented in alphabetical order; some short CVs appear in the Organizers' CV section.
Prof. Christopher W. Clark - Cornell University, NY, USA
C. W. Clark1; P. J. Dugan1; Y. A. LeCun2; S. M. Van Parijs3; D. W. Ponirakis1; A. N. Rice1
"Application of advanced analytics and high-performance-computing technologies for mapping occurrences of acoustically active marine mammals over ecologically meaningful scales"
1Bioacoustics Research Program, Cornell University, 159 Sapsucker Woods Road, Ithaca, New York 14850, USA
2The Courant Institute of Mathematical Sciences, New York University, 715 Broadway, New York, New York 10003, USA
3Northeast Fisheries Science Center, Woods Hole Oceanographic Institution, 166 Water Street, Woods Hole, Massachusetts 02543, USA
Marine mammals are adapted to produce and perceive a great variety of sounds that collectively span 4-6 orders of magnitude along the dimensions of frequency, time and space. For example, blue and fin whales produce intense, long, very-low-frequency songs that can be acoustically detected and tracked at ranges of 1,500 miles over periods of many weeks. In contrast, sperm whales hunting for squid at half-mile depths produce intense, very short, broadband echolocation pulses that can be acoustically detected and tracked at ranges of a few miles over periods of hours. This perspective leads to two important concepts, referred to here as acoustic ecology and acoustic habitat: acoustic ecology is the study of the acoustics involved in the interactions of living organisms, while acoustic habitat is the ecological space that a particular species utilizes acoustically. Marine mammals depend on access to their normal acoustic habitats for basic life functions, including communication, food finding, navigation and predator detection. Acoustic masking from anthropogenic sounds (vessel noise, energy exploration, commercial activities) can result in measurable losses of marine mammal acoustic habitats. Masking reduces the space within which an animal effectively operates, which is, ecologically, a reduction in the animal's acoustic habitat. Traditional mechanisms for detecting, classifying and analyzing acoustically active marine mammals are insufficient for mapping the ecological scales over which animals normally operate and anthropogenic activities influence their acoustic habitats.
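The habitat-loss argument above can be made quantitative with the passive sonar equation. The sketch below uses purely illustrative, hypothetical numbers (not values from the talk) to estimate how a rise in ambient noise shrinks the range at which a call remains detectable, assuming simple spherical-spreading transmission loss TL = 20 log10(r):

```python
import math

def max_range_m(source_level_db, noise_level_db, detection_threshold_db,
                spreading_coeff=20.0):
    """Largest range r (metres) at which SL - TL(r) - NL >= DT,
    with transmission loss TL(r) = spreading_coeff * log10(r)."""
    excess_db = source_level_db - noise_level_db - detection_threshold_db
    return 10 ** (excess_db / spreading_coeff)

# Illustrative (hypothetical) values for a low-frequency whale call.
SL, DT = 180.0, 10.0           # source level (dB re 1 uPa @ 1 m); detection threshold
quiet, noisy = 75.0, 95.0      # ambient noise levels, dB re 1 uPa

r_quiet = max_range_m(SL, quiet, DT)
r_noisy = max_range_m(SL, noisy, DT)
print(f"range drops from {r_quiet/1000:.0f} km to {r_noisy/1000:.0f} km")
```

Because range scales as 10^(excess/20) under this assumption, a 20 dB rise in noise cuts the detection range tenfold, and the corresponding acoustic habitat area by roughly a factor of one hundred.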
Here we process a relatively large acoustic data set (40 months, 6-10 channels) using advanced detection-classification analytics combined with a high-performance-computing system to explore the spatio-temporal dynamics for a suite of acoustically active marine mammals (fin, humpback, minke, and right whales) and a fish species (haddock) whose sounds can be confused with whales. The results yield insights into mechanisms for optimizing the analytical system as well as dynamic maps and metrics that describe the species-specific, spatio-temporal variability for these acoustically active animals as well as the spatio-temporal variability of their background noise environments. When considered from the large-scale, ecological perspective, these results point to an entirely novel approach for analyzing, visualizing and understanding ocean acoustics at scale.
Prof. D. Sheldon - University of Massachusetts Amherst, USA, and Prof. T. G. Dietterich - Oregon State University, USA
"Machine Learning and Ecology"
This talk will discuss current work and open problems in applying machine learning to conservation ecology. It will begin with a broad overview of challenges and opportunities for machine learning in ecology. It will then discuss two example problems: approximate Bayesian inference to infer the velocities of migrating birds from weather radar data, and species distribution modeling. Finally, it will highlight the important role of latent process models in ecology and discuss some of the algorithmic challenges related to these models.
The work discussed in the talk is joint work between University of Massachusetts Amherst, Oregon State University, and the Cornell Lab of Ornithology.
Short Bio: Daniel Sheldon is an assistant professor in the School of Computer Science at the University of Massachusetts Amherst. The primary goal of his research is to develop new algorithms to understand and make decisions about the environment using large data sets. He leads the UMass portion of the NSF-funded BirdCast project for developing novel machine learning algorithms to model and forecast bird migration, in collaboration with Oregon State University and the Cornell Lab of Ornithology.
"Sparse operators for deformed marine or terrestrial bioacoustic event classification / challenges in bird and whale cocktail party labeling"
Hervé Glotin; Joseph Razik; Sébastien Paris
We first recall the machine learning baselines developed for automatic speech classification.
We then discuss efficient approaches for the classification of animal sound units: sparse coding. We illustrate its advantages with various species, from birds to whales.
For example, since humpback whale calls present several similarities to speech, including voiced and unvoiced vocalizations, a great variety of methods have been used to analyze them. Most studies of these songs are based on the classification of sound units; however, detailed analysis of the vocalizations has shown that the features of a unit can change abruptly throughout its duration, making it difficult to characterize and cluster units systematically. We then show how sparse coding can help determine the stable components of a song versus the evolving ones. This results in a separation of the song components, which in turn highlights song copying between males.
We finally discuss how such combined models are relevant to the derivation of statistical algorithms for solving ill-posed inverse problems such as source localization, applied to birds or whales. We will present a challenge on 3D whale localization using passive acoustics to illustrate this perspective.
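As a concrete illustration of the sparse-coding idea above, here is a minimal greedy matching-pursuit sketch over a toy dictionary of spectral atoms; the dictionary and signal are synthetic stand-ins, not the features used in the authors' work:

```python
import numpy as np

def matching_pursuit(x, D, n_nonzero=3):
    """Greedy sparse code: approximate x as D @ a using at most
    n_nonzero active atoms. D must have unit-norm columns (atoms)."""
    residual = x.astype(float).copy()
    a = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        corr = D.T @ residual              # correlation with each atom
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        a[k] += corr[k]
        residual -= corr[k] * D[:, k]      # remove its contribution
    return a, residual

# Toy dictionary: 4 unit-norm "spectral atoms" over 8 frequency bins.
rng = np.random.default_rng(0)
D = rng.normal(size=(8, 4))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 1] - 0.5 * D[:, 3]          # a two-atom mixture
a, r = matching_pursuit(x, D, n_nonzero=2)
```

Each song unit is thus summarized by a handful of active atoms; stable components reuse the same atoms across renditions, while evolving components show drifting activations.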
"Classification of Mysticete Sounds: Extracting spectro-temporal structures of calls using sparse architectures"
Classification of mysticete sounds has long been a challenging task in the bioacoustics field. The diverse nature of the signals, due to inherent variations as well as the use of different recording apparatus and low signal-to-noise-ratio conditions, often leads to systems that are unable to generalize across different species and require either manual interaction or hyper-tuning in order to fit the underlying distributions. This talk presents a Restricted Boltzmann Machine (RBM) and a Sparse Auto-Encoder (SAE) used to learn discriminative structure tokens for the different calls, which can then be used in a classification framework.
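To make the sparse auto-encoder idea concrete, here is a minimal NumPy sketch of an SAE with tied weights and an L1 sparsity penalty on the hidden "tokens", trained by plain gradient descent on random stand-in spectrogram patches (all sizes and hyperparameters are illustrative assumptions, not those of the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 16))             # stand-in spectrogram patches
n, lam, lr = X.shape[0], 1e-2, 0.05       # batch size, L1 weight, step size

W = rng.normal(scale=0.1, size=(16, 8))   # tied weights: decode with W.T
b, c = np.zeros(8), np.zeros(16)

losses = []
for _ in range(300):
    H = np.tanh(X @ W + b)                    # sparse hidden tokens
    Xhat = H @ W.T + c                        # reconstruction
    err = Xhat - X
    losses.append(0.5 * (err**2).sum() / n + lam * np.abs(H).sum() / n)
    # Backpropagation: reconstruction error plus L1 subgradient on H.
    dXhat = err / n
    dH = dXhat @ W + lam * np.sign(H) / n
    dpre = dH * (1 - H**2)                    # tanh derivative
    W -= lr * (X.T @ dpre + dXhat.T @ H)      # encoder + decoder grads (tied)
    b -= lr * dpre.sum(axis=0)
    c -= lr * dXhat.sum(axis=0)
```

After training, the rows of H serve as sparse structure tokens: a downstream classifier operates on which atoms a call activates rather than on raw, noise-sensitive spectra.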
Prof. Y. Bengio - Department of Computer Science and Operations Research, Université de Montréal; Canada Research Chair in Statistical Learning Algorithms
"Deep Learning: Looking Forward"
Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead.
This talk proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these challenges for AI applications such as those involving images, text or acoustics.
Accompanying paper: http://arxiv.org/abs/1305.0445
Prof. Diana Reiss - Hunter College - CUNY, NY USA
"Gaining insights into the structure and use of dolphin whistle repertoires"
In sharp contrast with descriptions of contact calls in all other species, the contact or cohesion calls used by bottlenose dolphins, Tursiops truncatus, in contexts of social isolation have been historically described as individually distinctive and categorically different whistle types, termed "signature whistles". These whistle types have been proposed to function as labels or names of conspecifics. Other studies have reported an absence of signature whistles and have demonstrated that dolphins, like other species, produce a predominant shared whistle type that probably contains individual variability in the acoustic parameters of this shared whistle type. To further understand the discrepancies between different studies on dolphin whistle communication and the vast differences reported between the isolation calls of dolphins and other species, we conducted a study replicating the approach and methodologies used in the studies that originally and subsequently characterized signature whistles. In contrast to these studies, we present clear evidence that, in contexts of isolation, dolphins use a predominant and shared whistle type rather than individually distinctive signature whistles. This general class of shared whistles was the predominant call of 10 of the 12 individuals, the same shared whistle type previously reported as predominant for individuals within both socially interactive and separation contexts. Results on the further classification of this predominant shared whistle type indicated that 14 subtle variations within this one whistle type could be partially attributed to individual identity.
Short Bio: Prof. Reiss earned her Ph.D. in Speech and Communication Science from Temple University and is an internationally recognized researcher in animal cognition and communication. In 1982, she developed a laboratory at Marine World in California, where she investigated the nature of dolphin communication and cognitive abilities.
Her research focuses on marine mammal cognition and communication, comparative animal cognition, and the evolution of intelligence. Her past work includes cognitive studies using interactive keyboards with dolphins to investigate their learning and communicative abilities, research on mirror self-recognition in marine mammals, marine mammal vocal repertoires, and vocal and behavioral development in dolphins. Her work also involves the rescue and rehabilitation of stranded marine mammals. She was one of the scientists instrumental in the campaign to protect dolphins from being killed in tuna nets, which resulted in the labeling of "dolphin safe" tuna.
Prof. Reiss's work has been published in numerous international scientific journals and book chapters and has been featured in many television science programs, including Nature, National Geographic, Wild Kingdom, the Today Show and several BBC nature shows.
Prof. Reiss's publications include "Self-recognition in an Asian elephant", "The fallacy of 'signature whistles' in bottlenose dolphins: a comparative perspective of 'signature information' in animal vocalizations", "Mirror self-recognition in the bottlenose dolphin: A case of cognitive convergence", and others.
Prof. Gianni Pavan - Centro Interdisciplinare di Bioacustica e Ricerche Ambientali, Department of Earth and Environment Sciences, University of Pavia, Italy, Gianni.email@example.com
"Monitoring bioacoustic diversity for research, conservation and education"
Bioacoustics is an emerging technology in biodiversity science and conservation: from the recognition and monitoring of individual species through to soundscape description in terrestrial and aquatic environments, it provides new insights and approaches.
However, the complexity of the acoustic world is difficult to manage and requires new, dedicated smart algorithms to process the data and extract useful, easy-to-handle information.
Soundscape analysis, or sonic environment analysis, also provides insights into the noise pollution problem. Natural soundscapes can be contaminated by the noise produced by human activities; this may produce behavioural and physiological changes and interfere with the communicative sounds used by animals (masking). Noise may have a severe impact on animals' lives and on natural habitats; this is particularly true in the underwater environment, where sound propagates well and animals use sound as a primary means to communicate, navigate and find food.
Examples of sound monitoring and sonic environment analysis will be presented in the framework of wildlife conservation and acoustic ecology issues.
Prof. Ofer Tchernichovski - Hunter College - CUNY, NY, USA
"Physiological brain processes that underlie song learning"
Sleep affects learning and development in humans and other animals, but the role of sleep in developmental learning has never been examined. Here we show the effects of night-sleep on song development in the zebra finch by recording and analysing the entire song ontogeny. During periods of rapid learning we observed a pronounced deterioration in song structure after night-sleep. The song regained structure after intense morning singing. Daily improvement in similarity to the tutored song occurred during the late phase of this morning recovery; little further improvement occurred thereafter. Furthermore, birds that showed stronger post-sleep deterioration during development achieved a better final imitation. The effect diminished with age. Our experiments showed that these oscillations were not a result of sleep inertia or lack of practice, indicating the possible involvement of an active process, perhaps neural song-replay during sleep. We suggest that these oscillations correspond to competing demands of plasticity and consolidation during learning, creating repeated opportunities to reshape previously learned motor skills.
Short Bio: Ofer Tchernichovski is a professor at Hunter College - CUNY. His research uses the songbird to study mechanisms of vocal learning. Like early speech development in the human infant, the songbird learns to imitate complex sounds during a critical period of development. The adult bird can no longer imitate - we do not know why. His lab studies the animal behavior and dynamics of vocal learning and sound production across different brain levels, aiming to uncover the specific physiological and molecular (gene expression) brain processes that underlie song learning. He has published extensively in Nature and Science, including the Nature Letter "De novo establishment of wild-type song culture in the zebra finch" (Nature 459, 28 May 2009).
Dr. Peter J. Dugan - Cornell University, NY, USA
P. J. Dugan (1), C. W. Clark (1), Y. A. LeCun (2), S. M. Van Parijs (3), D. W. Ponirakis (1), M. Popescu (1), M. Pourhomayoun (1), Y. Shiu (1), A. N. Rice (1)
(1) Bioacoustics Research Program, Cornell University, NY USA
(2) The Courant Institute of Mathematical Sciences, New York University, USA
(3) Northeast Fisheries Science Center, Woods Hole Oceanographic Institution, MA USA
"Practical considerations for using high-performance computing for applied detection-classification on continuous passive-acoustic data"
From biology to technology, the rate of data collection often far exceeds the ability to process the information. Processing large data sets is becoming a major point of interest for every field of science. The ease of digital data collection allows for the capture of many terabytes of data, yet this often creates major computational bottlenecks when trying to analyze such datasets. This talk focuses on a new system developed by Cornell University that uses high performance computing (HPC), and combines it with parallel and distributed processing approaches to process large amounts of bioacoustic data.
This work will discuss how the HPC system was developed using commercial off-the-shelf (COTS) tools, creating a client-server model that is expandable, flexible and portable. The presentation will demonstrate a strategy for providing a flexible software interface for running a plurality of data-mining algorithms on a dense computer cluster called the Acoustic Data Accelerator, or HPC-ADA. In addition, a variety of tools have been developed to complement the system, providing efficient methods for data processing.
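The scatter/gather pattern behind such a system can be sketched in a few lines; the detector, chunk size, and worker pool below are hypothetical stand-ins for the HPC-ADA's actual components, which distribute work across a cluster rather than local threads:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_events(chunk):
    """Stand-in detector: flag samples above a fixed energy threshold.
    A real worker would run a full detection-classification algorithm."""
    start, samples = chunk
    return [start + i for i, s in enumerate(samples) if abs(s) > 0.9]

def chunked(signal, size):
    """Split a long recording into (start_index, samples) work units."""
    return [(i, signal[i:i + size]) for i in range(0, len(signal), size)]

# Synthetic "recording": mostly quiet, with two loud events.
signal = [0.0] * 1000
signal[123] = signal[789] = 1.0

# Scatter work units across a pool, then gather and merge the results.
with ThreadPoolExecutor(max_workers=4) as pool:
    detections = sorted(d
                        for part in pool.map(detect_events, chunked(signal, 250))
                        for d in part)
print(detections)   # -> [123, 789]
```

Because each chunk carries its own start index, detections stay referenced to absolute recording time no matter which worker processed them, which is what allows the merged results to be mapped back to calendar dates and sensor locations.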
The authors will also summarize a specific example for processing multiple months of multi-channel, continuous data recorded in the Stellwagen Bank National Marine Sanctuary, MA, USA. Results show distinct seasonal distribution patterns of species-specific vocalization for right whales (Eubalaena glacialis) and minke whales (Balaenoptera acutorostrata).
These examples will also show other related acoustic activity from a variety of other marine animals. Results from these data products illustrate daily and seasonal patterns as shown across multiple sensors. As the scale of data collection continues to expand (the bioacoustics community will soon be faced with the challenge of processing petabytes of data), such high-throughput computational approaches will be essential in bringing passive acoustic monitoring and analysis into the realm of big data science.