
November 21, 2010

Scientists Decode Birdsong


From a November 16, 2010 Technology Review article: "The secret behind the beautiful songs that birds sing has been decoded and reproduced for the first time."

"The Bengalese finch's songs vary in seemingly unpredictable ways and cannot be explained by a simple Markov model. Just how the Bengalese finch generates its song is a mystery."

"Until now. By a strange coincidence, two papers appear on the arXiv this week putting forward similar explanations for the Bengalese finch's ability.... Both groups say they have decoded the statistical secrets in Bengalese finch song and have developed models that can reproduce it."

"Both teams have come up with models that have a crucial difference from standard Markov models. Instead of the simple one-to-one mapping between syllable and circuit that explains the zebra finch's song, they say that in Bengalese finches there is a many-to-one mapping, meaning that a given syllable can be produced by several neural circuits. That's why the statistics are so much more complex, they say."

"This type of model is called a hidden Markov model because the things that drive the observable part of the system—the song—remain hidden. And it can reproduce all kinds of previously mysterious features of Bengalese finch song, such as the pattern of repeated sequences, the probability of observing a given syllable at a given step from the start, and the distribution of N-grams, or sequences of N syllables."
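To make the many-to-one idea concrete, here is a toy sketch of my own (not taken from either paper): a hidden chain in which two different states both emit the syllable "a". The syllable sequence alone then obeys a rule that no first-order Markov model defined on syllables can express.

```python
import random

# Toy hidden Markov chain. Hidden states 0 and 1 both emit syllable "a"
# (the many-to-one mapping); state 2 emits "b". Transitions here are
# deterministic for clarity, but each entry could carry any probabilities.
TRANS = {
    0: [(1, 1.0)],   # after the first "a"-state, always the second "a"-state
    1: [(2, 1.0)],   # after the second "a"-state, always the "b"-state
    2: [(0, 1.0)],   # "b" returns to the first "a"-state
}
EMIT = {0: "a", 1: "a", 2: "b"}

def sample_song(n, state=0, rng=None):
    """Sample n syllables from the hidden chain, returning only the song."""
    rng = rng or random.Random(0)
    song = []
    for _ in range(n):
        song.append(EMIT[state])
        choices, weights = zip(*TRANS[state])
        state = rng.choices(choices, weights=weights)[0]
    return "".join(song)
```

Sampling nine syllables gives the cycle "aabaabaab": after one "a" another "a" always follows, yet after two "a"s a "b" always follows. A Markov model on syllables sees both cases as "what follows an 'a'?" and cannot capture the rule; the hidden states carry the extra memory.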

The abstract of the paper featuring the illustration above follows.


A compact statistical model of the song syntax in Bengalese finch

Songs of many songbird species consist of variable sequences of a finite number of syllables. A common approach for characterizing the syntax of these complex syllable sequences is to use transition probabilities between the syllables. This is equivalent to the Markov model, in which each syllable is associated with one state, and the transition probabilities between the states do not depend on the state transition history. Here we analyze the song syntax in a Bengalese finch. We show that the Markov model fails to capture the statistical properties of the syllable sequences. Instead, a state transition model that accurately describes the statistics of the syllable sequences includes adaptation of the self-transition probabilities when states are repeatedly revisited, and allows associations of more than one state to the same syllable. Such a model does not increase the model complexity significantly. Mathematically, the model is a partially observable Markov model with adaptation (POMMA). The success of the POMMA supports the branching chain network hypothesis of how syntax is controlled within the premotor song nucleus HVC, and suggests that adaptation and many-to-one mapping from neural substrates to syllables are important features of the neural control of complex song syntax.
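The abstract's "adaptation of the self-transition probabilities" can be sketched in a few lines. This is my own hypothetical illustration, not code from the POMMA paper, and the geometric decay form and parameter names are assumptions: each time a state repeats, its self-transition probability is scaled down, so repeat counts fall off faster than the geometric distribution a plain Markov chain would produce.

```python
import random

def sample_repeats(p0=0.9, alpha=0.5, rng=None):
    """Return how many times a syllable is produced before its state is left.

    p0    -- initial self-transition probability (assumed parameter)
    alpha -- adaptation factor applied on every repeat (assumed parameter)
    """
    rng = rng or random.Random(0)
    count, p = 1, p0
    while rng.random() < p:
        count += 1
        p *= alpha   # adaptation: each revisit suppresses further repeats
    return count
```

With alpha = 1 this reduces to an ordinary Markov self-transition (geometric repeat counts); with alpha < 1 long runs are suppressed, which is the kind of repeat-count statistic a fixed-probability model cannot match.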


Read/download the paper in its entirety here.

Below is the abstract of the second paper, from which the figure below is taken.



Complex sequencing rules of birdsong can be explained by simple hidden Markov processes

Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, the properties of the sequencing rules in birdsongs have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable sequences, we first show that there are significant higher-order context dependencies in Bengalese finch songs; that is, which syllable appears next depends on more than one previous syllable. This property is shared with other complex sequential behaviors. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models with a wide range of applications in time-series modeling. The song annotation produced by these models with first-order hidden state dynamics agreed well with manual annotation; the score was comparable to that of a second-order HMM and surpassed that of the zeroth-order model (the Gaussian mixture model, GMM), which does not use context information. Our results imply that a hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex sequences with higher-order dependencies.
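A minimal version of the "higher-order context dependency" test described above can be run on the toy song from earlier (this is my own sketch of the idea, not the authors' statistics): compare what follows a one-syllable context with what follows a two-syllable context.

```python
from collections import Counter

# Toy song in which "a" repeats exactly twice before each "b".
song = "aab" * 200

def next_counts(song, context):
    """Count which syllable follows each occurrence of `context`."""
    k = len(context)
    return Counter(song[i + k] for i in range(len(song) - k)
                   if song[i:i + k] == context)

after_a = next_counts(song, "a")    # mixed: both "a" and "b" follow an "a"
after_aa = next_counts(song, "aa")  # pure: "b" always follows "aa"
```

The one-syllable context is ambiguous while the two-syllable context predicts the next syllable perfectly, so the song has context dependence beyond first order, which is exactly what redundant hidden states let a first-order HMM reproduce.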


Read/download the second paper here.

[via Richard Kashdan]





You can see wavelets of bird, dolphin, whale, and insect songs here

Posted by: Virginia | Nov 22, 2010 5:45:40 PM
