FARM REPORT

The Sounds of Science

Composer Chris Chafe gives new meaning to synthesizing data.

January/February 2012

MUSICAL NUMBERS: Chafe's compositions are inspired by the data all around us. Here, red Pufftron sensors developed by Greg Niemeyer's group at UC-Berkeley monitor CO2 levels (displayed as green traces) of recently picked pomegranates. Photo: Glenn Matsumura

At first, the music playing on Chris Chafe's laptop sounds like wind blowing through an old window frame. Then it becomes more frantic, reaching higher and higher pitches, with syncopated pops punctuating the wailing. The anxious chattering sounds almost human, like a sped-up movie reel. Suddenly, it slips into a deadened hum.

The composition does have a human source. In collaboration with the Stanford Human Intracranial Electrophysiology Program (SHICEP), Chafe, director of the Center for Computer Research in Music and Acoustics (CCRMA), created the piece by translating electrical signals cascading through the brain during a seizure into sound. "I've been surprised at how musical the data is," says Chafe, DMA '83.

The project is the latest in a series of explorations mapping data—ranging from Internet traffic properties to air quality readings to DNA sequences—to sound. Chafe notes that the way we listen to music often involves pattern recognition. "If the data is rich with those patterns, it really does turn on our musical intuitions." Beyond simply entertaining, Chafe's data-based compositions give listeners a deeper understanding of the processes that are their source and inspiration.

Chafe often starts with data capturing some condition changing over time. "These days, data flows all around us and in incredibly different forms," he says. Using software, he maps numbers to sounds, which might be recorded or computer-generated audio of instruments, voices or sound effects. When deciding on sounds, Chafe aims to evoke the world the data came from. For example, he chose synthesized human singing for the brain signal data to convey the idea of "voices in the head"; to sonify seismic data, he wrote software to mimic the cracking sound of breaking cedar shingles.
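To give a flavor of how such a mapping works—this is an illustrative sketch, not Chafe's software, and the sensor readings and pitch range here are invented for the example—a few lines of Python can turn a data series into audible tones:

    import wave
    import numpy as np

    # Invented readings standing in for any time-varying measurement.
    readings = [412.0, 415.3, 420.1, 433.7, 451.2, 440.8, 425.5, 416.9]

    RATE = 44100                   # audio sample rate, in Hz
    NOTE_SEC = 0.4                 # how long each datum sounds
    LO_HZ, HI_HZ = 220.0, 880.0    # pitch range: two octaves

    lo, hi = min(readings), max(readings)

    def to_freq(x):
        # Linearly map a reading onto the chosen pitch range.
        return LO_HZ + (x - lo) / (hi - lo) * (HI_HZ - LO_HZ)

    tones = []
    for x in readings:
        t = np.linspace(0.0, NOTE_SEC, int(RATE * NOTE_SEC), endpoint=False)
        tone = 0.5 * np.sin(2 * np.pi * to_freq(x) * t)
        # A 20-millisecond fade in and out avoids clicks between notes.
        env = np.minimum(1.0, np.minimum(t, NOTE_SEC - t) / 0.02)
        tones.append(tone * env)

    pcm = (np.concatenate(tones) * 32767).astype(np.int16)
    with wave.open("sonification.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(pcm.tobytes())

In this toy version, rising readings simply produce rising pitch; richer mappings of the kind Chafe describes drive volume, timbre or sound choice from additional data streams.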

In a process that Chafe likens to a photographer trying different filters and contrast levels in the darkroom, he experiments with variables such as pitch, volume and timbre. If the data covers a long period of time, Chafe may speed up the music to convey, for instance, changes over an entire season. In other cases, such as the seizure sonification, he will leave the timing unchanged. Some of Chafe's music is generated in real time: Sensors automatically feed data to a computer, which creates music on the fly for an audience.
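Continuing the toy sketch above, compressing a long stretch of data into a short piece amounts to choosing a speed-up factor; the figures below are made up to show the arithmetic:

    # Toy time compression: play 90 days of daily readings in 9 seconds.
    DAYS = 90
    real_span = DAYS * 24 * 3600        # seconds the data actually covers
    target = 9.0                        # desired length of the piece
    speedup = real_span / target        # here, 864,000x faster than life
    note_sec = (24 * 3600) / speedup    # each day becomes 0.1 s of audio

For a piece like the seizure sonification, by contrast, the factor is simply 1: a minute of brain activity plays for a minute.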

One of Chafe's earliest forays into sonification was Ping, a 2001 exhibit for the San Francisco Museum of Modern Art intended to demonstrate Internet data transmission speeds. Greg Niemeyer, co-founder of the Stanford University Digital Art Center, designed a Stonehenge-like circle of aluminum loudspeaker columns. Chafe converted Internet traffic data into music using sounds from the columns themselves. "One day Chris came into the studio to look at them and started tapping on the columns," recalls Niemeyer, MFA '97, now an associate professor of new media at UC-Berkeley. "And it was this really nice sound—bing bong bing—and he's like, 'Oh, this is great!'"

Ping showed that data-based music didn't have to be boring. At its extremes, data can be purely random or perfectly predictable, says Chafe, but "both of those poles [would make] kind of uninteresting music." The Internet data fell somewhere in between. "That place in the middle, between those two poles, has patterns but it also has surprise," he says—the same properties that "cause chills" in traditional music.

Chafe and Niemeyer then worked on projects involving environmental conditions. In an exhibit called Oxygen Flute, people entered a chamber where their breathing rate affected the flute music being played. The pair later collaborated on blackcloud.org, an effort to collect and sonify worldwide environmental sensor readings. Each type of sensor data—carbon dioxide levels, humidity, temperature, light, noise and concentrations of chemicals called volatile organic compounds—influenced a different aspect of the music. Chafe used the sounds of instruments from around the world, including a Chinese oboe and an African string instrument, to capture the data's global nature.

Projects such as blackcloud.org might prove useful for science communication, says Chris Field, PhD '81, a biology and environmental earth system science professor who provided guidance on Oxygen Flute. "One of the things we need is a wider range of ways to express information," he says. "I think that music is a powerful approach." Niemeyer says that Oxygen Flute showed people their participation in the carbon cycle, while blackcloud.org prompts listeners to think about climate change. "The music helps us learn how to pay attention to complex global processes."

STRIKING A CHORD: A child in Kathmandu interacts with a Pufftron sensor as part of the blackcloud.org project. Photo: Eric Kaltman

Recently, Chafe's interests have turned to the microscopic world of cells and DNA. As part of Synthetic Aesthetics, a program co-led by assistant professor of bioengineering Drew Endy, Chafe was paired with Mariana Leguia, then a postdoc in UC-Berkeley's bioengineering department. To express the makeup of a circular DNA segment through sound, Leguia recites the names of sequence parts and amino acids over skittery tapping corresponding to individual DNA units called nucleotides. The composition, played through a ring of loudspeakers, is "the equivalent of standing in the center of a piece of DNA," says Leguia. "You're experiencing the process of gene expression through sound, in the context of three-dimensional space—which is how it happens in a cell." The music might help people with only a basic knowledge of science, such as high school students, understand a complex concept, she says.

Even experts who deal with a particular type of data every day can gain a new understanding by experiencing it in a different modality. Josef Parvizi, director of SHICEP, contacted Chafe after seeing a Kronos Quartet performance that drew on astronomy data. He thought: "What if somebody can do the same with brain signals so that clinicians such as myself—as well as patients—could benefit?"

"Josef basically asked me to listen to a seizure," Chafe says. As proof of concept, he converted intracranial EEG signals recorded from an individual being evaluated for epilepsy surgery into sound. (The patient was personally consulted about the project and gave full consent.) Parvizi was impressed with the results. "Seizure activity destroys the harmonic background music of brain activity," he says, and in Chafe's composition "you can hear how that background peaceful activity is turned into something very dominant, a very disturbing type of sound."

According to Parvizi, who with Chafe is in the process of filing for a patent on the use of sonification as a clinical tool, the method "provides a whole new window to see pathological changes in brain rhythms and detect them in a much clearer way than your eyes could ever detect on a computer monitor." It could, for example, aid clinicians in assessing a patient's condition by enabling them to rapidly scan several days' worth of brain signal data for seizures using an MP3 player. Or it might be used to train non-experts, such as patients or family members, to listen for changes that precede seizure activity.

This isn't the first time Chafe's work has found practical use: In 2005, NASA engineers used Ping software to monitor network stability during a simulation of moon and Martian exploration in Arizona. Whether his other sonification efforts will help scientists is less clear. "Some people find it engaging and useful, and some people find it too playful," says Niemeyer of the blackcloud.org music. "They'd just rather see the numbers."

Regardless, Chafe's colleagues say, the compositions stand on their own as works of art. "Chris has been getting results that have been meaningful, which is great," says music professor Jonathan Berger, DMA '82, of the seizure sonification. "But as importantly, what I heard was truly beautiful."

While creating music from data may seem unusual, Chafe, who prefers the term "musification" to "sonification," notes that composers through the years have had many sources of inspiration, such as literature. For him, data is just "another place that music comes from."


Roberta Kwok, '01, is a freelance science writer based in the Bay Area.
