Only occasionally does an intelligible word emerge from the garbled sounds of "Idle Chatter," the computer-created composition of music professor Paul Lansky GS '78. Lansky played the recording in a Woolworth classroom Thursday as he opened his contribution to the /@rts lecture series, which explores the intersection of technology and art.
Lansky is most famous for his composition "Mild und Leise," which Radiohead incorporated into its 2000 album Kid A. He played a few minutes of the song yesterday.
"I can't listen to it anymore because it's too ugly and boring," he said jokingly. "People have emailed me saying they like the machinelike feeling of the song, but that's actually what I'm trying to get away from."
In "Idle Chatter," as in many of Lansky's computer music compositions, the lines of speech and sound blur, but the result is surprisingly musical.
According to Lansky, music created with machines will soon be indistinguishable from ordinary tunes.
"In my [computer-created] music, I've tried to distance myself from the machine. Actually, the real challenge is making music with a machine that outlasts the first listen," he said.
Lansky has used computer-based technology — especially linear predictive coding, a speech analysis technique used in cell phones — to create music that models itself on real-life sounds. He has drawn on a variety of source material, including everyday human speech, the purr of a car engine and a recitation of Shakespeare's "The Tempest."
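For readers curious how linear predictive coding works, the core idea is that each audio sample can be approximated as a weighted sum of the samples just before it; the weights capture the resonances of the voice or instrument being analyzed. Below is a minimal, self-contained sketch of the technique in plain Python (the Levinson-Durbin recursion over the signal's autocorrelation) — an illustration of the general method, not Lansky's actual software:

```python
# Minimal linear predictive coding (LPC) sketch: fit coefficients a[1..p]
# so that x[n] is approximated by a[1]*x[n-1] + ... + a[p]*x[n-p].
import math

def autocorr(x, maxlag):
    """Autocorrelation of x at lags 0..maxlag."""
    return [sum(x[n] * x[n - k] for n in range(k, len(x)))
            for k in range(maxlag + 1)]

def lpc(x, order):
    """Levinson-Durbin recursion: returns predictor coefficients."""
    r = autocorr(x, order)
    a = [0.0] * (order + 1)   # a[0] unused; a[1..order] are the weights
    e = r[0]                  # prediction-error energy so far
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / e           # reflection coefficient for this order
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)
    return a[1:]

# A pure tone is almost perfectly predicted by a 2nd-order model,
# since sin(w*n) = 2*cos(w)*sin(w*(n-1)) - sin(w*(n-2)).
w = 2 * math.pi * 440 / 44100          # 440 Hz tone at 44.1 kHz
tone = [math.sin(w * n) for n in range(2000)]
coeffs = lpc(tone, 2)
```

In speech coding the coefficients are re-estimated every few milliseconds, which is what lets a phone transmit a compact model of the voice instead of the raw waveform; composers like Lansky can instead drive that model with other sounds.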
Lansky interspersed clips of his compositions with a brief history of the development of computer music. He fondly recalled the Columbia-Princeton Electronic Music Center of the 1960s and '70s, a collaboration between the universities in what was then cutting-edge artistry.
"The music department was the biggest user of [the one computer on Princeton's campus] until 1985 or so," Lansky said. "We had a sense we were taking part in a revolution."
Perry Cook, a University professor of computer science and music, said the greatest challenge facing technologists and artists who collaborate today is to keep innovating. "During the decades that [Lansky] talked about, musicians were using these new technologies at the same time that engineers were publishing papers on them," he said. "What's underway now is hopefully still as exciting."
Though Lansky's music is relatively traditional for the field of computer music, it includes some surprising elements as well. "If you're a composer and want more mystery in your music, working with machines is the way to do it," he said.
In the past, the /@rts lecture series has featured speakers including the installation video artists Jennifer and Kevin McCoy and the multimedia artist George Lewis, a recipient of the MacArthur Fellowship, or "genius grant."
"A place where technologists and artists can meet doesn't really exist," said Lorene Lavora, manager of education and outreach services for OIT, which is cosponsoring the series. "There are pockets of this kind of collaboration, like what Lansky does, but there could be a lot more of this right-brain, left-brain meeting."
The idea for the lecture series emerged from a discussion on engineering and culture during a School of Engineering strategic planning meeting, according to Engineering School Dean Maria Klawe.
"What [Lansky] is doing is very accessible to people who aren't attuned to contemporary computer music," said Matt Hoffman GS, a graduate student in the music department.