Saturday, December 12, 2009

The tale of Lucasfilm & Skywalker Ranch (by Philip Bloom)


Skywalker Ranch from Philip Bloom on Vimeo.

Filmed at Skywalker Ranch on the Canon 5DmkII conformed to 24p and the Canon 7D shooting native 24p.

Timelapse done using stills.

Music is Venus by Gustav Holst.


Read more about this excellent, stunning work via philipbloom.co.uk. Pure magic!

Friday, December 11, 2009

Time Machine - part II: Mel Blanc


"The man of a thousand voices"

Melvin Jerome "Mel" Blanc (1908 – 1989) was an American voice actor and comedian. Blanc is best remembered for his work with Warner Bros. during the so-called "Golden Age of American animation" (and later for Hanna-Barbera television productions) as the voice of such well-known characters as Bugs Bunny, Daffy Duck, Porky Pig, Sylvester the Cat, Beaky Buzzard, Tweety Bird, Foghorn Leghorn, Yosemite Sam, Wile E. Coyote, Barney Rubble, Mr. Spacely, and hundreds of others.

[Read: Mel Blanc @ IMDb]
[Watch: The Voices of Mel Blanc on YouTube]

Some Video Goodies:

Mel Blanc on Late Night with David Letterman, circa 1981


Mel Blanc on The Tonight Show with Johnny Carson


Mel Blanc 1908-1989


Listen (.m4a) to a recording of a speech by Blanc at the 1966 Annual Awards Luncheon of the Station Representatives Association. Titled "Mel Blanc Takes A Humorous Look At Commercials: Past, Present and Future (Who The Hell Is Mel Blanc?)", this record is a hilarious glimpse at both the advertising industry and a little-known aspect of the career of one of the most famous voice actors of all time.
[via animationarchive.org]

Trivia: Mel Blanc was auditioned by director George Lucas to provide the voice for the C-3PO character, and it was he who ultimately suggested that the producers utilize Anthony Daniels' own voice in the role.

Related Post: Time Machine - part I: Murray Spivak

Tuesday, November 24, 2009

An interview with Paul Frommer, Alien Language Creator for Avatar

by Matteo Milani, U.S.O. Project - Unidentified Sound Object, November 2009

U.S.O. Project meets Paul Frommer, the linguist who worked with James Cameron to develop the entire language and culture of the Na’vi, the fictional indigenous race of Pandora, for the long-awaited film Avatar.

Fictional languages are by far the largest group of artistic languages. Intended as the languages of a fictional world, they are often designed to give more depth and an appearance of plausibility to the fictional worlds with which they are associated, and to let their characters communicate in a fashion that is both alien and dislocated.


[Paul R. Frommer - marshallapps.usc.edu]

Matteo Milani: Can you describe your background activities and your previous experiences at USC before working with James Cameron?

Paul Frommer: My undergraduate degree, from the University of Rochester in New York, is in Mathematics. Soon after graduating, I spent two years in Terengganu, Malaysia as a United States Peace Corps volunteer, where I taught English as a Second Language and also mathematics, the latter in the Malay language. Although I had studied foreign languages prior to that (Hebrew, French, Latin, German), it was during my time in Malaysia that I really fell in love with languages. I decided to do my graduate work in linguistics and entered the doctoral program at USC.
While I was a graduate student, I had the opportunity to teach for a year in Iran, which was a wonderful experience. Returning to USC, I completed my dissertation on a topic in Persian grammar. Then, after several more years of teaching, I switched careers and entered the business world, becoming a strategic planner and business writer for a Los Angeles corporation.

My return to academia led me in a new direction: business communication. I joined USC’s Marshall School of Business as a full-time faculty member in 1996, teaching in the department now known as the Center for Management Communication. I became chair of that department in 2005 and served in that capacity until 2008.


MM: Traveling back to 2005, how and when did you meet the director?

PF: During the summer of 2005, Lightstorm Entertainment, James Cameron’s production company, sent an e-mail to the USC linguistics department inquiring about a linguist who might be able to develop an alien language for a new movie. That e-mail was forwarded to me, and I jumped on it. I expressed my strong interest in the project and sent Cameron a copy of the linguistics workbook I had co-authored — Looking at Languages: A Workbook in Elementary Linguistics. A week or two later I was called in for an interview. I spent a very stimulating 90 minutes with Cameron in his offices in Santa Monica, where we discussed his vision for the movie and the language. At the end, I was thrilled when he shook my hand and said, “Welcome aboard.”


MM: What were his initial requests?

PF: Well, he wanted a complete language, with a consistent sound system (phonology), word-building rules (morphology), rules for putting words together into phrases and sentences (syntax), and a vocabulary (lexicon) sufficient for the needs of the script. He also wanted the language to be pleasant sounding and appealing to the audience.

"We created the language of the Na’vi starting about the time that I was doing the shooting draft of the script [...] Dr. Paul Frommer, who was with USC (University of Southern California) at the time, spent about a year creating the language. The trick was we had the language before we actually cast most of the parts. So the casting director, Margery Simkin, had to learn a bit of Na’vi so that she could get the auditioning actors to repeat the sounds of the language. If they couldn’t make the sounds, they couldn’t have the part.     
The studio asked me the same question. They asked, “Do they have to have tails?” We’re very happy with the way the Na’vi worked out because what we found is the tail and the ears show the characters’ emotional state. A cat owner knows that you can tell a cat’s mood by what its tail is doing. Just as we created a verbal language, we created a vocabulary for the tail and the ears."
[James Cameron - via inquirer.net]

"I've discovered over the years that a voice needs to sync with body movements as precisely as it does with lip movement, in order for the sound to most effectively bond with the character."
[Ben Burtt - an excerpt from Galactic Phrase Book & Travel Guide: Beeps, Bleats, Boskas, and Other Common Intergalactic Verbiage]



MM: Can you reveal the process of creating the Na'vi language? What are the major difficulties in creating a phonetic system with its own style, consistency, and unique character?

PF: I didn’t quite start from zero, since Cameron had devised 30 or 40 words of his own for the original script—some character names, place names, names of animals, etc. That gave me a bit of a sense of what kinds of sounds he had in mind.

The next step was to develop the phonetics and phonology—the sounds that would and would not appear in the language, along with the rules for combining sounds into syllables and words and the pronunciation rules that might in certain circumstances change one sound into another. The major constraint, of course, was that although Na’vi is an alien language, it has to be spoken by human actors, and so the sounds it included had to be ones that the actors would be able to reproduce.

To create some interest, I included a group of sounds not often found in western languages—“ejectives,” which are popping-like sounds that I notated as kx, px, and tx. I also needed to determine what other elements in the language would be “distinctive”—that is, would be able to differentiate words: for example, stress (the eventual answer was yes), vowel length (no) and tone (no). I presented Cameron with three different “sound palettes” or possibilities for the overall sonic impression of the language—he chose one, and we were off!

The next step was to decide on the morphology and syntax. For those, I was on my own. Since this was an alien language spoken on another world, I wanted to include structures and processes that were relatively rare in human languages but that could be acquired by humans, since according to the plot of the movie, a number of humans had learned to speak the language. The verbal morphology, for example, is achieved exclusively through infixes, which are less common than prefixes and suffixes. And the nouns have a system of case marking, known as a tripartite system, that’s possible but quite rare in human languages.
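As a rough illustration of how infixing differs from the more familiar prefixes and suffixes, here is a toy sketch in Python. It uses the widely cited Na’vi example taron ("hunt") with the perfective infix ‹ol›, but the "insert before the first vowel" rule below is my simplification for the example, not the real Na’vi grammar, which positions infixes relative to particular syllables.

```python
VOWELS = set("aeiou")

def infix(root, marker):
    """Toy infixing rule: insert the marker before the root's first vowel.

    A simplification for illustration only; actual Na'vi places infixes
    relative to particular syllables, not simply the first vowel.
    """
    for i, ch in enumerate(root):
        if ch in VOWELS:
            return root[:i] + marker + root[i:]
    return root + marker  # vowel-less root: fall back to suffixing

print(infix("taron", "ol"))  # -> "tolaron" (perfective: "hunted")
```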

"Overall, the creation of alien languages has been the hardest task. A language, or more accurately, the sensation of language, has to satisfy the audience's most critical faculties. We are all experts at identifying the nuances of intonation. Whether we understand a given language or not, we certainly process the sound fully and attribute meaning--perhaps inaccurate--to the emotional and informational content of the speech. Our minds are trained to recognize and process dialogue. The task, therefore, of creating a language is all the more difficult because of the strength of the audience's perception."
[Ben Burtt - an excerpt from Galactic Phrase Book & Travel Guide: Beeps, Bleats, Boskas, and Other Common Intergalactic Verbiage]



MM: How did you make the Na'vi dialogue sound "real"? How difficult was it to make the dialogue believable?

PF: Well, that was really more a matter for the actors—and it was quite a challenge. They had to learn their lines in a language no one had ever heard before, including learning to make unusual sounds and sound combinations, and then they had to act convincingly in that language! That involved not only memorizing the sentences but mastering the stress and intonation, so that they could place emphasis in the right place. It wasn’t easy, but they did a remarkable job.

I met with all seven of the Na’vi-speaking actors off-set before their scenes were shot to help them with the pronunciation, and I also supplied recordings in the form of mp3 files so that they could listen to and absorb the dialog.


MM: Is there a “gold standard” for constructed language that served as an inspiration to you?

PF: In terms of “alien” languages, that would have to be Klingon, the language developed by linguist Marc Okrand for the Star Trek series. It’s a very impressive piece of work—a rough-sounding language with a complex and difficult phonology and grammar that now has a devoted base of followers. There are Klingon clubs all over the world where people meet to speak the language, and there’s even a translation of Hamlet into Klingon! If Na’vi ever came close to that kind of following, I’d be delighted.


MM: Do the Na’vi have their own alien writing system?

PF: No, the Na’vi don’t have a writing system, so that was one thing I didn’t have to bother with. But of course I needed to devise a consistent orthography, based on the Roman alphabet, to write down the language for descriptive purposes and transcribe the words and sentences for the actors.


MM: Did you develop a vocabulary?

PF: Yes, I developed it on an as-needed basis. That is, the words I came up with first were those that appeared in the script. This past May, when I translated dialog for the Avatar video game, I faced new situations that required words I didn’t yet have, so that was an opportunity to expand the lexicon further.


"Part of my research was to identify interesting real languages to use as a basis for alien ones. The advantage of using a real language is that it possesses built-in credibility. A real language has all the style, consistency, and unique character that only centuries of cultural evolution can bring. I found that if I relied on my familiarity with English, my imagined "alien" language would just be a reworking of the all-too-familiar phonemes of everyday general American speech. I had to break those boundaries, to search for language sounds that were uncommon and even unpronounceable by most of the general audience."
[Ben Burtt - an excerpt from Galactic Phrase Book & Travel Guide: Beeps, Bleats, Boskas, and Other Common Intergalactic Verbiage]



MM: An "a posteriori" language is any constructed language whose vocabulary is based on existing languages, either as a variation of one language or as a mixture of various languages, unlike "a priori" constructed languages (e.g. Klingon). How did you first search for some exotic languages that would act as inspiration?

PF: I didn’t base Na’vi on any particular human language. In terms of its sound, I thought that the original words Jim Cameron had come up with had a bit of a Polynesian flavor, and I included those sounds in the language. But I added a lot beyond that, so that I don’t believe that Na’vi sounds like any specific existing language.

As I mentioned, there’s nothing in Na’vi that couldn’t be found in some human language—and that’s important, since humans have learned to speak it. However, the particular combination of elements in Na’vi—its sound system, morphology, and syntax—is unique.


Avatar producer Jon Landau discusses making James Cameron's vision a reality:

And a linguist invented the Na'vi language—did you pick up any?
I have enough trouble with English! I could say a few words in Na'vi, but not much. Na'vi is a hard language. When I knew we had to create a language for the movie, I thought, okay, you go hire someone and say, 'This is the word we have to say.' And they'd come up with the word. I was wrong. Paul Frommer, our linguist, took six months just to define the structure of language, which I thought was fascinating. And after that, he'd start coming up with the sentences that we needed.
Does it have parallels to any language on earth?
I think it's relatively unique. We didn't want someone to hear it and go, 'Wow, that's Watusi!' Or Maori, or French.
[via boxoffice.com]


MM: The upcoming score by James Horner will feature vocals in the Na'vi language. Would you like to describe your experience with the singers during those recordings?

PF: That was a lot of fun! James Cameron had written the lyrics for six songs, four of which I translated into Na’vi. (It was interesting to try to write poetry in the language!) Then at various times I met with music director James Horner, his associates, and the singers who had to sing in Na’vi to help them pronounce the words of the songs. For some of the recording sessions, the music was fluid and developed on the spot, which I found a wonderfully creative process. For one session, though, there was already pre-composed music written out on a musical staff. I’m a pianist and I have a musical background, so I was able to read the music with the singer and help him fit the words to the melody.

Thanks, Paul. Congratulations, and keep up the good work!

Monday, November 09, 2009

Soundworks Collection: Exclusive Video Profiles of the Sound World



With the success of the Sound for Film Profiles series, Bay Area director Michael Coleman has launched a new website called SoundWorksCollection with a focus on the Oscar race for sound. Every two weeks until Oscar night in March 2010, the SoundWorks Collection website will release a new sound-for-film profile. The SoundWorks Collection takes you behind the scenes and straight to the dub stage for a look into audio post-production for feature films, video game sound design, and original soundtrack scoring. This exclusive and intimate video series focuses on the individuals and teams behind the scenes bringing to life some of the world's most exciting projects.
The SoundWorks Collection is produced by Colemanfilm Media Group in partnership with MIX Magazine, several audio-focused college schools and programs, and the support of the online sound community worldwide.

Here are the Soundworks Collection Categories:

FEATURE FILM VIDEO PROFILE
Who will bring home the sound Oscar this year? The SoundWorks Collection focuses on Hollywood’s feature film releases throughout the year. Watch how the sound crew worked their audio magic as they put the finishing touches on your favorite film and prepared the film for the big screen.

To launch the series Michael has posted the "Watchmen" Sound for Film Profile:



STUDIO TOUR VIDEO PROFILE
Ever wonder what it is like to walk into a world class studio or soundstage? Now you can see the technology and gear that the pros use every day to create their tracks. Get an exclusive tour into some of your favorite studios and see what toys the top dogs are using.

GAME AUDIO VIDEO PROFILE
Game soundtracks are no longer about the beeps and bloops of our childhood past. Explore your favorite game titles' sound design from the world's most innovative sound teams. Playing video games isn't just about what you see on the screen, it's about what you hear around you…and in the surrounds.

ORIGINAL SOUNDTRACKS VIDEO PROFILE
Ever wonder who is responsible for making you tear up uncontrollably during your favorite movie scene? It's likely the brilliant mind of the composer who created that memorable score. Explore the processes of today's leading composers and see what they hear inside their heads.
The goal of the SoundWorks Collection is simple: we are dedicated to profiling the greatest established and upcoming sound minds from around the world and highlighting their contributions.

[SoundWorks Collection on Twitter]
[SoundWorks Collection on Facebook]

Wednesday, November 04, 2009

Saving Private Ryan - Music & Sound

Saving Private Ryan - Music & Sound (part 1 - Music)


Gary Rydstrom (an excerpt from Surround Sound, Second Edition):

Since we hear all around us, while seeing only to the front, sounds have long been used to alert us to danger. In Steven Spielberg's Saving Private Ryan, the battle scenes are shot from the shaky, glancing, and claustrophobic point of view of a soldier on the ground. There are no sweeping vistas, only the chaos of fighting as it is experienced. The sound for this movie, therefore, had to set the full stage of battle, while putting us squarely in the middle of it. I can honestly say that this film could not have been made in the same way if it were not for the possibilities of theatrical surround sound. If sound could not have expressed the scale, orientation, and emotion of a soldier's experience, the camera would have had to show more. Yet it is a point of the movie to show how disorienting the visual experience was. Sound becomes a key storyteller.

Saving Private Ryan - Music & Sound (part 2 - Sound)


The sound memories of veterans are very vivid. We started our work at Skywalker Sound on Saving Private Ryan by recording a vast array of World War II era weapons and vehicles. In order to honor the experiences of the men who fought at Omaha beach and beyond, we wanted to be as accurate as possible. I heard stories such as how the German MG42 machine gun was instantly identifiable by its rapid rate of fire (1,100 rounds a minute, compared to 500 for comparable Allied guns); the soldiers called the MG42 "Hitler's Zipper" in reference to the frightening sound it made as the shots blurred into a steady "zuzz". The American M1 rifle shot eight rounds and then made a unique "ping" as it ejected its empty clip. Bullets made a whiny buzz as they passed close by. The German tanks had no ball bearings and squealed like metal monsters. These and many other sound details make up the aural memories of the soldiers.
Our task was to build the isolated recordings of guns, bullets, artillery, boats, tanks, explosions, and debris into a full-out war. Since it isn't handy or wise to record a real war from the center of it, the orchestrated cacophony of war had to be built piece by piece. But, of course, this is what gives us control of a sound track: the careful choosing of what sounds make up the whole. Even within the chaos of a war movie, I believe that articulating the sound effects is vital; too often loudness and density in a track obscure any concept of what is going on. We paid attention to the relative frequencies of effects, and their rhythmic, sequential placement, but we also planned how to use the 6 channels of our mix to spatially separate and further articulate our sounds.
There are many reasons why a sound is placed spatially. Obviously, if a sound source is on screen we pan it to the appropriate speaker, but the vast majority of the sounds in the Saving Private Ryan battles are unseen. This gave us a frightening freedom in building the war around us.

[read more: Sound and Music in 'Saving Private Ryan' - via USC Sound Conscious]

Related Post: An interview with Tomlinson Holman 

Saturday, October 31, 2009

University of Illinois Experimental Music Studios

The University of Illinois Experimental Music Studios were founded in 1958 by Lejaren Hiller and were among the first of their kind in the Western Hemisphere. Faculty members and students working in these studios have been responsible for many developments in electroacoustic music over the years, including the first developments in computer sound synthesis by Lejaren Hiller, the Harmonic Tone Generator by James Beauchamp, expanded gestural computer synthesis by Herbert Brün, the creation of the Sal-Mar Construction by Salvatore Martirano, and the acousmatic sound diffusion and multi-channel immersive techniques researched and applied by Scott Wyatt in electroacoustic music and performance. Today the facility continues as an active and productive center for electroacoustic and computer music composition, education and research.


EMS - Experimental Music Studios | 50th Anniversary CD Set
Carla Scaletti: excerpt from Cyclonic (binaural mix of a multichannel version, 2008) [.mp3 - 8:22]

Taking its name from the rotational motion associated with powerful meteorological events, Cyclonic was inspired by the awesome power of the weather in east central Illinois and plays at the edges between events as recorded, events as experienced, events as remembered, and events as imagined.
Pitches were derived from the frequencies in the National Weather Service alert signal, and the concept of a Cycle is abstracted in various ways ranging from an endlessly accelerating pan to endless (cyclic) increases in the pitches of synthetically generated sirens and filterbanks processing synthetic wind.
Apart from rain, thunder, and wind sounds recorded in downtown Champaign, the entire piece was synthesized in Kyma.
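The "endless (cyclic) increases" in pitch described above correspond to the classic Shepard-Risset glissando illusion. Here is a minimal sketch of the general technique in Python - my own illustration, not code from the piece, which was synthesized in Kyma. Several sine voices sweep upward across the register, staggered an octave apart, while a bell-shaped amplitude curve silences each voice as it wraps from the top back to the bottom, so the rise never ends.

```python
import math, struct, wave

SR = 44100        # sample rate
BASE = 55.0       # bottom of the register (Hz)
VOICES = 6        # simultaneous glissando voices, spaced an octave apart
CYCLE = 10.0      # seconds for one full sweep of the register

phases = [0.0] * VOICES  # per-voice phase accumulators (avoids clicks)

def render(duration=20.0):
    frames = bytearray()
    for i in range(int(SR * duration)):
        t = i / SR
        s = 0.0
        for v in range(VOICES):
            # Position of this voice in the register (0 = bottom, 1 = top).
            pos = (v / VOICES + (t % CYCLE) / CYCLE) % 1.0
            freq = BASE * 2.0 ** (pos * VOICES)
            phases[v] += 2.0 * math.pi * freq / SR   # integrate phase
            amp = math.sin(math.pi * pos) ** 2       # fade out at the edges
            s += amp * math.sin(phases[v])
        frames += struct.pack("<h", int(32767 * 0.8 * s / VOICES))
    return bytes(frames)

if __name__ == "__main__":
    with wave.open("risset_gliss.wav", "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes(render())
```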

[via ems.music.uiuc.edu]


Memories about the Experimental Music Studios by alumna Carla Scaletti, president of Symbolic Sound Corporation and designer of the Kyma language:

I came to Illinois because of a book I found in the Texas Tech University library: Music By Computers, edited by Heinz Von Foerster and James Beauchamp. I had known for several years that I wanted to make music with computers, so when I found this book and noticed that most of the authors were at the University of Illinois, I immediately applied and was accepted into the doctoral program. I arrived a week before classes to take the entrance exams, asked Scott Wyatt if I could help him get the studios ready for the fall semester, and was delighted when he put me to work soldering patch cords.

Illinois was an environment where virtually everyone--whether faculty, student, or staff--was actively experimenting and creating software, hardware, and music. Faculty members did not act as mentors but as colleagues who, by actively engaging in their own creative work, served as examples of artists questioning the status quo and postulating alternative solutions. Sal Martirano was just learning to program in C in preparation for his YahaSalMaMac and would hold after-hours seminars on combinatorial pitch theory in his home studio, where we read articles by Donald Martino over glasses of wine and freshly sliced watermelon, seated right next to the SalMar Construction: one of the earliest examples of a complex system for music composition and digital sound synthesis. Herbert Brün had written the beautifully algebraic SAWDUST language and was using it to compose I toLD YOU so. Jim Beauchamp had just finished the PLACOMP hybrid synthesizer, was doing research in time-varying spectral analysis of music tones (and, contemporaneously with Robert Moog, had built one of the first voltage-controlled analog synthesizers: the Harmonic Tone Generator). Sever Tipei was writing his own stochastic composition software, and John Melby was using FORTRAN to manipulate/generate scores for Music 360 (the predecessor of Csound). And in an abandoned World War II radar research loft perched atop the Computer-based Education Research Laboratory (home of PLATO), Lippold Haken and Kurt Hebel were designing their own digital synthesizer (the IMS) that eventually evolved into a microcodable DSP (prior to the advent of the first Motorola 56000 DSP chip). The CERL Sound Group's LIME software was among the first music notation/printing programs; I saw it demonstrated at the annual Engineering Open House and asked if I could use it to print the score for my dissertation piece Lysogeny.

I practically begged Scott Wyatt to let me work as his graduate assistant in the Experimental Music Studios and, thanks to Scott, Nelson Mandrell and I had an opportunity to help build a studio: Studio D (at that time, the Synclavier Studio, now the studio where Kyma is installed), as well as experience the Buchla voltage controlled synthesizer and the joy of cutting & splicing tape. All of these experiences plus my explorations of Music 360, PLACOMP, and the CERL Sound Group's IMS and Platypus microcode, fed into the creation of Kyma. When my adviser John Melby won a Guggenheim award and took a year's leave of absence, I had the opportunity, as a visiting assistant professor, to teach his computer music course and to establish a Friday seminar series on computer music research.

Because the School of Music is part of a world-class university, Illinois afforded me opportunities for study and research that I would not have found elsewhere. It meant that I could play harp in the pit orchestra for an Opera Theatre production of Madame Butterfly and, the next day, run an experiment in Tino Trahiotis' psychoacoustics lab course in my minor, Speech and Hearing Science. It meant that I could go on tour with the New Music Ensemble led by Paul Zonn or David Liptak, that I could study mathematics, that I could do spectral analyses of the harp, that I could also get a degree in computer science and learn Smalltalk from Ralph Johnson after finishing my doctorate in music, and it meant that I could do some of the early work in data sonification with Alan Craig at the National Center for Supercomputing Applications. These experiences, along with the computer science courses in abstract data structures, computer languages, automata, and discrete mathematics, also fed into Kyma.

For me, Illinois was the perfect environment for exploration, and my work with Kyma is a direct outgrowth of those experiences as well as a continuation of several threads of interest that can be traced back to my graduate work at the University of Illinois.

While I was a graduate assistant, my office mates were Chuck Mason and Paul Koonce; the day that Chuck defended his dissertation and accepted a position in Birmingham, he taped a piece of notebook paper to the wall of our office with the heading "Famous Inhabitants of this Office" followed by all three of our names. I remember this act of optimism with great fondness, and I've heard that the list is still in the office (and that it has grown a lot longer by now).

Carla Scaletti, DMA in composition, University of Illinois
President, Symbolic Sound Corporation

Related Post: Herbert Brün

In the Master's Shadow: Hitchcock's Legacy

Alfred Hitchcock is one of the few filmmakers in history who may accurately be referred to as a cinema virtuoso. In short, he revolutionized the making of suspense films, playing the audience like a violin through the use of a well-timed sound effect… an almost unnoticed visual detail… the masterly construction of a suspense sequence. This documentary features filmmakers such as Martin Scorsese, William Friedkin, Guillermo del Toro, John Carpenter and Eli Roth on Hitchcock’s influence, why his movies continue to thrill audiences, and the ways in which his cinematic techniques have been embraced by directors to this day. Lavishly illustrated with film clips from throughout Hitchcock’s storied career, this film celebrates the enduring legacy of the man many consider the greatest filmmaker the medium has yet produced.





"I think he's certainly one of the great directors of all time and did something really hard to do, which made for a distinctive style that is hard to copy. People try to do Hitchcockian things. It's very hard. There's something distinctive about his personality. This quirky, dark sense of humor with a sense of romance and a sense of just devious, dark view of humanity, and you put it together into something pretty unique." - Gary Rydstrom


Video: In the Master's Shadow (1/3)
Video: In the Master's Shadow (2/3)
Video: In the Master's Shadow (3/3)

Sunday, October 11, 2009

Focus on Kyma International Sound Symposium - KISS09

These have been busy days here at NIU, especially for the speakers, who showed all the symposiasts their methods of working with Kyma.
Participants came from all corners of the world, and it was great to be immersed in this colourful community! Carla Scaletti and Kurt Hebel (the creators of Kyma) introduced their new hardware, the Paca(rana) - you can find a lot of information here @ SSC - and the latest features implemented in Kyma X.70 (where 'X' stands for the last letter in 'six' - 6.70).



Carla and Kurt have done a wonderful job developing a state-of-the-art "recombinant" workstation, which gives any user the freedom to explore unknown sonic territories without maxing out the DSPs (there's plenty of headroom in processing power).
During the morning of the first day, Carla showed us how to control Kyma with external devices like the Wacom tablet and the Continuum Fingerboard, then with the Nintendo Wiimote + Nunchuk and the SpaceNavigator (via OSCulator by Camille Troillard).
The AC Toolbox presentation by Hector Bravo-Benard revealed an algorithmic approach to sound composition with Kyma. Using the Lisp language, it is possible to structure an entire composition and have Kyma play it, drawing on strict algorithmic procedures such as probability distributions, Markov chains, tendency masks, and many other techniques used by composers like Paul Berg and Gottfried Michael Koenig (a minimal sketch of one such technique follows after these highlights).
A special mention should be made of Bruno Liberda's lecture, in which he explored subtle connections between his wonderful graphical scores and different kinds of ancient neumatic notation, making symbolic notation and symbolic sound computation converge.
Cristian Vogel explained his latest creation, Black Swan, for Swiss choreographer Gilles Jobin. The 50-minute original Kyma score was composed in Geneva and at the electroacoustic studio of Musique Inventives d'Annecy. Here's a Swiss TV documentary about the creation of Black Swan.
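Here is the sketch promised above: a first-order Markov chain over pitch names, one of the algorithmic procedures mentioned in connection with the AC Toolbox. The transition probabilities are invented for the example, and it is written in Python purely as an illustration - the AC Toolbox itself is a Lisp environment, and this is not its code.

```python
import random

# First-order Markov chain over pitch names: each note is chosen with
# probabilities conditioned only on the previous note. The transition
# table below is invented for illustration.
TRANSITIONS = {
    "C": {"D": 0.5, "E": 0.3, "G": 0.2},
    "D": {"C": 0.4, "E": 0.6},
    "E": {"G": 0.7, "C": 0.3},
    "G": {"C": 0.6, "D": 0.4},
}

def markov_melody(start="C", length=16):
    melody = [start]
    for _ in range(length - 1):
        nxt = TRANSITIONS[melody[-1]]
        notes, weights = zip(*nxt.items())
        melody.append(random.choices(notes, weights=weights)[0])
    return melody

if __name__ == "__main__":
    print(" ".join(markov_melody()))
```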

What is Kyma?

Kyma is a sound design language that lets you manipulate, transform and warp any kind of sound source in real time - but we already know all of this.
During these days in Barcelona - at the first Kyma International Sound Symposium - we have learnt something more specific and, maybe, unexpected.
As Carla Scaletti said during the opening lecture of KISS09, "Recombinance Makes Us Human". So, what does this mean? Each of the participants brought their own experience, point of view, modus operandi and creative talent to the symposium. This multiform epiphany allowed everyone involved to share the same time and space for two days, exchanging ideas like bits of code, in a recombinant manner.
The symposiasts came from different fields in which Kyma plays a fundamental role (sound design, composition, DJing, etc.), and each of them knows and likes different specific aspects of Kyma.

The lectures given over these two days taught us something: regardless of personal skill or degree of Kyma knowledge, observing things from other users' points of view reveals unexpected surprises and creates new connections, on both a mental and an operational level, that were never considered before.
Even sounds commonly used in our everyday activity, if put in a different context (artistic or technological), appear new and unexpected, bringing back to memory John Cage's famous remark: "(You have) to look at a Coca-Cola bottle without the feeling that you've ever seen one before, as though you were looking at it for the very first time. That's what I'd like to find with sounds -- to play them and hear them as if you've never heard them before."

The musical performances themselves contributed to recombining new pieces of code, bearing different morphologies, syntaxes and structures.
With her live set "SlipStick for Continuum and Kyma", Carla Scaletti explored the sound phase-space in search of new timbre paths, moving from dense material aggregations to resonant chimes and back, without forgoing the "virtuoso gestuality" afforded by the nature of the instrument itself.
We experienced the possibility of creating vibrant real-time soundtracks for silent movies, thanks to Franz Danksagmüller's work, with the help of Berit Barfred Jensen's voice.
Concrete sounds produced by the friction of a cello bow on polystyrene objects of different shapes saturated the listening space in the work of Hector Bravo-Benard, reminding us of some textures from Xenakis' "La Légende d'Eer".
Marin Vrbica's "Tiger in the Jungle" immersed us in an exotic and rich spatial dimension made of revolving sound-surfaces and sliding, glissando-like multiple trajectories.

In the end, we were surprised that the paradigm presented at the beginning by Carla Scaletti (Recombinance Makes Us Human) actually acted as an invisible force guiding, transforming and rearranging our thoughts, knowledge and purposes.
So, after the symposium, we observe ourselves in a different way. We have changed into new intellectual human beings, and this has given each of us a fresh start from which to discover new possible configurations and interactions between ourselves and others, between what we are and what we will be, until #KISS10.

We really enjoyed our first attempt at following a live event via Twitter; we hope you appreciated our efforts to make you feel like you were in Barcelona with us, as part of the symposium.
You can track all of our tweets via search.twitter.com.

Here's a selection of related tweets of interest:
  • Now live #KISS09: Carla Scaletti - "Recombinance Makes Us Human" - Philosophy of Kyma
  • a Question: Infinity makes us HUMANS?
  • Infinity + self-awareness of finitude
  • Open-ended possibilities = open-ended amount of time, but the learning is fun!
  • Kyma sounds are recombinant... like us..!
  • Pen Morph Gandalf to Saruman "You are tracking the footsteps of two young Hobbits"
  • Writing in time and space.....
  • I became operational at the C.E.R.L. lab in Urbana, Illinois on the 12th of November, 1986. -- Kyma
  • Kyma was inspired by three ideas: cutting & splicing audio tape, voltage control, symbolic logic & computer programming
  • An interview with Carla Scaletti @symbolicsound usoproject.com/C...
  • and now... linear interpolation of presets - smooth transition in the phase space...
  • music is something like a side effect of exploring timbre and sound configurations...
  • Presets as aural Keyframes
  • "...the problem of finding graphic symbols for the transposition of the composer's thought into sound." - Edgard Varèse
  • "...without friction [...] there would be no music!" - Carla Scaletti
For anyone interested in the Kyma sound design language, the book "Kyma X Revealed" is available here @ SSC. This document gives you an overview of the environment, gives you a strategy for approaching this deep system one layer at a time and then walks you step-by-step through each level of Kyma like your own personal tour guide. At the back of this guide, you’ll find several summaries and quick references that you can keep handy for refreshing your memory while working. This book covers the mechanics of how to use Kyma — not the art and science of sound design. However, if you love sound and are hungry for more knowledge on the subject, achieving fluency with Kyma is an excellent first step!


Many thanks to Cristian Vogel (Station 55 Productions) and Symbolic Sound Corporation for their welcome and organization.

UPDATE: Slides from Recombinance Makes Us Human (.pdf), the welcoming address for the First International Kyma Symposium in Barcelona, October 2009.

Wednesday, October 07, 2009

An interview with Christian Zanési, pt. 2

by Matteo Milani and Federico Placidi, U.S.O. Project
English translation: Valeria Grillo
(Continued from Page 1)


USO: Did mass-culture and consumerism create a lack of “listening attention”?

CZ: I think the ear is intact; it is the demand that has to be discovered and awakened, even if there is a mass culture that establishes a dominant way of hearing. But the border between the dominant thought and the adventure is very narrow; we can cross it very quickly - we only have to open ourselves and build bridges. There are serious issues here too: the dominant idea of rock, to take one example, has built a peculiar "ear", and we can reclaim that ear by using experimental music. So the world is not made of compartments separated by inviolable borders; on the contrary, there is no break in continuity between sounds. I pay close attention to popular music, because I know there are cases of recycling, of experimentation, and that it too is a genre open to experiment. We have to be attentive, and above all avoid being dogmatic, fundamentalist or categorical. Instead, we have to observe what happens, try to detect in every practice, in every thing, something beyond the variable design of music and the variable design of humans, and see how this can be transcended.



USO: What has been the role of physical space in the sound projection practice in GRM history?

CZ: In the 1950s, when Schaeffer and his collaborators invented this music, musique concrète, they immediately asked themselves how to let everybody listen to it in a public space, in a concert hall. Schaeffer said: if we have to go to Carnegie Hall, what do we do? We cannot use a single loudspeaker on the stage - that would be a little ridiculous. So the dimension of the concert, of how to present this music to the public, was created little by little. The cinema industry also plays an extremely important role: by the 1960s the Dolby firm had already imagined surround sound, and the invention of CinemaScope had forced engineers to position many loudspeakers across the screen, and many more surround loudspeakers all around. So we have at the same time the desire of people to experience sound in a bigger dimension, and the technology that built that ear. Today we follow this process, and people are happy to come to a concert, because we think that a concert is going to be a particular and privileged moment, maybe a show; that there is a difference between experiencing music in a concert hall and at home; that there is something really spectacular in that moment which justifies the concert. So there are many ways to approach the problem; we chose one, but there are others.



USO: Can you describe your compositional process? How do you go from the idea to the sound, to the elaboration, to the structure, to the shape?

CZ: This is a personal question; if you ask the same question to another composer, the answer will be different. I don't go from idea to sound but the opposite: I go from sound to idea. That is, I need to find a sound that touches me for different reasons - if I had the time I would explain these reasons and give examples, but I don't - and inside this sound there is a potential; the idea is born from the sound, not the sound subjected to an idea. We can consider the sound as a kind of fugue subject: there is potential for development. To work, I need to find a sound that will be at the base, because there is an emotional shock, something truly imposing, which will be the source of my work. At that moment we can listen to the sound in depth, not for what it is, but for what it can become; we hear the sound as a promising element of a musical structure. But this is a personal answer.


USO: Do you work with stereophonic material and perform live spatialization, or do you prefer to design spatial trajectories/positions in the studio?

CZ: I mainly work in stereo, because my studio is not multiphonic, but I have a lot of experience with sound projection in auditoriums; with a stereo source you can give the illusion of a multiphonic work. I do both live concerts and acousmatic works. In the end you never know in advance what system you will have, where it is, the space, or the kind of concert that has been requested. What we see nowadays is that live practices and acousmatic practices coexist peacefully, and they enhance each other depending on the place, time and project. Musique concrète was born in the 1950s, and after all these years it is about to create a new branch: the performance version, the live version of this music. This evolution started around the year 2000, since today's systems (computers) are much faster than 20 years ago, and the tools for sound processing allow it. But these tools developed because there was a strong desire to develop them. No technology arrives spontaneously; if we look back into history and observe, there are dreams and conjunctions that make things possible. That's what is going to happen over the next ten years or so.
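As a generic illustration of how a single source can be "projected" across a row of loudspeakers, here is a small Python sketch using a constant-power fade law. The law and the speaker layout are my own assumptions for the example - this is not a description of the GRM's actual diffusion practice.

```python
import math

def speaker_gains(position, n_speakers):
    """Map a position in [0, 1] across a row of speakers to per-speaker gains.

    A constant-power law keeps the perceived loudness steady while the
    source fades from one loudspeaker to the next.
    """
    x = position * (n_speakers - 1)
    left = int(x)          # index of the speaker just below the position
    frac = x - left        # how far we are toward the next speaker
    gains = [0.0] * n_speakers
    gains[left] = math.cos(frac * math.pi / 2)
    if left + 1 < n_speakers:
        gains[left + 1] = math.sin(frac * math.pi / 2)
    return gains

if __name__ == "__main__":
    for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(pos, [round(g, 3) for g in speaker_gains(pos, 5)])
```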


USO: What are your working tools for non-linear editing and sound signal processing?

CZ: At the GRM there are more or less all the programs - Pro Tools, Digital Performer - but there is a kind of consensus on the tools to use. We can quickly note that today's programs are a direct-line consequence of the techniques of the 1950s: with a tape or a disc we could go from one sound to another and create discontinuity; we could superpose sounds and create verticality, harmony; we had mastery of time, and thus of counterpoint and polyphony; we could increase or decrease the volume. We need to check that a program offers these minimal conditions to be exploitable, and looking at today's programs, we find continuity. With any program we can do everything - montage, superposition, regulating the volume, regulating the space - and in the end you do the same things as if it were a tape recorder. These are the fundamental operations of this music's world.



[Prev | 1 | 2 ]

Special Thanks:
Laure de Lestrange
Christian Zanesi
www.ina.fr/grm

[Listen to Magnetic Landscapes - Christian Zanesi | via deezer.com]

Related Posts: Daniel Teruggi (GRM, Paris): The novelty of concrete music.

An interview with Christian Zanési, pt. 1

by Matteo Milani and Federico Placidi, U.S.O. Project
English translation: Valeria Grillo


Christian Zanési is a French composer and the head of musical programming at the Groupe de Recherches Musicales (GRM). Zanési studied music at the Université de Pau, then in Paris at the CNSMDP under Pierre Schaeffer and Guy Reibel. Since 1977 he has been a member of the GRM, in charge especially of the production of radio programmes for France Musique and France Culture. He was recently awarded the Special Qwartz at the Qwartz Electronic Music Awards (Edition 5).
We met Christian Zanési at 104 for the fifth Présences Électronique Festival, organized in co-production with Radio France, which explores the link between the concrete music of Schaeffer and new experiments in electronic music.
Since its founding in 1958, the GRM has been a unique place for creation, research and conservation in the fields of electroacoustic music and recorded sound. The systematic analysis of sound material that Schaeffer proposed to his collaborators resulted in the publication of a seven-hundred-page reference book, the Treatise on Musical Objects ("Traité des objets musicaux"), published in 1966.
Michel Chion’s Guide To Sound Objects (PDF, English translation) is a very useful introduction to this voluminous work (via ears.dmu.ac.uk).
In this very complete essay, Pierre Schaeffer develops the main part of his new musical theory of the sound object. It is based on two principal ideas, making and listening, and explores the first two levels of this theory: typo-morphology and the classification of sounds. He draws on various disciplines such as acoustics, semiotics and cognitive science to support his explanation of sound and, in particular, of the musical object within musique concrète.


U.S.O. Project: You often mention that sound is the prime matter. How can we “orient” and recognize sounds we’ve already heard, without the cause-effect relationship?

Christian Zanési: The sound has to be considered in a musical context; it is not about recognizing the sound. It is an expressive matter, and every composer, every musician, has a kind of personal sound, a signature sound. When he transcribes it into his work, into an artistic project, the idea is not to play a guessing game - this is this sound, or this is that sound - there is a higher level that belongs to it, and this level is that of musical relations, expressive relations. When you listen to a concerto for cello and orchestra, for example, you are not pointing out at each instant "this is a cello sound"; you listen to the music. And it is the same with sound.
 So there is a second level, which is a little more complex. It is true that some sounds evoke an image, and from time to time there can be ambiguity; this is also part of a rhetoric, or of an ancient model, and it is possible that this provokes a kind of curiosity while at the same time getting mixed into the expression. Composers are very strange.


USO: In the first 50 years of electronic music, has the absence of visual elements during performances limited or enhanced the listening experience?

CZ: Well, naturally we, at our festival Présences Électroniques, prefer to bring attention to the sound. Sometimes there are audiovisual experiences, but nature, our physiology, gives the image priority over the sound. Since we have developed a very, very sophisticated projection system, we prefer that kind of listening experience; it is more complete, we don't need anything else. It is an orientation, and it is one of the strengths of our festival: to offer artists the best possible sound projection system, the best definition, and we think that is complete. We are very music-oriented. I have often seen musicians doing an audio-video performance, and I have rarely been convinced; sometimes yes, but it becomes something else, we descend to another level, and we want to explore the purely musical universe, sound and its programming.



USO: Can you give us a more in-depth definition of the concept of "l'écoute réduite" (reduced listening) by Pierre Schaeffer?

CZ: The first time that... well, in a few words it is difficult... In the early days we recorded on discs, before tape recorders were available, and when we reached the end of a disc, the last groove repeated indefinitely, at full modulation. In all vinyl records today there is a closed groove with silence, to prevent the stylus from running off the turntable. All the discs of that time - the equivalent of today's vinyl - ended on a closed groove, and Schaeffer listened to this phenomenon; everybody could hear it, and it was not silence. The fact that the sound repeated itself made him understand that we were finally listening to the sound itself - how it starts, whether it is more or less acute, more or less brilliant, more or less thick. In that moment he imagined that we could listen inside the recorded sound: that we could verify, validate, objectivize the phenomenon by hearing the peculiar characteristics of the sound - how it starts and how it ends, in which tessitura it lies, whether it is light or dark, whether it is loud, close or far. This is objectivizing listening; this is "l'écoute réduite".



USO: Can you discuss your vision of the sound organization principles? Definition of the sound object, objective and subjective aspects, articulation, iteration, quality of the information, musical language.

CZ: In classical music, the constituent elements of a work function organically; that is, there are consequences. When you put one sound together with another, this creates a harmony, for example a chord, and if you add another sound the chord completely changes in timbre; this is an organic principle. We can do the same with the sounds the composer chooses, imagining music sometimes as a vertical relation, sometimes as a horizontal relation. This is an organic system, and an organic system is a biological system, in some way copied from the living. Since we function following the principles of the living, we are particularly able and specialized to recognize an organic system, and at that moment something starts in our brains and there is a kind of communion. This is what happens: how to organize the sound, why put this with that, and why, after this, add that - they are organic relations.



USO: What do you think of hidden causality (between what the audience sees and hears)?

CZ: The ability to listen is a very complex phenomenon, so when a composer works, it is difficult to be conscious of everything; an artist works largely in an intuitive manner, and intuition is a super-computer that gives us the answer immediately. But if we talk about the genesis, about why we made certain choices, it takes an eternity. The sound is not received by a single "ear": there is a very ancient ear, for example - the ear we use to cross the street, the hearing of danger. Hearing is a prolongation of the sense of touch. Imagine being in the forest when a predator hunts: if it touches you, it is too late, you are dead. So hearing is a prolongation of touch; it allows you to foresee the arrival of a predator. There are various hearings - one that detects aesthetic information, one that detects information about the behavior of others - so when you work with sound you alternately activate many "ears". When you begin a piece of music by having a sound turn around the listener, you awaken the ancient, primitive "ear", the hearing of danger, since in nature something that circles you is something that is trying to catch you, and a composer is in some way conscious of this mechanism and plays with it. It is very difficult to answer this question because the ear is very complex, and the reality around us is very complex. But we are all naturally gifted at hearing; without it we could not cross the street, we could not hear something behind us, we could not perceive what is around us through sound.


[ 1 | 2 | Next ]

Tuesday, October 06, 2009

#KISS09: Kyma International Sound Symposium

Here's the hashtag to easily find and aggregate tweets related to the upcoming Kyma International Sound Symposium: #KISS09.
A hashtag is just a short character string preceded by a hash sign (#). You can find hashtags of interest via Twitter's own search function: see search.twitter.com.
You can subscribe to the RSS feed for your favourite tagged Twitter updates, such as those tagged with #KISS09. This means that you don't have to be a Twitter user to follow the conversation — it's visible to anyone.
The feed will deliver any new #KISS09-tagged updates from Twitter to your favourite news reader (e.g. Google Reader).
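For the programmatically inclined, a tagged search feed like this can also be polled with a few lines of code. Below is a minimal Python sketch using the third-party feedparser library; the feed URL follows the search.twitter.com Atom pattern in use at the time, so treat it as illustrative rather than guaranteed.

```python
import feedparser  # third-party: pip install feedparser

# Atom feed of #KISS09-tagged tweets (illustrative URL based on the
# search.twitter.com feed pattern of the time; it may have changed).
FEED_URL = "http://search.twitter.com/search.atom?q=%23KISS09"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Each entry carries the tweet text (title) and its author.
    print(f"{entry.author}: {entry.title}")
```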

[RSS feed of usoproject's tweets]

Photos of the event will be available via Flickr (tagged with KISS09).

[RSS feed of the Flickr photoset]

[Facebook group for Kyma X users - by Cristian Vogel]

Little by little, one travels far.
-- J. R. R. Tolkien

Friday, September 25, 2009

'hist whist' by Marco Stroppa

Composer and computer scientist, Marco Stroppa is an artist for whom musical invention is inseparable from the exploration of new scientific and technological arenas. His series of works for solo instrument and chamber electronics offers the opportunity to explore novel methods for projecting sound in the concert hall and renew the computer paradigms that regulate the relationship between the worlds of instrumental and synthesized sounds.

The seventh short film in the "Images of a Work" series endeavors to understand his work through excerpts of rehearsals and interviews with the artists in the IRCAM studios.

[Click on the image above to see the video, French only -via Ircam]

Saturday, September 19, 2009

Generative Music: an interview with Peter Chilvers

by Matteo Milani, U.S.O. Project, September 2009

Generative music is a term popularized by Brian Eno to describe music that is ever-different and changing, and that is created by a system (Wikipedia). I recently had the chance to interview musician and software designer Peter Chilvers, who created the new iPhone/iPod touch application called Air (© Opal Ltd).
Based on concepts developed by Brian Eno, with whom Chilvers created Bloom, Air assembles vocal (by Sandra O'Neill) and piano samples into a beautiful, still, ever-changing composition that is always familiar but never the same.

Air features four ‘Conduct’ modes, which let the user control the composition by tapping different areas on the display, and three ‘Listen’ modes, which provide a choice of arrangement. For those fortunate enough to have access to multiple iPhones and speakers, an option has been provided to spread the composition over several players.
"Air is like Music for Airports made endless, which is how I always wanted it to be." - Brian Eno


“About 20 years ago or more I became interested in processes that could produce music which you hadn’t specifically designed. The earliest example of that is wind chimes. If you make a set of wind chimes, you define the envelope within which the music can happen, but you don’t precisely define the way the music works out over time. It’s a way of making music that’s not completely deterministic.” - Brian Eno

[via apple.com]


Matteo Milani: Thanks for your time. Peter Chilvers as a musician first: a few words about the 'A Marble Calm' project.

Peter Chilvers: I happened across the phrase 'A Marble Calm' on holiday a few years ago, thought it sounded like an interesting band name, then started thinking about the type of band that might be. The more I thought about it, the more it seemed to tie up a number of ideas that were interesting to me: drifting textural ambient pieces, improvisation and song. By making it a loose collective, it's enabled me to bring in other vocalists and musicians I've enjoyed working with on other projects - vocalists Sandra O'Neill (who also worked with me on 'Air' for the iPhone) and Tim Bowness, marimba player Jon Hart and flautist Theo Travis.


MM: When did you start working with generative music?

PC: In the '90s I worked as a software developer on the 'Creatures' series of games. When we started on Creatures 2, I was given the opportunity to take over the whole soundtrack. The game wasn't remotely linear - you spent arbitrary amounts of time in different locations around an artificial world, so I wanted to create a soundtrack that acted more as a landscape. I ended up developing a set of 'virtual improvisers', constantly generating an ambient soundscape in the background - it was quite involved actually, with its own simple programming language, although little of that was visible to the user.

[...] Peter chose to use his background in improvised music to create an array of "virtual musicians" that would play along to the action on screen. Each composition in Creatures contains a set of "players", each with their own set of instructions for responding to the mood of the norns on screen.

Peter was able to generate much more interesting effects using recorded instruments rather than using General MIDI sounds generated by a soundcard, which can often be quite restrictive. This meant that he could take advantage of the many different ways that a note on a "live" instrument can be played - for example, on a guitar the sound changes greatly depending on the part of the finger used to strike a string, and on a piano when one note is played, all the other strings vibrate too. Also by altering the stereo effects, he could fatten the sound at certain times.

He also made use of feedback loops within the soundtrack. Feedback loops were first experimented with in the 1970s - if any of you remember Brian Eno, you may be interested to know that he composed much of his music then using this method. The idea is that you play a track and record it into RAM (onto tape back in the 1970s). After a short while (around 8 seconds in Creatures 2), the loop starts and the original sounds are played back, so the composer carries on creating sounds in response to what's gone before.

Behind the scenes, scripts control the music engine and set the volume, panning and interval between notes as the mood and threat change.

[via gamewaredevelopment.co.uk]
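
To make the ideas in that excerpt concrete, here is a minimal sketch, in Swift, of a mood-driven "virtual musician" with a feedback loop. Only the 8-second delay figure comes from the text above; the note pool, the 0.7 decay factor and every name are illustrative assumptions, not Gameware's actual code.

    import Foundation

    // A note as this sketch models it: pitch, loudness, stereo position.
    struct Note {
        let pitch: Int      // MIDI-style pitch number
        let volume: Double  // 0.0 ... 1.0
        let pan: Double     // -1.0 (hard left) ... 1.0 (hard right)
    }

    final class VirtualImproviser {
        private let pool = [60, 62, 64, 67, 69]              // assumed pentatonic note pool
        private var loop: [(time: Double, note: Note)] = []  // the feedback buffer
        private let loopDelay = 8.0                          // seconds, as in Creatures 2

        // `mood` runs from 0.0 (calm) to 1.0 (threatened) and drives volume.
        func nextNote(at time: Double, mood: Double) -> Note {
            // If the oldest buffered note has aged past the delay, replay it
            // more quietly and mirrored in the stereo field; the echo itself
            // re-enters the buffer, closing the feedback loop.
            if let oldest = loop.first, time - oldest.time >= loopDelay {
                loop.removeFirst()
                let echo = Note(pitch: oldest.note.pitch,
                                volume: oldest.note.volume * 0.7,
                                pan: -oldest.note.pan)
                loop.append((time, echo))
                return echo
            }
            // Otherwise improvise a fresh note; rising threat makes it louder.
            let fresh = Note(pitch: pool.randomElement()!,
                             volume: 0.3 + 0.6 * mood,
                             pan: Double.random(in: -1...1))
            loop.append((time, fresh))
            return fresh
        }
    }

The point is the one the excerpt makes: because each echo re-enters the buffer, the "improviser" is always answering its own past rather than following a fixed score.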


MM: Why did you choose the Apple platform to develop the applications?

PC: I've been a huge fan of Apple products for a long time, and their timing in releasing the iPhone couldn't have been better. Bloom actually existed in some form before the iPhone SDK was announced - possibly before even the iPhone itself was announced. From the second we tried running the prototype, it was obvious that it really suited a touch screen. And Apple provided one!

The difficulty developers have faced with generative music to date has been the platform. Generative music typically requires a computer, and it's just not that enjoyable to sit at a computer and listen to music. The iPhone changed that - it was portable, powerful and designed to play music.


MM: Who designed the visualizations of Bloom? Eno himself?


PC: It was something of a two-way process. I came up with the effect of circles expanding and disappearing as part of a technology experiment - Brian saw it and stopped me making it more complex! Much of the iPhone development has worked that way - one of us would suggest something and the other would filter it, and this process repeats until we end up with something neither of us imagined. Trope, our new iPhone application, went through a huge number of iterations, both sonically and visually, before we were happy with it.


MM: What kind of algorithms define Bloom's musical structure? Are they specifically based on Brian's requests or just an abstraction based on his previous works?

PC: Again, this is something that went back and forth between us a number of times. As you can hear, anything you play is repeated back at you after a delay. But the length of that delay varies in subtle but complex ways, which keeps the music interesting and eccentric. It's actually deliberately 'wrong' - you can't play exactly in time with something you've already played, and a few people have mistaken this for a bug. Actually, it was a bug at one point - but Brian liked the effect, and we ended up emphasising it. "Honour thy error as a hidden intention" is something of a recurring theme in Brian's work.
A forthcoming update to Bloom adds two new 'operation modes', one of which was designed specifically to work with the way Brian prefers playing Bloom.
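
How such a deliberately drifting delay might work can be sketched in a few lines. The following is purely hypothetical - Bloom's actual engine isn't public, and the 10-second base delay and ±250 ms jitter are invented for illustration:

    import Foundation

    // A sketch of "deliberately wrong" repeats: each note is echoed after a
    // delay whose length drifts by a small random amount on every pass, so
    // the repeats never line up exactly with what was already played.
    final class DelayedRepeater {
        private let baseDelay = 10.0   // seconds between repeats (assumed)
        private var drift = 0.0        // accumulated timing "error"

        // Returns the times at which a note played at `tapTime` is replayed.
        func repeatTimes(for tapTime: Double, count: Int) -> [Double] {
            var times: [Double] = []
            var t = tapTime
            for _ in 0..<count {
                drift += Double.random(in: -0.25...0.25)  // the error accumulates...
                t += baseDelay + drift                    // ...so repeats wander off the grid
                times.append(t)
            }
            return times
        }
    }

    // One tap at t = 0, replayed five times - close to, but never exactly
    // on, a 10-second grid:
    print(DelayedRepeater().repeatTimes(for: 0, count: 5))

Because the error accumulates rather than resetting, later repeats stray further from a strict grid - one plausible way to get the effect Chilvers describes, where you can never play exactly in time with yourself.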


MM: Does the graphics and audio engine rely on standard audio and video libraries, or did you write your own classes?

PC: I've built up my own sound engine, which I'm constantly refining and use across all the applications. It went through several fairly substantial rewrites before I found something reliable and reusable.


MM: Is all the code in Objective-C, or did you use any external applications?

PC: It's all Objective-C. I hadn't used the language before, although I'd worked extensively in C++ in the past. It's an odd language to get used to, but I really like it now.


MM: Is Bloom sample-based? What is the music engine actually controlling (e.g. triggering, volume, panning, effects)? What about the algorithmic side of the music engine?

PC: Bloom is entirely sample based. Brian has a huge library of sounds he's created, which I was curating while we were working on the Spore soundtrack and other projects. It's funny, but the ones I picked were just the first I came across that I thought would suit Bloom. We later went through a large number of alternatives, but those remained the best choices.

The version of Bloom that's currently live uses fixed stereo samples, but an update we're releasing soon applies some panning to the sounds depending on the position of each 'bloom' on screen. It's a subtle effect, but it works rather well.
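
That kind of position-dependent panning is straightforward to sketch. The function below is an assumed illustration, not the app's real mapping: it uses the standard equal-power (sin/cos) pan law, and the width factor scaling the screen down to part of the stereo field is a guess, chosen to keep the effect subtle.

    import Foundation

    // x runs from 0.0 (left edge of the screen) to 1.0 (right edge).
    func panGains(forX x: Double, width: Double = 0.5) -> (left: Double, right: Double) {
        let pan = (x - 0.5) * 2.0 * width     // map the screen to -width ... +width
        let angle = (pan + 1.0) * .pi / 4.0   // 0 ... pi/2 across the full stereo field
        return (left: cos(angle), right: sin(angle))
    }

    // A bloom tapped slightly right of centre is gently biased right:
    let gains = panGains(forX: 0.7)
    print(gains)   // left ≈ 0.59, right ≈ 0.81

The equal-power law keeps perceived loudness roughly constant as a sound moves across the field, which suits an effect meant to be felt rather than noticed.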


MM: Would you like to describe your current and upcoming projects?

PC: I've been involved in two new applications for the iPhone: Trope and Air. Both apps were intended to be released simultaneously. Trope is my second collaboration with Brian Eno, and it takes some of the ideas from Bloom in a slightly different, slightly darker direction. Instead of tapping on the screen, you trace shapes and produce constantly evolving abstract soundscapes.

Air is a collaboration with Irish vocalist Sandra O'Neill, and is quite different to Bloom. It's a generative work centred around Sandra's vocal textures and a slowly changing image. It draws heavily on techniques that Brian has evolved over his many years working on ambient music and installations, as well as a number of the generative ideas we've developed more recently.

I have just had some interesting news: Trope has been approved - it's now available in the App Store!

More information can be found at www.generativemusic.com.

"Trope is a different emotional experience - more introspective, more atmospheric. It shows that generative music, as one of the newest forms of sonema, can draw on a broad palette of moods." Brian Eno
[Brian Eno discussing Generative Music at the Imagination Conference, 1996]

UPDATE: Trope in action!


"[...] I had realised three or four years ago that I wasn't going to be able to do generative music properly – in the sense of giving people generative music systems that they could use themselves – without involving computers. And it kind of stymied me: I hate things on computers and I hate the idea that people have to sit there with a mouse to get a piece of music to work. So then when the iPhone came out I thought: oh good, it's a computer that people carry in their pockets and use their fingers on, so suddenly that was interesting again." - Brian Eno
[via timeoutsydney.com.au]

Related Post: Deep Green: sound design for iPhone App

Thursday, September 17, 2009

Ben Burtt to Receive the 'Charles S. Swartz Award'

[Academy Award winners for Best Sound Effects Editing Ben Burtt and Charles L. Campbell, with Jamie Lee Curtis and Carl Weathers - 1983]

The Hollywood Post Alliance has announced that Ben Burtt will receive the organization’s Charles S. Swartz Award for Outstanding Contribution in the Field of Post Production, recognizing his powerful artistic impact on the industry. The award will be bestowed on Mr. Burtt on November 12th during the Hollywood Post Alliance Awards gala at the Skirball Center in Los Angeles.

HPA Awards co-founder and committee chair Carolyn Giardina said, “We are thrilled to recognize Ben Burtt with the Charles S. Swartz Award, an honor that represents everything that the HPA stands for: creativity, technical excellence, and limitless thinking. From R2-D2 to WALL-E, Ben Burtt has helped create some of the most unforgettable characters of our generation. We are honored to present this award to him.”

The Charles S. Swartz Award was created to honor individuals who have made outstanding contributions to the field of post production, an industry with an expanding creative palette, in dynamic transition as a result of digital technologies and societal changes. The award was named in honor of the late Charles Swartz, who led the Entertainment Technology Center at the University of Southern California from 2002 until 2006 and helped build it into the industry’s premier test bed for new digital cinema technologies. In addition to a long and successful career as a producer, educator and consultant, Mr. Swartz served on the Board of Directors of the HPA. Leon Silverman, President of the HPA, noted that “Ben Burtt’s career and accomplishments speak to the true spirit of this award, which recognizes impactful contributions. Ben Burtt’s impact on the art and craft of post production and on our cultural legacy should be celebrated.”