Though The Sounds of Star Wars is a gold mine for Star Wars buffs, it's designed to appeal to more casual audiences as well.
"This book is meant to appeal to anybody who's interested in cinema or sound," Rinzler says.
The idea came from an outfit, Becker and Mayer, that had just published a birdsong book.
"They came to us at LucasFilm and said, 'You know, we think the technology is good enough so we could do a Sounds of Star Wars book,'" Rinzler says. "I contacted Ben, who was, I think, justifiably skeptical."
Burtt was afraid the explosions wouldn't sound right, but he came around eventually. When they decided to go ahead and do it, he suggested the book include an earphone jack.
"So kids can take the book to bed and get under the covers with their headphones and their parents won't know," he says, laughing.
Through study and experimentation Walt Disney and his engineers have found that by introducing music or various sounds and noise frequencies into the cartoon, the response of the audience can be varied and controlled. By combining noises of certain pitches or tempos, the psychological values of the cartoon music are emphasized in keeping with the story requirements.
Stories are told by sound. Vance DeBar "Pinto" Colvig, who wrote the lyrics for “The Three Little Pigs” and who does many of the sound “imitations” for automobiles, airplanes, or machines when they assume human characteristics, is able to convey a whole story by sound. For example, he caricatured a steam roller at work by suitable noises, pops, puffs, razzes, and wheezes. Vocally, without the aid of mechanical devices, he depicted a narrative episode of a very busy steam roller that worked hard, then got tired and stopped. To do this he made a picture on paper of the sound, working out the suitable suggestive sounds and inflections and setting them down on music paper according to the desired effects, tonal range, and tempo, which brought the pen and ink steam roller to life.

“Pinto” Colvig, with the aid of a trombone and vocal sounds, can make an airplane do all sorts of antics. Real airplane sounds cannot be controlled to musical tempo for cartoon effects.

Each sound in a cartoon film is the result of much thought. Such things as bugs of all sizes getting the hiccoughs, which happened in “Mickey’s Garden” when Mickey sprayed them with “bug eliminator,” required hours of rehearsal before the sound was recorded. Some sounds demand hours of rehearsal, and fifty or sixty hours of work go into the sound for the average cartoon.
While much of the sound is made vocally, mechanical devices and various materials are also used. One kind of slow-burning fire noise is made by crinkling cellophane, while a more crackling fire sound is gained by twisting a bundle of bamboo strips. A train getting under way is obtained by a tin can in which is a handful of gravel. By shaking the can and gravel up and down, the noise of a real railroad train is created. Another “train noise maker” consists of a number of wires held at one end by hand. The other end of the wires is rubbed over a sheet of corrugated tin. Thunder claps of various kinds are obtained by cowhides and sheets of metal, while wind noises are made by rapidly revolving a wheel with wire spokes. Another hollow ghostly wind noise is made by revolving a wooden drum against taut silk. Rain noises are made in a drum in which are stretched piano wires. Particles of glass dropping against the wires when the drum is revolved create the sound.
Many devices are specially constructed in order that certain sound frequencies are dampened so the “imitator” has better control over the pitch and volume of the noise. Special resonators in the pianos, doors set in elaborate resonating jambs so “door-closing” noises may be made for cartoon purposes, and many other constructions are necessary.
Walt Disney, who knows the dramatic value of various sounds and musical tempos, has constructed many novel devices in order to produce the necessary sounds.
The background orchestral music is valuable in directing the emotional responses of the audience, and is used to elaborate on the cartoon story by what Walt Disney prefers to call the “earical illusion.”
Music and sound have become an exact science with Walt Disney. The music and sound effects in his cartoons psychologically tell the film story for the ears as the picture on the screen does for the eyes.
A Movie made of algorithmically generated “inactive spaces” is projected on a screen.
An overlapped stream of pre-recorded “sound activities” is then diffused from a record player and from 4 different iPods running in shuffle mode, creating recombinant “invisible actions” to fit into the Movie.
A self-organizing link between sound and visuals is established via cybernetic procedures defined as interconnected spin networks, produced by a video camera “observing” the Movie and by a microphone “listening” to the space, both placed inside the performance Locus.
The Kyma sound design environment (accelerated by the Pacarana sound computation engine) is then engaged in order to compute the data and perform real-time evaluations between the different types of numerical information (audio-video), producing a “sonorous response” to the asynchronous stream of audio-visual contents.
The synthesized information is then diffused in the performance space again through 4 loudspeakers.
Various types of feedback will take place during this highly dynamic process, implying a self-regulating behavior that will establish new connections between the pacing of the Movie locations and the “sonorous” content produced by the processing of the iPod sound streams.
The Observer will then experience the following layers of information:
- a Real-Time recombinant Movie made of “inactive” locations.
- an Overlapped Stream of “possible actions” diffused by the iPods that fits into the Movie.
- a Sonorous link between the above domains of activities via 4 full range loudspeakers.
The Observer can take into account one or more layers of information (even all of them) in order to create a cinematic experience for himself via a correlation process.
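As a rough illustration of the kind of cybernetic coupling described above, a minimal control loop in Python might blend a video feature and a microphone level into one smoothed parameter. The feature extractors here are hypothetical stand-ins, not the actual camera/microphone analysis or the Kyma patch used in the piece:

```python
import random

def video_brightness():
    """Stand-in for a camera analysis stage (hypothetical)."""
    return random.random()           # normalized 0..1

def mic_rms():
    """Stand-in for a microphone level follower (hypothetical)."""
    return random.random() * 0.5     # normalized 0..0.5

def control_loop(steps=1000, smooth=0.95):
    """Couple the two observers: each step cross-modulates the video
    and audio features into one target value, then low-pass filters it
    into a slowly evolving control state - the kind of value a sound
    computation engine would map to synthesis parameters."""
    state = 0.0
    history = []
    for _ in range(steps):
        target = video_brightness() * (1.0 + mic_rms())   # cross-modulation
        state = smooth * state + (1.0 - smooth) * target  # smoothing feedback
        history.append(state)
    return history
```

The smoothing term is what gives such a loop its self-organizing feel: the control value depends on its own past as much as on the incoming observations.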
International festival of music, mixed media and experimental electronic art
[a project by AntiTesi (Domenico Sciajno) and U.S.O. Project in collaboration with O' and Die Schachtel]
Live!iXem is a well-established event that explores the relationships among sound, music, electronic arts and new technologies. iXem is a project conceived and produced since 2003 by AntiTesi, an organization founded and directed by Domenico Sciajno and based in Palermo. A center for documentation, research and interdisciplinary development of new art forms, AntiTesi focuses on contemporary art in its diverse expressions, and coordinates and organizes cultural events, festivals and exhibitions in music and art.
Festival Live!iXem in October 2010 is divided into two events, a 'preview' in Turin on October 8th, organized with Hiroshima Mon Amour, and the main festival in Milan including live performances, installations and a workshop, organized in conjunction with O' and Die Schachtel.
The field is cross-disciplinary: the common denominator is not genre but experimentation and research toward innovation, exploring the variety of aesthetic approaches, tools and technological solutions adopted by the selected international artists active in the contemporary music scene.
After the introductory event in Turin on October 8th - the Live!iXem Preview - the festival proper will take place in Milan's Isola area, across the spaces of O', Medionauta and VisualContainer, on Friday 29th and Saturday 30th October, from morning until late at night, offering a busy schedule of events.
Among these, the workshop 'Movement of acoustic images' - whose theme is field recording, its transformations and its reproduction in acousmatic space - will consist of three phases: the first devoted to a technical introduction, the second to listening to environmental sound sources chosen as points on a hypothetical sound map, and the third to 'on-site' recording of sound events.
October 29th, 30th | Milan @ Live!iXem 2010 - Edition VII International festival of music, mixed media and experimental electronic art
Aims of the laboratory
Explore, through digital recording techniques, the universe of sound belonging to the "places";
Discuss some aspects of the use of specific material in electronic and instrumental composition.
The localization of sound in the environment focuses on the theme of "sound mapping" as an expression of social and cultural identity and, at the same time, of personal conscience and feeling, while its use in music opens important interdisciplinary horizons.
The workshop's aim is to provide an introduction to the theory and practice of the main mobile sound recording techniques and to the use of recorded sound sources in different fields, from sound design to cinema, from digital editing to the use of digital sources in instrumental composition, electronic music and live electronics. Music compositions and excerpts based on field recordings will also be introduced, listened to and discussed. All members are invited to take part in the soundwalk through the "Isola" neighborhood in Milan and thus contribute, during the workshop, to the creation of a small sound map of the place. The pieces obtained at the end of the workshop will be diffused through a multi-channel loudspeaker setup and then made available to the public in the form of soundscape compositions.
To participate in the workshop you must register (spaces are limited) no later than 23/10/2010 and pay the fee (€ 60.00).
For more information about the workshop and registration, please write to workshop [at] antitesi [dot] org or visit the Facebook and Twitter page.
All subjects and activities of the workshop/laboratory are addressed at an introductory and preliminary level, combining theory and practice with some helpful audiovisual material. Members are encouraged to get actively involved, joining the discussion and bringing in their own experience. Weather permitting, participants will be guided on a short sound-geo path through the "Isola" neighborhood in Milan and invited to contribute to the creation of a work based on the soundwalk. No special technical or musical knowledge is required, though familiarity with digital audio recording/editing basics can be an advantage.
The workshop is addressed to sound designers, musicians, composers, sound engineers and experts who wish to consolidate and compare their experience in an environment of exchange and sharing, as well as to all the "listening lovers" and those with an interest and curiosity about soundscape and electronic composition.
There are no special requirements. The workshop is open to all without any age limit. Entries remain open until all available seats have been booked.
- Mac/PC laptop (useful but not essential during the editing and sound processing sessions);
- owners of a portable recorder are asked to bring it for the soundwalk in the "Isola" neighborhood;
- Roland Italy will provide participants with two digital recorders, the R-09HR and the R-05.
O' | residences | photography | sound | performance
non-profit association | via Pastrengo 12, 20159 Milan (Italy)
tel +39 02 6682 3357
O' is a non-profit organization for the promotion of artistic research, founded in May 2001 by Sara Serighelli and Angelo Colombo as O'artoteca (O' since 2008). Its activity is divided between a large exhibition space, a consultation and archive area, and an outside lab, L.A.B.-LaboratorioArtibovisa, for production related to photography and printing. It develops exhibition projects and discussions, performances, concerts, lectures and publications; it is a place where artists can experiment, test and engage with their work.
"The RAI Studio of Musical Phonology is the outcome of the encounter between music and the new possible means of analyzing and processing sound" - Luciano Berio
More than fifty years after the birth of analog magnetic recording, on the 17th of September 2008 the Museum of Musical Instruments at the Castello Sforzesco in Milan inaugurated a new space dedicated to the RAI Studio of Musical Phonology - "musical instrument of the 20th Century, extension of human thought". The event was made possible thanks to the MITO International Music Festival, in collaboration with the Civic Museum of Musical Instruments and RAI.
Maddalena Novati, musical consultant for RAI Radiophonic Production and responsible for the Phonology archive - thanks to the decisive contribution of Doctor Massimo Ferrario, Director of the RAI TV Production Centre (Milan) - was able to transfer all the Studio equipment from RAI Turin to the Milan headquarters. This is the very first plan for the recovery, storage and refurbishment of electrophonic musical instruments.
Maddalena Novati describes this niche of the Museum as the "20th Century lute shop". The idea of conceiving this space as a single instrument in its entirety is moving: so many experiences are enclosed in those devices that it is still possible to perceive the residual energy that characterized the entire handcraft process of sound-writing. The Milan Studio of Musical Phonology, designed by physicist Alfredo Lietti, was created in June 1955 by Luciano Berio and Bruno Maderna at the RAI headquarters in Corso Sempione 27. During that year, Milan was on the verge of becoming a pivotal point of the post-war international electroacoustic music scene, through a new expressive language that was a synthesis of the concrete and electronic experiences happening in Europe at the Studio für Elektronische Musik (WDR) in Cologne and at the Groupe de Recherches Musicales (GRM) in Paris. Among the electronic music productions at the Studio of Phonology, we must mention works such as Visage by Luciano Berio, Notturno by Bruno Maderna, Fontana Mix by John Cage and Omaggio ad Emilio Vedova, the only entirely electronic work created by Luigi Nono.
"...in the opaque Milan of the '50s, Berio and Maderna found a hostile, apathetic environment when opening the Phonology Studio. In a completely different situation from the newborn Studio in Cologne, the two masters built their ideas on the strong basis of their French experiences, through a different technical method that was free and imaginative. Creatively, it was the most relevant experience on the whole Old continent…" - Giacomo Manzoni
I always close my eyes and try to jump backwards in time, inside that Studio, imagining the noises, dialogues and sounds coming out of the loudspeakers: living and breathing the miracles that happened in that distant age. We now have the possibility to touch at first hand what I had always seen only in photographs and videos, and what so many protagonists of that time experienced (eminent musicians like Luigi Nono, Giacomo Manzoni, Aldo Clementi, Henri Pousseur, John Cage). In room XXXVI of the Museum of Musical Instruments, thanks to a glass structure designed by the architect Michele De Lucchi, the backs of the eight famous frames containing the circuits are open for everyone to enjoy, allowing spectators to get to the heart of the analog technologies with a 360° vision. Based on original pictures and videos from the time, the atmosphere of those years has been recreated.
Further information about the sounds that characterized the second half of the last century is available to the public of devotees and researchers, thanks to four computer stations with multimedia applications and a digital library of photographs, footage, sound examples and scores (curated by the LIM Laboratory of the Università Statale di Milano).
The Studio, a heritage fundamental to understanding electroacoustic music writing, was at first experienced by composers as a means of emancipating themselves from traditional instruments, with its 9 oscillators, noise generators, various modulators, filters and the Tempophon (a device with rotating heads that allowed the playback duration of a previously recorded sound to be varied while maintaining the original pitch).
Those were the times of technicians in white lab coats, yet one particular person changed this professional profile: Marino Zuccheri. Born on the 28th of February 1923, he was hired by EIAR in 1942; the following year he left his job because of the war, but was re-hired a few years later by the newly founded RAI.
"... I like remembering Marino in his Phonology Studio, master among masters, master of sound among masters of music, because sound held no secrets for him, since he was trained in auditoriums while working for the Radio together with the most famous directors of the time. He would always recall how he had begun working in Phonology by chance, but it is certain that it wasn't by chance that he continued over the years, considering he was the sole keeper of the Studio from when it was created (1955) until it closed down (1983)." - Giovanni Belletti, "Marino Zuccheri in Fonologia", 2008
He was under no obligation to give advice, contributions or suggestions, yet musicians would follow his instructions on how to realize their compositions: without him, much of last century's music would never have been born.
"... All the protagonists of the Neue Musik passed through the Studio, and it is fair to say this: many of them were in Milan on scholarships and had to present a final composition at the end of their term, and sometimes the stay had not been long enough to master the nine oscillators' secrets, so the great Marino Zuccheri would put together an acceptable composition with a few touches; thus many of electronic music's "incunabula" are his works, and not those of the people who signed them." - Umberto Eco, La Repubblica, 29 October 2008
It was an amazing adventure that lasted many years, until 1983 to be precise, the year of his retirement (Zuccheri passed away in Milan on the 10th of March 2005).
"Marino Zuccheri's demise was a great loss, not only for what he meant for contemporary music, but above all for what he still could have done: he was to be involved in an important RAI project to catalog the tapes (as he himself defined the work) that would have given us a fundamental technical, artistic, musical and cultural insight into the history of the Phonology adventure (another of his own definitions) - not to restore the sound itself (which could be done by others when needed), but to revive the ideas and technical intuitions that made the creation of that sound possible: Marino (along with the composers) was the only one who could help us!" - Giovanni Belletti, "Marino Zuccheri in Fonologia", 2008
"[..] Two of the first electronic works in my record collection - Berio's Visage from 1961, and John Cage's Fontana Mix from 1958 - were created there with Zuccheri. Even today, both of these pieces sound impressively vivid and dynamic, and what we should now recognise is that such qualities should be attributed to the technician as much as to the composer. [..] Zuccheri appears to fit the profile: Parete 1967, composed for painter Emilio Vedova for the Italian Pavilion at Montreal Expo, 1967, was his only known work. [..] Luigi Nono was his first choice as composer, but Nono's schedule prevented that, so Zuccheri stepped in to assemble a 30 minute continuous work using previously recorded sounds built up from long intersecting tape loops. Zuccheri's modest opinion of himself was that he was no composer. Certainly there's very little sense of form in Parete 1967, but the dramatic contrasts of harsh noise, perhaps sourced from piano strings and struck metal, and shifting, modulating drones suggestive of vocal choruses, have something in common with the ritualistic side of Iannis Xenakis, or the best horror movie soundtracks. To the regret of his label Die Schachtel, who have produced another of their sumptuous limited edition vinyl releases here, Zuccheri died before seeing the publication of his only record." - David Toop, The Wire, 2008
After the Studio closed down, the equipment was disassembled and exhibited for a short period in Venice, on the occasion of the temporary exhibition Nuova Atlantide, organized in 1986 by the Biennale (with the collaboration of Roberto Doati and Alvise Vidolin), and in Milan for I piaceri della città - Iconografia delle sensazioni urbane in 2001, where Il risveglio di una città was evoked through music thanks to the futurist composition of the same name by Luigi Russolo and to Ritratto di città, the first electroacoustic composition of the Fifties (voices and tapes by Luciano Berio and Bruno Maderna, text by Roberto Leydi), created in order to convince the direction of RAI to create the Studio.
At the end of the Eighties there was still no awareness of what the Phonology Studio had meant historically, to the point that all the documentation had been deposited (packed and cataloged) in a storage room at the RAI Museo della Radio in Turin, together with all sorts of disused equipment such as video cameras, tape recorders, record players and microphones, with no plans for restoration or rebuilding.
Thanks to Maddalena Novati's interest, in 1996 the Studio devices were displayed in Turin's Music Salon and in 2003 they were brought back to RAI in Milan and located in a room on the fifth floor which is adjacent to the one where the Phonology Studio originally was. On June 20, 2008 they were officially transferred to the Castello Sforzesco.
It was a great pleasure for me to interview Maddalena Novati, to talk about what the unique experience of the Milan Studio of Musical Phonology was and meant, and to understand what consequences and developments this process of recovery could lead to.
Matteo Milani: Maddalena Novati, why, in your opinion, did the Studio become a myth?
Maddalena Novati: In 1955, owning 9 oscillators tuned to different frequencies, as opposed to the single one in Cologne, was like having a "whole orchestra" at your disposal, one that could generate many sounds simultaneously, as in a chord. This way, you already had a handful, a palette of sounds that drastically cut production times. The human ear was the only judge of whether a sound was good or not, and after several attempts and mistakes, interesting tapes would be recorded and stored, a process that continued until a result was achieved. Berio would broadcast each new composition from the other Studios around Europe on the radio in order to spread the repertoire. The 11 TV episodes of C'è Musica e Musica (1972), on the history and ways of making music, are fascinating, with explanations by Luciano (Berio, author's note). He was a great teacher on top of being a great composer, a person who could communicate and bring music to large audiences.
Berio developed an intense teaching activity in the United States and Europe, offering composition courses at Tanglewood (1960 and '62), the Dartington Summer School (1961 and '62), Mills College in California (1962 and '63), Darmstadt, Cologne, Harvard University and, from 1965 to '72, the Juilliard School of Music in New York. From 1974 to '79 he collaborated with IRCAM in Paris. Berio's "Un ricordo al futuro - Lezioni americane", published by Einaudi, is a beautiful book collecting the lectures on aesthetics he gave in the US.
MM: When and how did Phonology's period of decline begin?
MN: After the Sixties, at the beginning of the Seventies, radio was no longer the core of research (no longer being an experimental medium). Computers came up, and research centers moved elsewhere: large calculators were owned by universities, and musicians depended on the physics and science departments. The computing machines had to run day and night, and only when the physicists left their workplaces could musicians take their place at night to perform calculations for their compositions. The radio was no longer involved, as the broadcasting media did not yet use computers and their paths had diverged. Due to the lack of updates to its equipment and technology, the Phonology Studio was less and less attended by composers. Moreover, the defection of some big names (Berio moved to IRCAM in Paris, Luigi Nono to Freiburg, Maderna passed away prematurely in 1973) contributed to an inevitable decline.
It is again Maddalena Novati who tells us that today the archives (original master copies) of the Phonology Studio hold 391 quarter-inch audio tapes with one or two tracks, as well as one-inch tapes with four tracks, plus 232 digital copies from acquisitions (copies of works coming from other studios and centers for electronic music, and recordings of concerts or performances by the main authors and interpreters who frequented the Studio during its active years). Since 1995 the custody of the tapes has remained her primary objective, along with their cataloging and digitization, in cooperation with Casa Ricordi (the oldest and most important Italian publisher), the central Nastroteca of RAI and the Mirage Laboratory of the Gorizia University.
No real decay of the tapes has occurred, thanks to the good quality of the BASF tapes, which were selected over other brands like Scotch or Agfa. Survival of the audio documents is possible only by separating them from their physical support and periodically transferring them to new supports. The second digital transfer, at 96 kHz/24 bit, is currently ongoing at the Centro di Produzione TV and Produzione Radiofonia of RAI Milano, in cooperation with the Mirage laboratory.
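For context, the data volume implied by a 96 kHz/24-bit transfer is easy to work out; a quick sketch (the two-channel figure is an assumption, since the tapes hold from one to four tracks):

```python
def pcm_bytes_per_minute(sample_rate=96_000, bit_depth=24, channels=2):
    """Uncompressed PCM data volume for one minute of audio."""
    bytes_per_second = sample_rate * (bit_depth // 8) * channels
    return bytes_per_second * 60

# One minute of a two-track tape digitized at 96 kHz/24 bit:
per_minute = pcm_bytes_per_minute()   # 34,560,000 bytes, i.e. ~34.6 MB
```

So a twenty-minute two-track work runs to roughly 0.7 GB per transfer generation, which is why periodic migration to new supports is a real logistical commitment.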
As for restoration, the most common task is the elimination of noise from the analog tapes; depending on the tape's condition and the specific nature of the music, other interventions are decided on a case-by-case basis. As a start, one has to perform an initial search to detect the possible existence of other copies of the same music, as it would then be possible to exploit the best parts of each copy and, with suitable techniques, improve the effectiveness of the cleaning work.
In many works of electronic music, the original tape contains scratches and splices glued with adhesive tape whose glue loses its adhesive properties after a few years. It therefore often happens that one has to re-join various pieces of tape before copying it onto a new support. On some tapes there are detached magnetic fragments, while on others parts of the material or protective film may have melted and deposited on the sound head, blocking the progress of the tape. On top of these mechanical problems, there can be electromagnetic problems, problems due to the type of equalization used during recording, and the fact that the oldest tapes could only support a proportionally smaller quantity of magnetization.
Reconstructing the laboratories
As I already had the chance to point out, very often the production of electronic music is not linked to a specific instrument, as happens with traditional music, but rather to a whole set of equipment commonly called a system. Hence the conservation of a single element of a system does not give full testimony of the modus operandi of a musician in a given period. Without doubt, the most effective solution is reconstructing a lab where one can reproduce all the phases of the production process of a musical work. In Cologne, for instance, they reconstructed and brought back into function a lab for electronic music with the same configuration it had in the Fifties. In a similar way, this is happening at the museum in The Hague for the studio of the Institute of Sonology of Utrecht University as it was in the Sixties. In Paris, at the Parc de la Villette, they are setting up a large section dedicated to electrophonic musical instruments, up to the real-time computer music experiences of the Eighties.
[Vidolin A., "Conservazione e restauro dei beni musicali elettronici", in Le fonti musicali in Italia - Studi e Ricerche, CIDIM, year 6, pp. 151-168, 1992]
[the Italian version of the article is also available @ Digimag 55]
[listen: Ricordo di Marino Zuccheri | ITA - via Radio3 Suite]
While reading this article by Paul Liberatore (via the Marin History Museum), we discovered that our hero Ben Burtt has just finished editing "Red Tails", the upcoming Lucasfilm movie about the Tuskegee Airmen, the African-American pilots who fought in World War II. And he's about to start sound work on "John Carter of Mars", the first live-action film by Andrew Stanton.
Ben will join his daughter Emma Burtt for the premiere of Adventures in Time and Space, the latest chapter of Emma's Time Machine, a 90-minute documentary on the history of Marin County. Narrated by Emma, the film also looks back at the early days of Lucasfilm in San Anselmo and includes clips of the Marin County locations used for motion pictures, from the days of silent movies to Indiana Jones.
"This film is really the result of a lot of little stories," Ben Burtt said. "I decided that we'd try to connect them all together in a kind of walking tour idea that juxtaposes the past and the present. It's nostalgia. I've always had a great interest in finding some way to connect with the past."
Sound Researchers and Practitioners Convene to Discuss 'symbolic sounds' at #KISS10
Can music and sound effects be symbolic?
Sound designers, composers, sound artists, film makers, researchers and others sharing an intense interest in sound will be convening in Vienna (Austria) from the 24th through 26th of September to discuss this and other, similarly controversial, issues at the annual Kyma International Sound Symposium (#KISS10).
Lively discussions, electronic/computer music performances, interactive installations, demos, workshops, and paper sessions will be held in the newly renovated Casino Baumgarten, a spectacular ballroom built in 1890, and the Rhiz Bar Modern, a showcase for experimental media in Vienna.
[view from the Stage at Casino Baumgarten]
Join in the free on-line discussion
Anyone having an intense interest in sound is invited to join in the free on-line discussion at pphilosophyofsound.ning.com. Sign in and introduce yourself by sharing your earliest sonic memory. Discussions on sound, symbol, and meaning have already begun and will continue during and after the symposium. You can participate on line whether or not you are able to attend the symposium in Vienna.
To register for #KISS10:
The registration deadline for participating in #KISS10 is 13 September 2010. The participation fee of € 90 (€ 40 for students) includes the 3-day symposium, morning/afternoon refreshments, and 2 evening concerts.
The Kyma International Sound Symposium (KISS) is an annual conclave of current and potential Kyma practitioners who come together to learn, to share, to meet, to discuss, and to enjoy a lively exchange of ideas, sounds, and music! Kyma is a sound design environment created by Symbolic Sound Corporation. This year, the KISS symposium is being organized by Wiener Klangwerkstatt in cooperation with Symbolic Sound and with the support of Analog Audio Association Austria, Preiser Records, and the Austrian Federal Ministry of Science and Research.
The works of Agostino Di Scipio include compositions for instrumentalists and electronics as well as sound installations. Some of these explore non-conventional approaches to the generation and transmission of sound, with a special focus on phenomena of noise, turbulence and emergence. Other works implement dynamical networks of live sonic interactions between performers, machines and environments (e.g. his Audible Ecosystemics project).
FP: Let's talk about your early works, before the 'ecosystemic' paradigm. How different was it from your more recent work?
AdS: Well, I had a very early phase when I gathered as much knowledge as possible about computer music techniques and digital signal processing. That included a special interest in algorithmic composition, too, the question being how I could formalize musical gestures of use in writing for either conventional instruments or electronics. In retrospect, I view that time as one of broad exploration of sonic materials, which eventually took me, later on, to focus on granular, textural and noisy materials, of a kind I later described as "sound dusts". It took me, in short, to micro-composition, i.e. to a focus on the finest temporal scales in sound - with various degrees of density and consistency among sonic grains or particles. The idea was that micro-composition would let macro-level, gestural properties emerge at larger time scales. I tried to determine a process in sound in such a way that lower-level processes would bring forth larger sonic gestures.
Along this path, I even developed new synthesis techniques based on the mathematics of nonlinear dynamical systems, as found in the so-called 'chaos theory' - that was at the end of the 1980s and early 1990s, when chaos (or better: the mathematics of nonlinear dynamical systems) was not as popular as it later became. I came across it and studied it quite in depth for some time.
Now, all those efforts usually provided me with sound materials for studio works. But at some point I felt a need to find my own way into live electronics performance. I had stayed far removed from that, because I was completely dissatisfied with how live electronics, including real-time interactive computer music, was approached at the time. Or, at least, I didn't want to follow those paths...
FP: What made you unhappy with extant approaches?
AdS: It was mainly because of the obvious linearity in the unfolding of time. And the fact that peculiar electroacoustic possibilities and artifacts - for example Larsen tones (feedback sounds) - were understood exclusively as a problem of audio engineering, extraneous to the desired sounding results. I felt they could instead be taken as the very resources to be controlled and exploited in a truly live electroacoustic situation. I was dissatisfied with the usual notion that the technology was there to 'neutrally' represent and convey musical signals, as if the tools and their idiosyncrasies were not part of the experience; instead of pretending to set them aside, I felt they could be studied and turned into sonic resources, the very medium of experience. Then, and maybe more importantly, I realized that the processes I was dealing with in a formalized manner were abstract models rid of any surrounding space, separate from any source of noise and risk in their own unfolding. They were models, as it were, not the thing itself. That confined me to performance as representation, as the (imperfect) replication of an ideal. I realized I could use the electroacoustic equipment and computers to implement dynamical processes, to make real nonlinear systems exposed to ambient noise and the idiosyncrasies of the electroacoustics, exposed to these sources of uncertainty and change. No more software modelling of abstract dynamical systems, but the implementation of a self-regulating sounding system in contact with the surrounding environment. In a way, this was a move from models of existing, and usually extra-musical, systems or processes, to the design of a kind of living sound organism in contact with the surrounding space, one that grasps in that space the energy necessary to stabilize itself, grow, and change.
So, that was how I moved on towards my more recent work. Some compositions are a significant testimony of this journey. For example, take the first string quartet (5 difference-sensitive circular interactions, 1997-98): it already had a strong relationship with the surrounding space, although clearly the instrumental material still had a predominant role. Another work, Texture-Multiple (begun in 1993), for small ensemble and electronics, represented for me a kind of long-lasting workshop (it didn't reach a final version until recently); born of a sketch for a smaller-scale work (Kairos, with saxophone and electronics, 1992), it soon 'conquered' the 'collective', the ensemble dimension (3-6 instruments), and then, with the mediation of real-time computer processing, it made contact with the surrounding space (since 1994). At each new performance of that work, I would try ideas concerning the interactions between human performance, machinery, and space that would later become central to the 'ecosystemic' pieces.
FP: The two works you mentioned call for a kind of multiple-level interaction: on the one hand, we have a written score which remains quite flexible in terms of how one plays the notated materials; on the other hand, there is a programmable DSP unit transforming all instrumental sounds; and furthermore, we have the room, whose resonances to the music somehow drive the computer processing that in turn affects the interactions among instrumentalists. So all components, one way or another, are constantly transforming each other, in a rather flexible or elastic temporal and spatial dimension.
AdS: Yes. Let's take Texture-Multiple as an example. The instrumental material is notated in separate, independent instrumental parts. These are similar among themselves (not identical, each tied to the specifics of the particular instrument), so all instrumentalists involved actually play the 'same' thing, in slightly different and flexible manners, seldom in synch. When the performance starts, the instrumentalists are not really an 'ensemble'; they are separate individuals, with no sense of community. The role of the electronics consists in bringing to their attention, as they play, the higher or lower degree of communion of their independent intents. The computer alters their sound, to different degrees, depending on features of the instrumental gestures. In turn, depending on the sonorities the computer gives them back, the instrumentalists may get to know better whether they are acting together or not, and accordingly change their playing, following simple interaction rules. So a sense of 'ensemble' gradually takes shape throughout the piece. The unity of the members is not taken for granted, is not pre-determined; it comes forth as they actually play, with the mediation of the electronics. I must add that, to some extent, the computer processing is also driven by specific features in the total sound of the room space; 'space', here, is an ineliminable liaison in the relations among the players involved. For some people, the compositional process in Texture-Multiple is reminiscent of Christian Wolff's music. Unlike in Wolff, however, the electronics intervenes to alter the instrumental sound, making a larger sound texture dynamically dependent on the peculiar resonance of the surrounding space, thus emphasizing the active, or pro-active, role of the room acoustics in the gathering of the ensemble community.
An important implication, not found in Wolff's music, is that, for good or bad, human relationships here are profoundly mediated by the technology. (Which is what happens in our daily life nowadays.)
FP: If I remember correctly, there is an 'attraction point', a high F-sharp, and the closer the instrumentalists get to that pitch, the less their sound is subject to electronic transformation.
AdS: Yes, that's a fair approximation. The F-sharp is quite frequent in the six instrumental parts, so it fills the resonant space. If it grows too much, and the room acoustics reinforce it, the computer will 'avoid' taking in more of it. That way, a bit of information that is relevant to the piece doesn't saturate the surrounding space (in a symbolic and musical sense, not signal-wise).
FP: I remember it well; it regulates the level of the signal sent to the recording buffer.
AdS: Indeed. A more general idea I work with in that piece is: the louder the instrumental material resonating in the room - or the denser and quicker the instrumentalists' gestures - the more heavily processed, 'granulated' ('spliced up' into tiny bits) and finally 'evaporated', and thus attenuated, the sound flow from the computer; until the moment when the grain density is so small that there is no sound anymore. The score binds the overall process to an overriding direction, so I know that sooner or later there will be the sense of 'communion' or 'common intent' that we were mentioning. And I know that sooner or later the ensemble members will play loud enough to inhibit any further electronic processing and reduce the computer sound to silence. To some extent, in that work the score notation predetermines an overall narrative: when a sense of ensemble is finally achieved, the total ensemble sound may be strong enough to silence the computer, but that very event negates the mediation they were leaning on. One can't really say how the network of interactions will develop during the performance, but one can be sure it will come to achieve that goal. As in other works of mine, a rather open interaction network operates under the spell of a higher-level force or guide (in this case, stipulated by the composer himself in his notation).
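As a rough illustration - my own hypothetical sketch, not Di Scipio's actual patch - the inverse relationship he describes (louder, denser instrumental input yielding sparser, quieter computer output, down to silence) could be caricatured as a simple mapping from input loudness to granulation parameters:

```python
import math

def inverse_mapping(input_rms, floor=1e-4):
    """Map instrumental loudness (linear RMS, 0..1) to hypothetical
    granulation parameters: the louder the room, the sparser and
    quieter the computer output, until no sound remains."""
    # Normalize loudness to 0..1 on a rough -80..0 dB scale.
    db = 20.0 * math.log10(max(input_rms, floor))
    loudness = min(max((db + 80.0) / 80.0, 0.0), 1.0)
    grain_density = (1.0 - loudness) * 50.0   # grains per second (arbitrary ceiling)
    output_gain = 1.0 - loudness              # attenuate as input grows
    return grain_density, output_gain
```

At full input loudness both grain density and gain reach zero - the "ensemble silences the computer" endpoint described above; the numeric ranges here are illustrative assumptions only.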
FP: And how is the musical writing conceived, in such a case? Is it linear? Does it develop in time? Do you read it from left to right?
AdS: In Texture-Multiple you do, yes. Except for local loops, repeats whose variable duration depends on the local intentions of each performer. In large sections of the string quartet score, you read from left to right, as you say; at a certain point, though, the four players settle on material that has to be reiterated (again a loop-based notation, but over much larger spans). At that point, their playing techniques vary depending on what they hear from the electronics - and what they hear is an articulated texture arising from the computer processing of their own sound. The four members have special rules of behavior: if, on a subjective basis, they hear the computer-processed sound as a rather dense and continuous texture, then they have to gradually slow their tempo down and decrease the sound level, until they come to silence, making the playing gesture but not touching the strings at all. Vice versa, if they hear a sparser texture, they keep playing, adding more material, eventually decreasing in level but speeding up the tempo (which provides the computer with more material to work with, reinforcing the rather foggy or dusty texture). The idea is that local behaviors compensate for the activity of the system, 'system' being understood in its entirety, including human beings, space, and the electroacoustic apparatus. Simple local mechanisms may become a quite intricate net at a global level. It gets really difficult to hear out the specific agency driving the performance - who affects whom, who has the lead and who is led. You could speak of an 'emerging agency'. In cybernetics and complexity theory, one speaks of distributed causality. The performance system is no longer decomposable into independent parts; it becomes a holon - to use an awful term put forth by some scientists.
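The local rules described above can be caricatured in a few lines. This is a deliberately crude sketch of the behavior, with hypothetical adjustment rates; it is not the actual score instruction:

```python
def quartet_rule(perceived_texture, tempo_bpm, level):
    """One step of the players' compensating rules (rates are my own
    assumptions): a 'dense' texture makes a player slow down and get
    quieter, toward silent mimed gestures; a 'sparse' one makes them
    speed up while still decreasing in level."""
    if perceived_texture == "dense":
        return tempo_bpm * 0.9, max(level - 0.1, 0.0)    # toward silence
    elif perceived_texture == "sparse":
        return tempo_bpm * 1.1, max(level - 0.05, 0.0)   # more material, faster
    return tempo_bpm, level  # undecided: keep playing as before
```

Iterating such rules across four players coupled through the electronics is what produces the "distributed causality" mentioned above: no single rule determines the global outcome.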
FP: Going back to sound, we could say that, with these works, we move from the usual situation where sound is raw material transformed or projected in space and time, to a peculiar situation where sound takes on an informational, communicative value, maybe more essential, and yet often subtle and feeble, as it eventually gets to thin noise events that verge on background noise, while at the same time affecting further gestures and developments in both the human and the machine components.
AdS: I agree with that. Sound sets the conditions of its own existence and development. Which actually brings us to the works in the Audible Ecosystemics project (2002-2005), where the approach is probably more fully crafted. The project includes a variety of concert pieces and sound installations. More recent works are further extensions of that project (e.g. a series of works named Modes of Interference). Sometimes I think the words Audible Ecosystemics refer more to a way or attitude of making sound art than to a set of pieces. Anyway, we are talking of live electronic solo works, i.e. works performed with specific live electronics set-ups, with no musical instrument involved. They explore sound materials existing in the given room, such as background noise, in different ways. Or Larsen tones (i.e. the accumulation of ambient noise mediated by the room acoustics and the electroacoustics utilized) deliberately caused by the 'electronic performer', i.e. the person or people in charge of preparing the equipment and tweaking the overall audio infrastructure. In the Background Noise Study (Audible Ecosystemics n.3a) you start from this 'nothing musical' (background noise) and make something out of it. If this 'something' is interesting and keeps your attention, then it can be defined as music. However, that chance is never granted beforehand: the performer does his or her best, particularly in the rehearsals, to establish a sufficiently varied system dynamics, the crucial prerequisite for something of interest to happen; yet the variables are so numerous - the audience walks in after rehearsals, so the room acoustics change; there might be accidental sound events in the room, or from outside, that were not there before… you know, all such marginal circumstances can modify how the performance turns out in the end. Now, strictly speaking, that is not a problem.
My directions as a composer, and the performer's own skills, cope with such circumstances and strive to ensure that some music eventually emerges thanks to such accidents. In fact, everything put there to make that happen is purely potential; the performance itself has to turn it into actuality, and that is only possible by feeding the process with some little energy, however musically insignificant.
I do have some works of a rather different kind. Take the Book of Flute Dynamics (also known by its Italian title, Per la meccanica dei flauti). It is not based on the kind of sonic inter-connections we have mentioned between the electroacoustics, the musical instruments, and the room acoustics. And still, it is another example of how you can work with the usually unwanted - in this particular case, with the several small noises a flutist makes without ever blowing into the flute, simply holding the instrument in her hands and lowering the keys, etc. Again, attention is turned to residual, hardly-noticeable noise, understood as the artifact traces left by human interaction with a piece of (mechanical) technology, the instrument, and the task is to work with that. A bit as in the Background Noise Study in the Vocal Tract (Audible Ecosystemics n.3b): the method may be different, yet the purpose is obviously of a similar kind. In the flute work the ‘space’ is a small tube of varying length, manipulated with a finite set of keys and fingers; in the Audible Ecosystemics works, on the other hand, the physical space, itself mechanically and culturally connoted, consists in the room environment where we set out to present the work. In the just-mentioned Background Noise Study in the Vocal Tract, the idea is to experience the resonances and unwanted noises of a smaller but changing room (a mouth) and the resonances and unwanted noise of a larger room (the concert hall).
FP: How do you experiment and gradually finalize an 'audible ecosystem'?
AdS: Things are often born out of trial and error. It can take quite a long time. Say I start with an idea about how sound should originate and develop, setting the minimum requirements for some sound to be there instead of silence.
Based on that, I slowly shape a process that, besides bringing forth some sound, articulates the thus-generated sound in time. This requires extensive experimentation. I assemble a small-scale set-up in my own studio, smaller than the one eventually to be set up in a concert hall or performance space, and live with it for months: trying it, listening, refining, testing it under different conditions, technical and environmental - during the day and the night, with open or closed windows, more or fewer cars in the distance, voices or birds in the street, a plane passing by, the telephone ringing, the neighbors' cheers, etc. And then different microphones and microphone placements, maybe soldering some speaker or piezo, and certainly refining the software, etc. Up to the point where I can see that the process works (sustains itself) and changes (generates significantly varied sound textures and patterns). Being satisfied with it doesn't mean that the whole thing works in a musically, aesthetically rewarding way: it simply means that it shows the ability to self-regulate for some time in an autonomous way, such that, fed a little noise, it can behave in a non-destructive way.
Now, that’s empirical evidence of what I mean when I say that music is never there before you make it, or that music doesn’t exist until it emerges into existence. It may sound like a philosophical statement about music, a statement from a non-objectivist and constructivist perspective, yet it's very practical, too. As a general criterion, I’m happy with the process I set up when, as a result of the inherent system dynamics, it unfolds through as many system states as possible: perceptually, that means you get a variety of textures of changing density, with several degrees of tactility to them, with internal timbre variations, changes across frequency regions, and variations in micro-rhythmical (granular, random or patterned) activity. Things are probably clearest when listening to a performance of Feedback Study (Audible Ecosystemics n.2a). Let me explain. I don’t usually pay too much attention to musical pitch and pitch structures. But in Feedback Study I can hardly avoid a sense that pitch is important, because the only sound-generating device there is a feedback loop causing Larsen tones, which often come with a clear pitch quality. Their frequency (or frequencies, as sometimes they come in clusters) depends on the room acoustics and the mic-to-speaker distance (as well as on the characteristics of the mic and speaker in electro-acoustic transduction). Therefore, one of the things in this work is how the system dynamics allows for developing different harmonic fields based on the Larsen tones: the variety and the redundancy in frequency regions and pitch relationships project the system dynamics into the dimension of pitch, making it audible to the ear.
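The dependence of Larsen-tone frequencies on mic-to-speaker distance can be sketched numerically. This is a simplified model of my own - real feedback frequencies also depend on transducer responses and room resonances, as noted above - in which candidate tones sit at frequencies whose period divides the total loop delay:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def larsen_candidates(distance_m, processing_delay_s=0.0, n=5):
    """First n candidate feedback (Larsen) frequencies for a simple
    mic-to-speaker loop: modes whose period fits the loop delay an
    integer number of times."""
    loop_delay = distance_m / SPEED_OF_SOUND + processing_delay_s
    return [k / loop_delay for k in range(1, n + 1)]

# Example: a 3.43 m mic-to-speaker gap gives a 10 ms loop delay,
# so the candidate tones fall at multiples of 100 Hz.
```

Moving the microphone, or adding processing latency, shifts this whole comb of candidates - one simple reason why each room and set-up yields different harmonic fields.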
At a different level of discourse, the general idea behind such works is that the identity of a work is captured in the array of determinate relationships and composed interactions - including the connections between microphones and loudspeakers, their placement in the room, how the software itself works, and the role the performers take on as they handle the gear. At the same time, the potential to express this identity lies in the variety of random stimuli coming from the hall during the performance, and in other, often random, particulars of the available electroacoustic equipment. My understanding and appreciation of the results is grounded less in aesthetic evaluation and more in the sense of a convergence, a coming together, of room, people involved, and the whole equipment (hardware and software).
I don't mean that the aesthetics of the sounding results is of little significance to me. Yet the focus of the experience is this convergence, this coming together of components all too often considered independent. The network of interactions I devise is usually open enough to the surrounding environment to change in time, wander, and develop; at the same time, it is closed enough onto itself to preserve its identity, retaining its structure notwithstanding the random events in the ambience. It gives something to the ambience, and it gets something from the ambience, in a truly structural coupling. Action and perception: two faces of the same coin, tossed in a determinate environment where some random events happen. This mutual exchange is what we seek; it's what we hopefully obtain during the performance, the richness. The goal is to provide an experience where everything is connected to everything else, in sound. Nothing, in a given space, is foreign to sound and hearing. The ear knows that nothing is disconnected, that nothing is neutral to what it hears.
FP: Turning to technology, how relevant for you are the computer programming environment, its flexibility, its peculiarities? And what role does it play?
AdS: Well, first it is important to stress that "technology" here is more than the computer and the software involved. As should be clear, critical is the array of transducers, be they loudspeakers, membrane microphones, piezos, or other sensors (I am fond of the accelerometers I was allowed to work with, two years ago in Berlin, for Untitled 2008 - Soundinstallation in two or more dismantled or abandoned rooms). Not to forget the mixing console, which I tend to consider a performing device. Essentially, all the analog gear included is really crucial, so one has to consider it part of one's own designs. And I am not necessarily referring to the quality of high-end professional equipment: even lousy speakers and cheap microphones can do a good job, if used in sensible and informed ways. What is important is your awareness of the role and function you assign to them as components in a larger infrastructure.
Now, as far as software is concerned, I use real-time DSP programming environments that allow me to develop a variety of automated functions. "Automated" is not the same as "predetermined"; it means "able to extract information from the signal and to turn that data into variable control signals". I make a distinction between what is usually named "audio signal processing" and what I often refer to as "control signal processing". Sometimes my computer patches are more voluminous and complicated in their control-signal-processing subpatches than in the audio-signal-processing ones. Sound processing and transformation can be kept rather simple and still yield quite interesting results, if driven and articulated by properly shaped, "adaptive" control signals generated in real time.
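A minimal example of what "control signal processing" can mean in practice - a generic sketch of mine, not Di Scipio's code - is an envelope follower: it derives a slowly-varying control signal from the audio signal itself, which can then drive processing parameters:

```python
import math

def envelope_follower(samples, sr, attack_s=0.01, release_s=0.2):
    """One-pole envelope follower: turns an audio signal into a
    control signal that rises quickly and falls slowly."""
    a_coef = math.exp(-1.0 / (attack_s * sr))
    r_coef = math.exp(-1.0 / (release_s * sr))
    env, out = 0.0, []
    for x in samples:
        rect = abs(x)                       # full-wave rectification
        coef = a_coef if rect > env else r_coef
        env = coef * env + (1.0 - coef) * rect
        out.append(env)
    return out
```

Rescaling or thresholding such an envelope yields "adaptive" control signals of the kind mentioned above - for instance, a processing gain that shrinks as the room gets louder.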
In this regard, I find Kyma a truly remarkable computer workstation, one that I have been using for 15 years now (fifteen!). It is extremely powerful, and the programming environment is efficient both for rapid prototyping and for deeper programming technicalities. I also work with PD. In my "scores" (instruction booklets?), I usually document all necessary signal processing in a machine-independent notation, partly graphic, partly verbal, eventually referring the reader to well-known digital signal processing technicalities. That allows other people to re-create the algorithms using the programming languages and computer systems they prefer. Based on such documentation, works of mine have been performed by colleagues who work with software I tend to avoid (like Max/MSP). From what I can hear, the results are rather consistent with my own performances.
FP: It seems that you have a modular approach to preparing your code, avoiding redundancy and creating meaningful connections among extracted signal features, or their psychoacoustic equivalents.
AdS: I spend quite some time designing feature-extraction algorithms. I have an ‘arsenal’ of them. Also important is the array of mapping functions from extracted data to control signals, to be applied in ways consistent with their perceptual reality. I change or refine these software modules, tuning them depending on context - not only the compositional or musical context, but also the physical context. Suppose we have the computer track, during a performance, the most resonant frequency in the total room sound (via microphones). Suppose we use this data to attenuate that very frequency in the computer output signal routed to the speakers, thus compensating between input magnitude and output. One aim of that could be to avoid or limit strong feedback peaks. Now, if you do that in a 10 m × 10 m room, the code you come up with may not work in much smaller rooms. In the installation Stanze Private (ecosystemic sound construction), the rooms I work with are glass bottles and vessels, with volumes in the range of a few cubic centimeters. In that case, the trackers necessary to establish the inverse amplitude relationship I was describing have to be tuned to work in a very peculiar way; their reaction time must be much shorter, just because of the mere physical dimensions and reflective properties of the room surfaces. The general criterion is the same (an inverse i/o relationship), but it doesn't work regardless of dimension. The timing of the ‘followers’ or ‘trackers’ must be properly studied. Therefore, in general, at each new project I may recycle tools I have already developed, but depending on many factors, specific extensions, implementations, or refinements are also necessary.
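The two ingredients just described - a resonant-frequency tracker and a reaction time scaled to the physical size of the 'room' - can be sketched as follows. The scaling rule (a few acoustic traversals of the space) is my own assumption; Di Scipio's actual trackers are surely more refined:

```python
import cmath, math

def dominant_frequency(frame, sr):
    """Naive DFT peak-pick: estimate the most resonant frequency in
    one analysis frame of the captured room sound."""
    n = len(frame)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        acc = sum(frame[i] * cmath.exp(-2j * math.pi * k * i / n)
                  for i in range(n))
        if abs(acc) > best_mag:
            best_k, best_mag = k, abs(acc)
    return best_k * sr / n

def tracker_coef(room_dim_m, update_rate_hz, c=343.0):
    """Smoothing coefficient for the tracker: let its reaction time be
    a few acoustic traversals of the space, so a bottle-sized 'room'
    gets a far faster follower than a 10 m hall."""
    reaction_s = 4.0 * room_dim_m / c
    return math.exp(-1.0 / max(reaction_s * update_rate_hz, 1e-9))
```

Attenuating the tracked frequency in the output then closes the inverse i/o loop described above; only the follower's time constant changes between hall and bottle.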
FP: Let’s go back to the performance paradigm: it is clear that your music - or your work more generally - lives on symbiotic relationships between the performance space, the sound events produced in it, and the people that establish a connection with that space. That implies a social dimension, which is essential to make the experience ‘sensible and REAL’. Nowadays we live in a historical timeframe where the experience of the real is often replaced by the paradigm of the simulacrum, an extreme, sometimes violent virtualization of life, where real, physical space is depicted almost as a social problem. People prefer to interact through virtual social networks more often than in the ‘real world’. This being so, I wonder: in a society where experience is becoming more and more virtual (literally reduced to purely binary information), what will happen to your works, which need a real venue and environment in order to exist?
AdS: I am myself interested in research dealing with how we perceive space and all that is around us in space. As I see it, that is done in two ways: either by observing what happens in the very moment of lived experience, here and now; or via simulation tasks and technologies, drawing on as many aspects of perception as possible among those having a role (biological, biocybernetic, ecological). The latter approach, pursued within the tidy walls of research labs, leads to virtual reality. Thanks to it, it is possible to expand our knowledge of what is and what isn’t relevant to the process of human perception. Now, in my view, it is important not to mistake the data we gather from the scientific approach for unique and unambiguous representations of reality. They only add a bit of rationalistic analysis of what it means to be living beings. It may be great to be able to synthesize virtual spaces and use this technology to ‘travel’ to nonexistent spaces, or anyway to spaces other than the space where the living body is. But, again, that's only good for gaining a bit of knowledge of the organism’s behavior. Once we have that knowledge, we must turn to real life and see how it can be a relevant part of lived experience. A ‘fake’ life, a virtual dimension, may be fine for entertainment purposes - with entertainment mass media, we always shift from perceiving the content of human communications to perceiving the medium presumed to communicate that content. That is only interesting if you strive to appropriate the medium, or even to design the medium. Otherwise it is of little interest, and even verges on the totalitarian when sold as the predominant or exclusive manner of human communication.
My opinion on these matters is highly conflictual and critical: while rational knowledge is acceptable and desirable, it must not prevent people from ‘feeling’ and experiencing reality. That's all the more true when speaking of artistic endeavours. The risk is that setting up for ourselves an utterly synthetic world, in the name of a kind of body-less notion of aesthetics, leads in fact to the opposite: the body is ‘anaesthetized’, not empowered (as some people claim instead). The triumph of re-presentation kills all presence. Let me say it: ‘too much aesthetics anaesthetizes’.
FP: What would happen to your works if one day there were no longer any possibility of performing them in a socially shared space? Where could they migrate, and how could they reconfigure themselves?
AdS: If one day there were no more transducers (I mean microphones, loudspeakers, the tympanic membrane of the human ear, even the skin maybe…) acting as interfaces between air pressure waves and nervous-electrical signals, my work and the work of a lot of other people would stop existing; it would cease. So be it! It has happened so many times in history. The music of the British virginalists, a few centuries ago, disappeared with the extinction of their very instrument (the virginal, existing in several fashions across Europe). Then, just as happens today with Renaissance music, at some point so-called 'philologically informed' interpretative approaches would be proposed, and these older technologies would be revived and built again.
FP: A last question. What is the Utopia of your work?
AdS: …mmmhh… hard to say. Well, actually there is one thing! For quite some time I have been living with this fixed idea in my mind, just a concept for the moment, as I don’t possess enough competence to make it real. I envision a sound-generating device capable of producing, besides sound, the electricity needed to sustain itself as a sound-generating device. A kind of ‘aural living being’ which, through a closed circle, would use its own vibrations, or the air vibrations it causes, to provide the power supply necessary for its own functioning. This ‘aural being’ probably wouldn’t be musically very interesting; it would be a kind of ecologically self-sufficient or self-sustaining device. I sketched a little drawing of this concept a few years ago, in Berlin, as my signature in a friend's guestbook, as a gift. Your question makes me think that that's a kind of Utopia underlying my work. Yes, the draft lies in Folkmar's guestbook, somewhere in Berlin! And it's not a draft of a musical Utopia…
Ben Burtt, who helped preserve San Anselmo history in films, has been named a Silver Award winner. Burtt documented the local floods of 1982 and 2005 in “The Great Flood of 1982” and “12/31.” He says he “collected footage from 15 or 16 different people and added that to my own. The film shows people being rescued, basements being flooded and so on.”
Burtt, who says he’s always had “a big interest in history,” first became interested in San Anselmo’s while working on the first “Star Wars” film with George Lucas. He became fascinated with pictures he found in the local library that showed some of the changes the town had gone through.
Burtt also made a lengthy audio recording of the town’s historical walking tour, complete with sounds of old San Anselmo. That recording, a celebration of the town’s 2007 centennial, is available for download from the museum’s website.
Before Skywalker Sound there was Lucasfilm's Sprocket Systems, which opened in 1979 at 321 San Anselmo Avenue in San Anselmo. Film editors worked upstairs on "Raiders of the Lost Ark" and "The Empire Strikes Back" while sound editors downstairs worked on "Alien" and "E.T." Kentfield resident Pat Walsh, discovered while shopping at Seawood Photo, provided the voice for the lovable alien, with the help of actress Debra Winger. Sprocket's parking lot was also notable: It's where Harrison Ford practiced snapping a bullwhip for his role in "Raiders." Sprocket's work on "Return of the Jedi" came to a halt when the Flood of 1982 ruined equipment. The division moved to Lucasfilm's Kerner complex in March of that year.
'The Prisoner of Zenda' - Playing at the Rafael Film Center on June 13th, 2010 -
Craig Barron, Oscar®-winning visual effects supervisor and Academy governor, and Ben Burtt, Oscar-winning sound designer, will introduce a rare screening of the 1937 adventure-romance 'The Prisoner of Zenda'. Burtt will demonstrate how the sound effects from the film's famous swordfights were achieved.
Lee Unkrich is a longtime member of the creative team at Pixar, where he started in 1994 as a film editor. Unkrich graduated from the University of Southern California School of Cinematic Arts in 1990. Before joining Pixar, Unkrich worked for several years in television as an editor and director. He came to Pixar on a temporary assignment during the development of Toy Story and he later moved into directing, as co-director of Toy Story 2, Monsters, Inc., and Finding Nemo. His upcoming work Toy Story 3 will be released on June 18, 2010.
On Monday, Sept. 7, 2009, the 66th Venice Film Festival hosted a Pixar Animation Master Class on storytelling. Here's an excerpt from Unkrich's speech during the special panel on animation.
[George Lucas presents the Golden Lion to John Lasseter]
"What is the difference between editing live action and editing animation? In live action you shoot first and then edit the footage together after you shoot. But in animation, we edit first, then create the footage. Or more accurately, we edit before, during, and after the film is shot. The best analogy I could come up with is that editing an animated film is like building a sand castle, one grain at a time. Thousands of bits to be assembled. We have to focus on innumerable details, frame by frame, moment by moment, while also keeping an eye on the big picture and how the movie is playing as a whole.
In the end, the editor's responsibility is the same in live action and animation: he manages the narrative structure and flow of the film and orchestrates the sound and visuals to create an engaging experience for the audience.
Editing an animated film begins with a first tiny piece, and that piece is a drawing, a storyboard. That first drawing is followed by a second, a third… a hundred thousand storyboards. The storyboards are edited together into a story reel. It's a working document, an extension of the writing process, a way to judge how the film is flowing. A story reel done well provides the audience with a nearly complete experience that is only going to get better when the film is animated.
The editor's first task is to sequence all those hundreds of thousands of storyboards together, to give the appropriate timing to the different moments of the film." - Lee Unkrich
[Lee Unkrich editing Toy Story 3 on Avid Media Composer]
"Eventually the roles in the film will be cast with real actors. The temporary dialogue (performed by people we work with at Pixar) is then replaced by the performances of real actors. In addition to editing vocal performances, we also edit sound effects, and we do this to surround the scene in a sonic environment and give it more texture and believability. It takes a lot of time to do all the sound work, but I think it is very, very important to help keep the audience wrapped up in a spell, like the feeling of watching a real film. We also take music from existing movie soundtracks and edit that music against the scene.
So even if the music in the finished film will be different from what we have put in, the temp music should support the desired pace and emotional tone of what we are going for. The wrong piece of temp music can ruin hundreds of hours of meticulous editing, but the right piece of music can elevate even the simplest, crudest story reel to poetry. When it's all combined with dialogue and sound effects, hopefully we'll have a good sense of how the scene is playing. If all goes well, the scene will end up in the finished film." - Lee Unkrich
[Scoring with Randy Newman and the Toy Story 3 Orchestra]
"Animation is actually a combination: it's the voice actor plus the animator. Together they create the character. In early exploration in animation we sometimes take lines of dialogue from movies with actors we are interested in. The choice of the actor really does affect the personality of the character. So the voice casting is so important. First of all, we always get actors who can be themselves. No matter how big a star they are, the important thing is how good an actor they are.
The other thing we look for is improvisation; we want the actor to make the part their own. There is no better improvisational actor than Tom Hanks. In animation every little detail is worked out and thought through in advance, so the thing you don't often get is spontaneity. The one place where we can get spontaneity is the recording session." - John Lasseter
"I’d like to drill down in a little more detail into one aspect of cutting that is particularly close to me, and that's dialogue editing. It is a vital part of editing, especially in animated film, but in the end it is usually completely transparent to the audience. The vocal performances are recorded over several years, and the actors are very rarely in the recording studio together. That's why the editor has to take all these different performances and edit them together to create the illusion of spontaneity and real interaction." - Lee Unkrich
[Lee Unkrich and Tom Hanks at Woody sessions on TS3]
"Many of the lines are comprised of many little pieces from several different takes, some of them recorded at multiple recording sessions. One of the luxuries of editing for animation is that we have a degree of control over how to shape the actors' performances that is nearly impossible in live action. A single performance can be compiled from multiple takes recorded months apart.
It is like I am cooking: I am taking the finest and freshest ingredients from all the performances of the actors and reducing them to a rich demi-glace. How does an editor know which pieces are the best pieces? Well, in the end the most important tool in the editor's arsenal is his or her intuition. There is no better gauge than whether the film feels right. It is not easy to explain your intuition or defend it to anyone, but a great editor, and a filmmaker in general, needs to listen to their intuition and learn to trust it. Editing is not just about deciding when to cut but often when not to cut. Sometimes the most powerful moments come from the editor just sitting back and allowing the scene to play." - Lee Unkrich
At ShoWest 2010, Dolby announced it is working with Walt Disney Pictures and Pixar Animation Studios to deliver a new audio format, Dolby Surround 7.1. Disney and Pixar have stated that Dolby Surround 7.1 will be launched in select theatres with the release of Toy Story 3 in 3D this June.
Dolby Surround 7.1 provides content creators with four surround zones to better orchestrate audio channels in a movie theatre environment. The four surround zones incorporate the traditional Left Surround and Right Surround with new Back Surround Left and Back Surround Right zones. The addition of the two Back Surround zones enhances directionality when panning 360 degrees around the theatre. The Dolby Surround 7.1 format comprises eight channels of audio with the following channel layout: Left, Center, Right, Low-Frequency Effects (LFE), Left Surround, Right Surround, Back Surround Left (new), and Back Surround Right (new).
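To make the layout concrete, here is a minimal sketch in Python (not Dolby's tooling, just an illustration): it lists the eight channels named above and includes a naive helper that classifies a pan angle into one of the four surround zones. The specific zone boundary angles are assumptions chosen for demonstration, not part of the Dolby specification.

```python
# The 8-channel Dolby Surround 7.1 layout described above.
DOLBY_SURROUND_7_1 = [
    "Left", "Center", "Right", "LFE",
    "Left Surround", "Right Surround",
    "Back Surround Left", "Back Surround Right",
]

def surround_zone(azimuth_deg):
    """Classify a pan angle into one of the four surround zones.

    azimuth_deg: 0 = screen center, increasing clockwise
    (listener's right). The 110/180/250-degree boundaries are
    illustrative assumptions, not Dolby-specified values.
    """
    a = azimuth_deg % 360
    if a <= 110:
        return "Right Surround"
    if a < 180:
        return "Back Surround Right"
    if a < 250:
        return "Back Surround Left"
    return "Left Surround"
```

With the two Back Surround zones, a sound panned from a listener's right side (90 degrees) to directly behind (180 degrees) passes through a distinct rear zone instead of jumping straight from right to left surround, which is the directionality improvement the format advertises.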