Sound Researchers and Practitioners Convene to Discuss 'symbolic sounds' at #KISS10
Can music and sound-effects be symbolic?
Sound designers, composers, sound artists, filmmakers, researchers and others sharing an intense interest in sound will be convening in Vienna (Austria) from the 24th through the 26th of September to discuss this and other, similarly controversial, issues at the annual Kyma International Sound Symposium (#KISS10).
Lively discussions, electronic/computer music performances, interactive installations, demos, workshops, and paper sessions will be held in the newly renovated Casino Baumgarten, a spectacular ballroom built in 1890, and the Rhiz Bar Modern, a showcase for experimental media in Vienna.
[view from the Stage at Casino Baumgarten]
Join in the free on-line discussion
Anyone having an intense interest in sound is invited to join in the free on-line discussion at philosophyofsound.ning.com. Sign in and introduce yourself by sharing your earliest sonic memory. Discussions on sound, symbol, and meaning have already begun and will continue during and after the symposium. You can participate on line whether or not you are able to attend the symposium in Vienna.
To register for #KISS10:
The registration deadline for participating in #KISS10 is 13 September 2010. The participation fee of € 90 (€ 40 for students) includes the 3-day symposium, morning/afternoon refreshments, and 2 evening concerts.
Background
The Kyma International Sound Symposium (KISS) is an annual conclave of current and potential Kyma practitioners who come together to learn, to share, to meet, to discuss, and to enjoy a lively exchange of ideas, sounds, and music! Kyma is a sound design environment created by Symbolic Sound Corporation. This year, the KISS symposium is being organized by Wiener Klangwerkstatt in cooperation with Symbolic Sound and with the support of Analog Audio Association Austria, Preiser Records, and the Austrian Federal Ministry of Science and Research.
by Federico Placidi - U.S.O. Project, June 2010 / English translation: Valeria Grillo
The works of Agostino Di Scipio include compositions for instrumentalists and electronics as well as sound installations. Some of these explore non-conventional approaches to the generation and transmission of sound, with a special focus on phenomena of noise, turbulence and emergence. Other works implement dynamical networks of live sonic interactions between performers, machines, and environments (e.g. his Audible Ecosystemics project).
FP: Let's talk about your early works, before the 'ecosystemic' paradigm. How different were they from your more recent work?
AdS: Well, I had a very early phase when I gathered as much knowledge as possible about computer music techniques and digital signal processing. That included a special interest in algorithmic composition, too, the question being how I could formalize musical gestures of use in writing for either conventional music instruments or electronics. In retrospect, I view that time as one of broad exploration of sonic materials, which eventually took me, later on, to focus on granular, textural and noisy materials, of a kind I later described as "sound dusts". It took me, in short, to micro-composition, i.e. to a focus on the finest temporal scales in sound - with various degrees of density and consistency among sonic grains or particles. The idea was that micro-composition would let macro-level, gestural properties emerge at larger time scales. I tried to shape a process in sound in such a way that lower-level processes would bring forth larger sonic gestures.
Along this path, I even developed new synthesis techniques based on the mathematics of nonlinear dynamical systems, as found in so-called 'chaos theory' - that was at the end of the 1980s and in the early 1990s, when chaos (or better: the mathematics of nonlinear dynamical systems) was not as popular as it later became. I came across it and studied it quite in depth for some time.
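(Since the interview only names the mathematics, here is a minimal Python sketch of the general idea - deriving an audio signal by iterating a nonlinear map, using the logistic map purely as an illustration. This is not Di Scipio's actual technique; the sample rate, parameter values and output file name are arbitrary assumptions.)

```python
# Illustrative sketch only: audio from an iterated nonlinear map.
# The logistic map x <- r*x*(1-x) is chaotic for r around 3.6-4.0;
# iterating it at audio rate yields a broadband, noisy signal.
import struct
import wave

SR = 44100        # sample rate (Hz), an arbitrary assumption
DUR = 2.0         # duration in seconds
r = 3.7           # map parameter in the chaotic regime
x = 0.5           # initial state

samples = []
for _ in range(int(SR * DUR)):
    x = r * x * (1.0 - x)          # one iteration of the map
    samples.append(2.0 * x - 1.0)  # rescale (0,1) -> (-1,1)

with wave.open("chaos_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)              # 16-bit samples
    f.setframerate(SR)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```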
Now, all those efforts usually provided me with sound materials for studio works. But at some point I felt a need to find my own way into live electronics performance. I had stayed far removed from that, because I was completely dissatisfied with how live electronics, including real-time interactive computer music, was approached at the time. Or, at least, I didn't want to follow those paths...
FP: What made you unhappy with extant approaches?
AdS: It was mainly because of the obvious linearity in the unfolding of time. And because peculiar electroacoustic possibilities and artifacts - for example Larsen tones (feedback sounds) - were understood exclusively as a problem in audio engineering, foreign to the desired sounding results. I felt they could instead be taken as the very resources to be controlled and exploited in a truly live electroacoustic situation. I was dissatisfied with the usual notion that the technology was there to 'neutrally' represent and convey musical signals, as if the tools and their idiosyncrasies were not part of the experience; instead of pretending to set them aside, I felt they could be studied and turned into sonic resources, the very medium of experience. Then, and maybe more importantly, I realized that the processes I was dealing with in a formalized manner were abstract models, stripped of any surrounding space, separate from any source of noise and risk in their own unfolding. They were models, as it were, not the thing itself. That confined me to performance as representation, as the (imperfect) replication of an ideal. I realized I could use the electroacoustic equipment and computers to implement dynamical processes, to make real nonlinear systems exposed to ambient noise and the idiosyncrasies of the electroacoustics, exposed to these sources of uncertainty and change. No more software modelling of abstract dynamical systems, but the implementation of a self-regulating sounding system in contact with the surrounding environment. In a way, this was a move from models of existing, and usually extra-musical, systems or processes to the design of a kind of living sound organism in contact with the surrounding space, one that draws from that space the energy necessary to stabilize itself, grow and change.
So, that was how I moved on towards my more recent work. Some compositions are significant testimony to this journey. For example, take the first string quartet (5 difference-sensitive circular interactions, 1997-98): it already had a strong relationship with the surrounding space, although the instrumental material clearly still had a predominant role. Another work, Texture-Multiple (started 1993), for small ensemble and electronics, represented for me a kind of long-lasting workshop (it did not reach a final version until recently); born of a sketch for a smaller-scale work (Kairos, with saxophone and electronics, 1992), it soon 'conquered' the 'collective', ensemble dimension (3-6 instruments), and then, with the mediation of real-time computer processing, it came into contact with the surrounding space (from 1994 on). At each new performance of that work, I would try out ideas concerning the interactions between human performance, machinery, and space that would later become central to the 'ecosystemic' pieces.
FP: The two works you mentioned call for a kind of multiple-level interaction: on the one hand, we have a written score which remains quite flexible in terms of how one plays the notated materials; on the other hand, there is a programmable DSP unit transforming all instrumental sounds; and furthermore, we have the room, whose resonances to the music somehow drive the computer processing that in turn affects the interactions among instrumentalists. So all components, one way or another, are constantly transforming each other, in a rather flexible or elastic temporal and spatial dimension.
AdS: Yes. Let's pick Texture-Multiple as an example. The instrumental material is notated in separate, independent instrumental parts. These are similar among themselves (not identical; each is tied to the specifics of the particular instrument), so all the instrumentalists involved actually play the 'same' thing, in slightly different and flexible manners, seldom in synch. When the performance starts, the instrumentalists are not really an 'ensemble'; they are separate individuals, with no sense of community. The role of the electronics consists in bringing to their attention, as they play, the higher or lower degree of communion of their independent intents. The computer alters their sound, to different degrees, depending on features of the instrumental gestures. In turn, depending on the sonorities the computer gives them back, the instrumentalists may get to know better whether they are acting together or not, and accordingly change their playing, following simple interaction rules. So a sense of 'ensemble' gradually takes shape throughout the piece. The unity of the members is not taken for granted, is not pre-determined; it comes forth as they actually play, with the mediation of the electronics. I must add that, to some extent, the computer processing is also driven by specific features in the total sound of the room space; 'space', here, is an ineliminable liaison in the relations among the players involved. For some people, the compositional process in Texture-Multiple is reminiscent of Christian Wolff's music. However, unlike in Wolff, the electronics intervenes to alter the instrumental sound, making a larger sound texture that dynamically depends on the peculiar resonance of the surrounding space, thus emphasizing the active, or pro-active, role of the room acoustics in the gathering of the ensemble community. An important implication, not found in Wolff's music, is that here, for good or ill, human relationships are profoundly mediated by technology. (Which is what happens in our daily life nowadays.)
FP: If I remember correctly, there is an 'attraction point', a high F-sharp, and the closer the instrumentalists get to that pitch, the less their sound is subject to electronic transformation.
AdS: Yes, that's a fair approximation. The F-sharp is quite frequent in the six instrumental parts, so it fills the resonant space. If it grows too much, and the room acoustics reinforce it, the computer will 'avoid' taking in more of it. That way, a bit of information that is relevant to the piece doesn't saturate the surrounding space (saturation understood symbolically and musically, not signal-wise).
FP: I remember it well; it regulates the level of the signal sent to the recording buffer.
AdS: Indeed, a more general idea I work with in that piece is: the louder the instrumental material resonating in the room - or the denser and quicker the instrumentalists' gestures - the more heavily processed, 'granulated' ('spliced up' into tiny bits) and finally 'evaporated', and thus attenuated, the sound flow from the computer; until the moment when grain density is so low that there is no sound any more. The score binds the overall process to an overriding direction, so I know that sooner or later there will be the sense of 'communion' or 'common intent' that we were mentioning. And I know that sooner or later the ensemble members will play loudly enough to inhibit any further electronic processing and reduce the computer sound to silence. To some extent, in that work the score notation predetermines an overall narrative: when a sense of ensemble is finally achieved, the total ensemble sound may be strong enough to silence the computer, but that very event negates the mediation they were leaning on. One can't really say how the network of interactions will develop during a given performance, but one can be sure it will eventually achieve the goal. As in other works of mine, a rather open interaction network operates under the spell of a higher-level force or guide (in this case, stipulated by the composer himself in his notation).
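(To make that inverse relationship concrete, here is a schematic Python sketch - my own illustration, not the actual patch: the louder the incoming room signal, the sparser and quieter the granular output, until the grain stream 'evaporates'. The function names, linear curves and cutoff value are hypothetical.)

```python
# Schematic illustration of the inverse mapping described above.
# Assumes `frame` is a numpy array of audio samples in [-1.0, 1.0].
import numpy as np

def rms(frame):
    """Root-mean-square level of one analysis frame (0.0 .. ~1.0)."""
    return float(np.sqrt(np.mean(frame ** 2)))

def grain_controls(frame, max_density=200.0):
    """Louder room sound -> fewer grains per second and lower gain."""
    level = min(1.0, rms(frame))
    density = max_density * (1.0 - level)   # grains per second
    gain = (1.0 - level) ** 2               # output attenuation curve
    if density < 1.0:                       # the stream 'evaporates'
        density, gain = 0.0, 0.0
    return density, gain
```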
FP: And how is the musical writing conceived of, in such a case? Is it linear? Does it develop in time? Do you read it from left to right?
AdS: In Texture-Multiple you do, yes. Except for local loops, repeats whose variable duration depends on the local intentions of each performer. In large sections of the string quartet score, you read from left to right, as you say; at a certain point, though, the four players settle on material that has to be reiterated (again a loop-based notation, but over much larger spans). At that point, their playing techniques vary depending on what they hear from the electronics - and what they hear is an articulated texture arising from the computer processing of their own sound. The four members have special rules of behavior: if, on a subjective basis, they hear the computer-processed sound as a rather dense and continuous texture, then they have to gradually slow their tempo down and decrease the sound level, until they come to silence, making the playing gesture without touching the strings at all. Vice versa, if they hear a sparser texture, they keep playing, adding more material, eventually decreasing in level but speeding up the tempo (which provides the computer with more material to work with, reinforcing the rather foggy or dusty texture). The idea is that local behaviors compensate for the activity of the system, 'system' being understood in its entirety, including human beings, space, and the electroacoustic apparatus. Simple local mechanisms may become a quite intricate net at a global level. It gets really difficult to hear out the specific agency driving the performance - who affects whom, who leads and who is led. You could speak of an 'emerging agency'. In cybernetics and complexity theory, one speaks of distributed causality. The performance system is no longer decomposable into independent parts; it becomes a holon - to use an awful term put forth by some scientists.
FP: Going back to sound: with these works, we move from the usual situation where sound is raw material transformed or projected in space and time, to a peculiar situation where sound takes on an informational, communicative value - maybe more essential, yet often subtle and feeble, as it eventually thins to noise events verging on background noise - while at the same time affecting further gestures and developments in both the human and the machine components.
AdS: I agree with that. Sound sets the conditions of its own existence and development. Which actually brings us to the works in the Audible Ecosystemics project (2002-2005), where the approach is probably most fully crafted. The project includes a variety of concert pieces and sound installations. More recent works are like further extensions of that project (e.g. a series of works named Modes of Interference). Sometimes I think that the words Audible Ecosystemics refer more to a way or attitude of making sound art than to a set of pieces. Anyway, we are talking of live electronic solo works, i.e. works performed with specific live-electronics set-ups, with no music instrument involved. They explore sound materials existing in the given room, such as background noise, in different ways. Or Larsen tones (i.e. the accumulation of ambience noise mediated by the room acoustics and the electroacoustics utilized) deliberately caused by the 'electronic performer', i.e. the person or people in charge of preparing the equipment and tweaking the overall audio infrastructure. In the Background Noise Study (Audible Ecosystemics n.3a) you start from this 'nothing musical' (background noise) and make something out of it. If this 'something' is interesting and keeps your attention, then it can be defined as music. However, that chance is never granted beforehand: the performer does his/her best, particularly in the rehearsals, to establish a sufficiently varied system dynamics, the crucial prerequisite for something of interest to happen; yet the variables are so numerous - the audience walks in after rehearsals, so the room acoustics change; there might be some accidental sound events in the room, or from outside, that were not there before… you know, all such marginal circumstances can modify how the performance turns out in the end. Now, strictly speaking, that is not a problem. My directions as a composer, and the performer's own skills, cope with such circumstances and strive to ensure that some music eventually emerges thanks to such accidents. In fact, everything set up to make that happen is purely potential; the performance itself has to turn it into actuality, and that is only possible by feeding the process with a little energy, however musically insignificant.
I do have some works of a rather different kind. Take the Book of Flute Dynamics (also known under its Italian title, Per la meccanica dei flauti). It is not based on the kind of sonic inter-connections we have mentioned between the electroacoustics, the musical instruments and the room acoustics. And still, it is another example of how you can work with the usually unwanted - in this particular case, with the several small noises a flutist makes without ever blowing into the flute, simply holding the instrument in her hands, lowering the keys, etc. Again, attention is turned to the residual, to hardly noticeable noise, understood as artifact traces left by human interaction with a piece of (mechanical) technology, the instrument - and to working with that. A bit as in the Background Noise Study in the Vocal Tract (Audible Ecosystemics n.3b): the method may be different, yet the purpose is obviously of a similar kind. In the flute work the 'space' is a small tube of varying length, manipulated with a finite set of keys and fingers; in the Audible Ecosystemics works, on the other hand, the physical space, itself mechanically and culturally connoted, consists in the room environment where we set out to present the work. In the just-mentioned Background Noise Study in the Vocal Tract, the idea is to experience the resonances and unwanted noises of a smaller but changing room (a mouth) and the resonances and unwanted noise of a larger room (a concert hall).
FP: How do you experiment and gradually finalize an 'audible ecosystem'?
AdS: Things are often born out of trial and error. It can take quite a long time. Say, I start with an idea about how sound should originate and develop, setting the minimum requirements for some sound to be there instead of silence.
Based on that, I slowly shape a process that, besides bringing forth some sound, articulates the sound thus generated in time. This requires extensive experimentation. I assemble a small-scale set-up in my own studio, smaller than the one eventually to be set in a concert hall or performance space, and live with it for months - trying it, listening, refining, testing it under different conditions, technical and environmental: during the day and the night, with open or closed windows, more or fewer cars in the distance, voices or birds in the street, a plane passing by, the telephone ringing, the neighbors' cheers, etc. And then different microphones and microphone placements, maybe soldering some speaker or piezo, and certainly refining the software, etc. Up to the point where I can see that the process works (sustains itself) and changes (generates significantly varied sound textures and patterns). Being satisfied with it doesn't mean that the whole thing works in a musically, aesthetically rewarding way: it simply means that it shows the ability to self-regulate for some time in an autonomous way, such that, fed a little noise, it behaves in a non-destructive way.
Now, that's empirical evidence of what I mean when I say that music is never there before you make it, or that music doesn't exist until it emerges into existence. It may sound like a philosophical statement about music, a statement from a non-objectivist and constructivist perspective, yet it's very practical, too. As a general criterion, I'm happy with the process I set up when, as a result of the inherent system dynamics, it unfolds through as many system states as possible: perceptually, that means you get a variety of textures of changing density, with several degrees of tactility to them, with internal timbre variations, changes across frequency regions, and variations in micro-rhythmical (granular, random or patterned) activity. Things are probably clearest when listening to a performance of Feedback Study (Audible Ecosystemics n.2a). Let me explain. I don't usually pay too much attention to musical pitch and pitch structures. But in Feedback Study I can hardly avoid a sense that pitch is important, because the only sound-generating device there is a feedback loop causing Larsen tones, which often come with a clear pitch quality. Their frequency (or frequencies, as sometimes they come in clusters) depends on the room acoustics and the mic-to-speaker distance (as well as on the mic and speaker characteristics in electro-acoustic transduction). Therefore, one of the things in this work is how the system dynamics allows for developing different harmonic fields based on the Larsen tones: the variety and the redundancy in frequency regions and pitch relationships project the system dynamics into the dimension of pitch, making it audible to the ear.
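(A rough physical gloss, added here for clarity and not part of the composer's formulation: a feedback loop can sustain a Larsen tone only where the round-trip gain exceeds one and the round-trip phase is a whole number of cycles. Assuming a single acoustic path of length d, speed of sound c, and flat-phase electronics, the candidate frequencies are spaced by roughly c/d:)

```latex
\tau = \frac{d}{c}, \qquad
2\pi f_n \tau = 2\pi n \;\; (n = 1, 2, 3, \ldots)
\quad\Longrightarrow\quad
f_n \approx n\,\frac{c}{d}
```

(Room reflections and the transducers' frequency responses then select which of these candidates actually grow, which is why the tones shift with microphone and speaker placement.)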
At a different level of discourse, the general idea behind such works is that the identity of a work is captured in the array of determinate relationships and composed interactions - including the connections between microphones and loudspeakers, their placement in the room, how the software itself works, and what role the performers take on as they handle the gear. At the same time, the potential to express this identity lies in the variety of random stimuli coming from the hall during the performance, and in other, often random, particulars of the available electroacoustic equipment. My understanding and appreciation of the results is less grounded in aesthetic evaluation, and more in the sense of a convergence or coming together of the room, the people involved, and the whole equipment (hardware and software).
I don't mean that the aesthetics of the sounding results is of little significance to me. Yet the focus of the experience is this convergence, this coming together of components all too often deemed independent. The network of interactions I devise is usually open enough to the surrounding environment to change in time, wander and develop; at the same time, it is closed enough onto itself to preserve its identity, retaining its structure notwithstanding the random events in the ambience. It gives something to the ambience, and it gets something from the ambience, in a truly structural coupling. Action and perception: two faces of the same coin, tossed in a determinate environment where some random events happen. This mutual exchange is what we seek; it's what we hopefully obtain during the performance, the richness. The goal is to provide an experience where everything is connected to every other thing, in sound. Nothing, in a given space, is foreign to sound and hearing. The ear knows that nothing is disconnected, that nothing is neutral to what it hears.
FP: Turning to technology: how relevant for you is the computer programming environment - its flexibility, its peculiarities? And what role does it play?
AdS: Well, first it is important to stress that "technology" here is more than the computer and the software involved. As should be clear, the array of transducers is critical, be they loudspeakers, membrane microphones, piezos, or other sensors (I am fond of the accelerometers I was allowed to work with two years ago in Berlin, for Untitled 2008 - Soundinstallation in two or more dismantled or abandoned rooms). Not to forget the mixer console, which I tend to consider a performing device. Essentially, all the analog gear included is really crucial, so one should possibly consider it part of one's own design. And I am not necessarily referring to the quality of high-end professional equipment: even lousy speakers and cheap microphones can do a good job, if used in sensible and informed ways. What is important is your awareness of the role and function you assign to them as components in a larger infrastructure.
Now, as far as software is concerned, I use real-time DSP programming environments that allow me to develop a variety of automated functions. "Automated" is not the same as "predetermined"; it means "able to extract information from the signal and to turn those data into variable control signals". I make a distinction between what is usually named "audio signal processing" and what I often refer to as "control signal processing". Sometimes my computer patches are more voluminous and complicated in their control-signal-processing subpatches than in the audio-signal-processing ones. Sound processing and transformations can be kept rather simple and still yield quite interesting results, if driven and articulated by properly shaped, "adaptive" control signals generated in real time.
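(As an illustration of that distinction - not code from any of the works discussed - here is a small Python sketch: the audio processing is a trivial variable gain, while the "control signal processing" is an envelope follower that extracts a slowly varying control from the audio itself. The attack/release times and the inverse mapping are assumptions.)

```python
# Illustrative sketch: a control signal extracted from the audio signal.
# Assumes `audio` is a numpy float array at sample rate `sr`.
import numpy as np

def envelope_follower(audio, sr, attack_ms=10.0, release_ms=200.0):
    """One-pole envelope follower: audio signal in, control signal out."""
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(audio))
    prev = 0.0
    for i, x in enumerate(np.abs(audio)):
        a = a_att if x > prev else a_rel   # fast rise, slow fall
        prev = a * prev + (1.0 - a) * x
        env[i] = prev
    return env

def adaptive_gain(audio, sr):
    """Simple audio processing articulated by the extracted control:
    the louder the input, the more the output is attenuated."""
    ctl = np.clip(envelope_follower(audio, sr), 0.0, 1.0)
    return audio * (1.0 - ctl)
```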
In this regard, I find that Kyma is a truly remarkable computer workstation, which I have been using for 15 years now (fifteen!). It is extremely powerful, and the programming environment is efficient both for rapid prototyping and for deeper programming technicalities. I also work with PD. In my "scores" (instruction booklets?), I usually document all the necessary signal processing in a machine-independent notation, partly graphic, partly verbal, eventually referring the reader to well-known digital signal processing technicalities. That allows other people to re-create the algorithms using the programming languages and computer systems they prefer. Based on such documentation, works of mine have been performed by colleagues who work with software I tend to avoid (like Max/MSP). From what I can hear, the results are rather consistent with my own performances.
FP: It seems that you take a modular approach to preparing your code, avoiding redundancy and creating meaningful connections among extracted signal features, or their psychoacoustic equivalents.
AdS: I spend quite some time designing feature-extraction algorithms. I have an 'arsenal' of them. Also important is the array of mapping functions from extracted data to control signals, to be applied in ways consistent with their perceptual reality. I change or refine these software modules, tuning them depending on context - not only the compositional or musical context, but also the physical context. Suppose we have the computer track, during a performance, the most resonant frequency in the total room sound (via microphones). Suppose we use these data to attenuate that very frequency in the computer output signal routed to the speakers, thus compensating between input magnitude and output. One aim of that could be to avoid or limit strong feedback peaks. Now, if you do that in a 10 m × 10 m room, the code you come up with may not work in much smaller rooms. In the installation Stanze Private (ecosystemic sound construction), the rooms I work with are glass bottles and vessels, with volumes of a few cubic centimeters. In that case, the trackers necessary to establish the inverse amplitude relationship I was describing have to be tuned to work in a very peculiar way; their reaction time must be much shorter, simply because of the physical dimensions and reflective properties of the room surfaces. The general criterion is the same (an inverse input/output relationship), but it doesn't work regardless of dimension. The timing of the 'followers' or 'trackers' must be properly studied. Therefore, in general, at each new project I may recycle tools I have already developed, but depending on many factors, specific extensions, implementations or refinements are also necessary.
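(A schematic Python version of the tracker and attenuator just described - an illustration under stated assumptions, not the actual code: an FFT finds the strongest frequency in the room signal, and a per-bin gain curve dips around it. The window length n_fft stands in for the tracker's reaction time, the very parameter that, as noted above, must shrink drastically when the 'room' is a glass vessel rather than a hall.)

```python
# Schematic tracker/attenuator: attenuate the room's most resonant frequency.
# Assumes `room_signal` is a numpy float array at least n_fft samples long.
import numpy as np

def strongest_frequency(room_signal, sr, n_fft=4096):
    """Most resonant frequency (Hz) in the last n_fft samples.
    Larger n_fft = slower, more stable tracking; smaller = faster."""
    windowed = room_signal[-n_fft:] * np.hanning(n_fft)
    spectrum = np.abs(np.fft.rfft(windowed))
    return float(np.argmax(spectrum)) * sr / n_fft

def notch_gains(sr, target_hz, n_fft=4096, width_hz=50.0):
    """Per-bin gain curve for the output spectrum: near 0 at the tracked
    frequency, rising back to 1 away from it (inverse i/o relationship)."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    return 1.0 - np.exp(-((freqs - target_hz) / width_hz) ** 2)
```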
FP: Let's go back to the performance paradigm: it is clear that your music - or your work more generally - lives on symbiotic relationships between the performance space, the sound events produced in it, and the people who establish a connection with that space. That implies a social dimension, which is essential to make the experience 'sensible and REAL'. Nowadays, we live in a historical timeframe where the experience of the real is often replaced by a paradigm of the simulacrum, an extreme, sometimes violent virtualization of life, where real, physical space is depicted almost as a social problem. People prefer to interact through virtual social networks more often than in the 'real world'. This being so, I wonder: in a society where experience is becoming more and more virtual (literally reduced to purely binary information), what will become of your works, which need a real venue and environment in order to exist?
AdS: I am myself interested in research dealing with how we perceive space and all that surrounds us in space. As I see it, that is done in two ways: either by observing what happens in the very moment of lived experience, here-and-now; or via simulation tasks and technologies, pinning down as many aspects of perception as possible among those that play a role (biological, biocybernetic, ecological). The latter approach, pursued within the tidy walls of research labs, leads to virtual reality. Thanks to it, it is possible to expand our knowledge of what is and what isn't relevant to the process of human perception. Now, in my view, it is important not to mistake the data we gather from the scientific approach for unique and unambiguous representations of reality. They only add a bit of rationalistic analysis of what it means to be living beings. It may be great to be able to synthesize virtual spaces and use this technology to 'travel' to inexistent spaces, or in any case to spaces other than the one where the living body is. But, again, that's only good for gaining a bit of knowledge about the organism's behavior. Once we have that knowledge, we must turn to real life and see how it can be a relevant part of lived experience. A 'fake' life, a virtual dimension, may be fine for entertainment purposes - with entertainment mass-media, we always shift from perceiving the content of human communications to perceiving the medium presumed to communicate that content. That is only interesting if you strive to appropriate the medium, or even to design the medium. Otherwise it is of little interest, and it even verges on the totalitarian when sold as the predominant or exclusive manner of human communication.
My opinion on these matters is highly conflicted and critical: while rational knowledge is acceptable and desirable, it must not prevent people from 'feeling' and experiencing reality. That's all the more true when speaking of artistic endeavours. The risk of setting up for ourselves an utterly synthetic world in the name of a kind of bodiless notion of aesthetics leads in fact to the opposite: the body is 'anaesthetized', not empowered (as some people claim instead). The triumph of re-presentation kills all presence. Let me say it: 'too much aesthetics anaesthetizes'.
FP: What would happen to your works if one day there were no longer any possibility of performing them in a socially shared space? Where could they migrate, and how could they reconfigure themselves?
AdS: If one day there were no more transducers (I mean microphones, loudspeakers, the tympanic membrane of the human ear, maybe even the skin…) acting as interfaces between air pressure waves and nervous-electrical signals, my work and the work of a lot of other people would stop existing; it would cease. So be it! It has happened many times in history. The music of the British virginalists, a few centuries ago, disappeared with the extinction of their very instrument (the virginal, which existed in several variants across Europe). Then, just as happens today with Renaissance music, at some point so-called 'philologically informed' interpretative approaches would be proposed, and these older technologies would be revived and built again.
FP: A last question. What is the Utopia of your work?
AdS: …mmmhh… hard to say. Well, actually there is one thing! For quite some time I have been living with this fixed idea in my mind - just a concept for the moment, as I don't possess enough competence to make it real. I envision a sound-generating device capable of producing, besides sound, the electricity needed to sustain itself as a sound-generating device. A kind of 'aural living being' which, through a closed circle, would use its own vibrations, or the air vibrations it causes, to supply the power necessary for its own functioning. This 'aural being' probably wouldn't be musically very interesting; it would be a kind of ecologically self-sufficient or self-sustaining device. I sketched a little drawing of this concept a few years ago, in Berlin, as my signature in a friend's guestbook, as a gift. Your question makes me think that that's a kind of Utopia underlying my work. Yes, the draft lies in Folkmar's guestbook, somewhere in Berlin! And it's not a draft of a musical Utopia…
Ben Burtt, who helped preserve San Anselmo history in films, has been named a Silver Award winner. Burtt documented the local floods of 1982 and 2005 in “The Great Flood of 1982” and “12/31.” He says he “collected footage from 15 or 16 different people and added that to my own. The film shows people being rescued, basements being flooded and so on.”
Burtt, who says he’s always had “a big interest in history,” first became interested in San Anselmo’s while working on the first “Star Wars” film with George Lucas. He became fascinated with pictures he found in the local library that showed some of the changes the town had gone through.
Burtt also made a lengthy audio recording of the town’s historical walking tour, complete with sounds of old San Anselmo. That recording, a celebration of the town’s 2007 centennial, is available for download from the museum’s website.
Before Skywalker Sound there was Lucasfilm's Sprocket Systems, which opened in 1979 at 321 San Anselmo Avenue in San Anselmo. Film editors worked upstairs on "Raiders of the Lost Ark" and "The Empire Strikes Back" while sound editors downstairs worked on "Alien" and "E.T." Kentfield resident Pat Walsh, discovered while shopping at Seawood Photo, provided the voice for the loveable alien, with the help of actress Debra Winger. Sprocket's parking lot was also notable: It's where Harrison Ford practiced snapping a bullwhip for his role in "Raiders." Sprocket's work on "The Return of the Jedi" came to a halt when the Flood of 1982 ruined equipment. The division moved to Lucasfilm's Kerner complex in March of that year.
'The Prisoner of Zenda' - Playing at the Rafael Film Center on June 13th, 2010
Craig Barron, Oscar®-winning visual effects supervisor and Academy governor, and Ben Burtt, Oscar-winning sound designer, will introduce a rare screening of the 1937 adventure-romance 'The Prisoner of Zenda'. Burtt will demonstrate how the sound effects from the film's famous swordfights were achieved.