
Tuesday, November 19, 2013

Gary Rydstrom - Unexpected Sounds

Here's my guest post "Gary Rydstrom’s Talk from VIEW", originally published on Designing Sound


Last year, Skywalker Sound's Gary Rydstrom joined the roster at VIEW Conference – Italy's leading computer graphics symposium – for the first time, following Randy Thom in 2011. His feature presentation was called UNEXPECTED SOUNDS. On 19 October 2012, inside the Cavour Hall at the "Torino Incontra" conference center, professionals and sound lovers from different countries gathered to be part of this unique moment.

Now we have the opportunity to present, here at Designing Sound, Gary's polished session at VIEW. The recording, which I made directly from the console to my Zoom H4, was made possible thanks to Maria Elena Gutierrez, director of the conference. In this one-hour podcast, Gary talks about the art of sound design and illustrates how sound is used – and how it can be better used – to tell stories. You'll hear many examples of raw sound effects recordings, designed sounds, and mixes from Pixar films and from live-action films such as "T2," "Jurassic Park" and "War Horse".

Enjoy… and thanks, Gary, for sharing your invaluable experience!

Matteo



A special thanks to Shaun Farley and Peter Albrechtsen

Friday, September 20, 2013

Yuri Esposito @ 70th Venice Film Festival

by Matteo Milani - U.S.O. Project, 2013

Yuri Esposito is a film I recently supervised (as sound designer and ambient music designer) that had its world premiere at the 70th Venice Film Festival, with the support of Biennale College – Cinema. "Yuri Esposito" is a man whose slowness pervades every action in his life and comes to constitute its essence, but his perennial lethargy is challenged by a surprise paternity. The film is directed by Alessio Fava - I previously worked with him on the ironic science-fiction short film "Project Genesis", set in a future in which machines give life to human beings (you can watch it here).

 

Press quotes: 


"The slow Yuri is a descendant of such minimalist screen clowns as Buster Keaton and Pierre Etaix; and Fava’s assured directorial sense touches on the signal difference between Hollywood movies and art films: pace. The slow Yuri is an art film in the bustle of a multiplex world". - Richard and Mary Corliss @ TIME.com

"The Italian film, “Yuri Esposito” — really stands out as a commercial entity. At yesterday’s panel, Stephanie Zacharek of the Village Voice praised it, and Richard Corliss of Time Magazine said it was one of the four or five best films in the entire festival. I like it very much as well (...) It’s funny, sad, a character study, a fable . . . and a very good movie". - Mick LaSalle @ SFGate.com

"That quite fragile balance between comedy and drama lead to a true sentimental and somehow bittersweet film. The linear narration, despite the initial idea, doesn’t boast of creativity but it is worked nicely and fulfills all the initial expectations for a charming film". - 24 fois la vérité par seconde24 

[read more reviews and articles - via Biennale College]

Listen to my original tracks composed for "Yuri Esposito":

Sunday, May 26, 2013

Out now: Kyma Ambiences - vol.1 [USO003]

 
Kyma Ambiences [USO003] is the third sound effects bundle created by Matteo Milani (U.S.O. Project). The generation of these "Artificial Reality Ambiences" took place entirely in Symbolic Sound's Kyma - during the development of projectgenesismovie.com - by filtering white and pink noise in the time and spectral domains and by convolving these sources with custom FM, additive, formant and granular synthesis.

All of the composition's sound material is drawn solely from these processed stochastic elements: coloured noise is a raw material already full of life, and it can be sculpted into a variety of temporal forms, movements and textures. The interaction with Kyma was that of a composer exploring a device's potential for sound transformation as if it were a musical instrument. The goal was to obtain an organic, acoustic quality from a restricted sound source, in order to evoke real spatial characteristics in each invented sound. The sketch below illustrates the general recipe.
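For readers who want to experiment, here is a minimal Python sketch of the same general recipe - color some broadband noise, then convolve it with a short synthetic grain. This is purely an illustration written with numpy/scipy, not Kyma code, and the filter settings and FM parameters are arbitrary choices, not the ones used for the library:

    # Illustrative only: color white noise, then convolve it with an FM grain.
    import numpy as np
    from scipy.signal import butter, lfilter, fftconvolve

    sr = 96000                                    # the library's native rate
    noise = np.random.default_rng(0).standard_normal(sr * 10)   # 10 s of white noise

    # Time-domain filtering: a band-pass "colors" the noise bed.
    b, a = butter(4, [80 / (sr / 2), 4000 / (sr / 2)], btype="band")
    colored = lfilter(b, a, noise)

    # A short FM-synthesized grain used as a convolution kernel.
    t = np.arange(int(0.5 * sr)) / sr
    grain = np.sin(2 * np.pi * 220 * t + 3.0 * np.sin(2 * np.pi * 55 * t))
    grain *= np.exp(-8.0 * t)                     # decaying envelope

    # Convolution imprints the grain's spectrum and resonances onto the noise.
    ambience = fftconvolve(colored, grain, mode="same")
    ambience /= np.abs(ambience).max()            # normalize to avoid clipping

Swapping the FM grain for additive, formant or granular material changes the character of the resulting bed, in the same way the text describes.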

The sound effects collection is published @ 96kHz (native), plus a budget version @ 48kHz (resampled). “Kyma Ambiences vol.1” is not only available in these two packages, but also as dual-layer separated “Elements”, suitable for recombinant stratification, varispeed and spatial positioning in the surround field (for a total of 112 files @ 96kHz).

Here is what you get in “Kyma Ambiences vol.1”:
  • Stereo Interleaved Files (56 items, duration 120s each)
  • Comma-separated values file (.csv)
  • Excel spreadsheet (.xls)
  • License Agreement (.pdf)
  • Artwork (.png)

Three flavours:     

48 kHz (small): $ 49 - via PayPal  

Audio Format: Broadcast Wave Files (.wav)
Bit Depth: 24-bit  Size: 1.97 GB
Download size is 1.50 GB (compressed .rar archive)

96 kHz (medium): $ 79 - via PayPal 

Audio Format: Broadcast Wave Files (.wav)
Bit Depth: 24-bit  Size: 3.93 GB
Download size is 2.78 GB (compressed .rar archive)

Elements (large): $ 99 - via PayPal    

Audio Format: Broadcast Wave Files (.wav)
Sample Rate: 96 kHz
Bit Depth: 24-bit  Size: 7.86 GB
Download size is 4.99 GB (compressed .rar archive)


Monday, May 13, 2013

An interview with Mel Wesson

by Matteo Milani - U.S.O. Project, 2013

Mel Wesson received a multi-platinum award for his contributions to The Verve's 'Urban Hymns' album and the anthemic single 'Bitter Sweet Symphony'. It was this album Hans Zimmer was listening to in the New Year of 2000 when he spotted Mel's credit and invited him to work on the score for Mission Impossible 2.
Since that time Mel has created his own niche within the movie score genre as 'Ambient Music Designer'. This area of atmospheric sound has woven its way through many of Hans' scores, including Ridley Scott's 'Hannibal' and 'Black Hawk Down', Christopher Nolan's 'Batman' trilogy and, most recently, Ron Howard's 'Rush', among others.

[photo courtesy of Mel Wesson]

Matteo Milani: Mel, how did you get involved in working on motion pictures?

Mel Wesson: Aside from a few piano lessons as a child I had no real formal music training; I learnt most about my approach to music at art college... I learnt about keeping an open mind, freedom of expression, things that have stayed with me all my life. I spent my youth playing with bands, touring and recording. I started getting offers of session work, one thing led to another... I'd known Hans Zimmer since my late teens and he got me involved in a few projects with his mentor Stanley Myers, as well as some of his own early musical projects. Hans and I drifted apart for a few years when he moved to LA and I got more involved in working with recording artists, but eventually Hans called out of the blue and asked me to get involved in the score of 'Mission Impossible 2'. That was the start of a new chapter for me.


MM: Would you like to describe your collaborations with composers James Newton Howard and Hans Zimmer?

MW: They're both very different. Hans prefers everything to be in a constant state of flux, whereas James has a more structured approach. I seem to have an approach that works with them both; I'm pretty flexible, but I do like to make progress through a picture, so often I'll leap ahead of wherever they are in the movie and then feed ideas back at them. James often likes to arrive at a cue and find something in place - it might be a soundscape consisting of many cues, or perhaps a rhythmic idea, maybe even a map. Hans tends to assimilate my ideas in his own way; a lot of the time things come together at the mix, as opposed to the more traditional point of composition. That works for him, as he never considers a cue finished until it's in the theatre!


MM: Please explain your role as 'Ambient Music Designer' when working with the composer and other members of the sound editorial.

MW: Well, I really should know the answer to that one but I'm still working on it! The AMD role came about through a few conversations with Hans on 'Hannibal'. People read things into that title, but it's really just a phrase we cooked up to give me a credit on that movie, and it stuck! The important thing is I just didn't want to go down the orchestral route (despite what people may think, I DO write conventional music occasionally!), and Hans gave me the opportunity to experiment with sound in a way that crosses the boundaries of music and sound design. A lot of my work is to do with atmospheres - creating a presence, emotions, sometimes through rhythm too. I create bespoke sounds, but that's only a part of what I do. I use those sounds to work with picture; that's the real challenge here. The word 'Ambient' can cover a lot of ground, in the same way the word 'Orchestral' covers a range of options... Back to the question: occasionally a composer will have something specific in mind, but a lot of the time I'm left to my own devices and we see what occurs... I enjoy the freedom; it would be rather pointless for everybody if I didn't play a creative role.


MM: How do you deal with the everlasting collision between sound effects and music?

MW: It's all noise... some noises work together, some don't. I try not to distinguish too much between violins and helicopters; everything has its place... On Ron Howard's 'Rush', for example, there are the most amazing-sounding race cars... they're the sound of the movie, the heart and soul of the story, yet they'll carve through any music... which is fine by me, I'd far sooner listen to them than an orchestra! Most recently I've been playing with the band 'Node', with producers Flood and Ed Buller, plus electronic artist Dave Bessell. What we do is all about sound, and it's all live too - no overdubs, no mix process. The music we're creating crosses a lot of boundaries; there really is no conflict between sound and music in that environment, and no one would draw a line between what we do and music. For me it's the ideal band to play with, and I'm very excited about our album.

[photo courtesy of Mel Wesson]

MM: The sounds you created for 'Inception: the App' - are they totally synthesised, or did you reprocess actual recordings?

MW: Nearly all of my work on Inception - the score, and therefore the App - was based on reprocessed sounds, mostly from one sample session, but you'd struggle to recognise the source of the sounds in about 90% of cases. It was a session of natural instruments resonating through a piano soundboard, so we had an amazing amount of harmonics to work with, plus the room itself. Then I took my ideas back to Air Lyndhurst, replayed them into the hall and re-recorded them again... we got some amazing material out of that session. There was some live recording in the app too... like rain on my studio window, which became part of the environment. There's been a Dark Knight App since that one too - the same team, but based around a more interactive way of playing with the score in the real world.


MM: For 'Inception: the Soundscape', you’re credited as composer. It’s your first art installation work. Do you have any additional anecdotes to share?

MW: That was played on the walk between Grauman's Chinese Theatre in Hollywood (the US Inception premiere) and the after-show venue, where we played a live concert. It was a constructed walkway a few hundred metres long, and a really interesting project. I used a lot of ambience from the film, plus some sounds I got from sound designer Richard King - things like the train coming down the centre of the road, waves on the beach, etc. It was a lot of fun, and I'd love to do more... but really more long-term, exhibition-based... and yes, I'm open to offers! Venice Biennale, anyone?


MM: Tell us about the software and hardware production tools in your arsenal...

MW: My studio's based around Logic Pro, for now at least... I love working in Logic, but it's way overdue an update, so I'm looking at alternatives. So... within the computer world I have a few favourite toys. MetaSynth has served me well in the past; Reaktor is probably still my favourite plugin - it's just so forward-thinking, flexible, original and, most importantly, it sounds good... that's everything you want from a plugin. I use a lot of sound-manipulation devices, things from iZotope, Audio Damage, etc. I like the Waldorf plugins too - the Wave 3 is very good - but the most exciting thing I've seen in a long time is the PPG Wave Generator. It's an iPad app; again, it sounds great and it's innovative. I sometimes process sounds through my modular or the Synthi, but I don't use a lot of outboard FX, although I've started using a Kemper Profiling Amp. Obviously that's designed with guitars in mind, but there are no rules. I use a lot of vintage synths too, partly because nothing sounds as good and partly because the interface makes you think differently. I'm fortunate to have a large analog modular system that's part Moog 3C, part PPG 300; other go-to synths include a Synthi A and a PPG Wave 2. It's not just about the sound, though; it's how these devices make you think. I just bought a guitar too. I don't really play, but again I come up with things I'd never think of on a keyboard or computer.

[photo courtesy of Mel Wesson]


MM: What are your favourite titles among those you've worked on, and what kinds of sounds did you design for them?

MW: Well, all my sounds are organised in folders, so there's 'BatFlaps' for the Batman trilogy, 'Rage' for the Joker, 'Ice Brass' for Inception, 'Gotham Metal', 'BanePane'... it's a long list. Of course we had 'MetaPiggies' on Hannibal!


MM: Can you reveal the 'making of' a very special sound effect or sound sequence for 'Green Lantern'?

MW: Ah, Green Lantern... we had a vast library of sounds, and, unusually for me, all of my work was created with synthesisers. There were the Green Energy sounds for the good guys, and the sound of the Yellow Light, which was the sound of fear... We did record the extraordinary voice of Grant Gershon, and I processed his voice to create various sounds and textures. We had so much fun on that movie, although it was pretty much universally slated, which was a shame, as the team was great, as was the experience.


MM: How do you deliver your sound elements to the mixing stage?

MW: I master everything in quad, but really all I do is give options; the real work in surround takes place on the dub stage. They have far more to deal with in terms of effects and dialog beyond the music, and that team are experts at bringing it all together. My work is all part of the score; it's a common misunderstanding that I deliver stand-alone sounds, but for me the sounds are only a part of the process. Once I have my palette I start to compose with those sounds, so the work is delivered as mixes and stems, the same as any other cue would be. That said, Chris Nolan is very keen to have my material as wide as possible, as what we called 'Melements' - he likes to swap things around on the dub and experiment. For me it's always an exciting time, and I love the way the process is constantly evolving.


MM: What projects are you currently working on?

MW: Well... I'm a rarity in this industry in that I actively dislike having more than one project ongoing at the same time... in an ideal world, at least! This year got off to a crazy start. I've been working with a couple of old friends, Trevor Morris on 'Olympus Has Fallen' and Ramin Djawadi on 'Pacific Rim'; these have been more electronic arrangements and ideas as opposed to ambient work. I've also worked on 'The Secret Life of Walter Mitty' with Teddy Shapiro, plus a few bits for Henry Jackman on 'Captain Phillips' and Chris Bacon's score for 'Bates Motel'. The important thing at times like that is that everybody respects everyone else's projects and space... it's not like I have a team around me - I can't and won't delegate, which makes things difficult at times. A nice Margaux helps, though...

[melwesson.com]
[Mel Wesson - IMDb]

Sunday, February 17, 2013

Project Genesis, a sound design story

by Matteo Milani - U.S.O. Project, 2013


What if the history books have it wrong? What if the tool is the master of its maker? Did Mac create Man? Project Genesis, a short film by Alessio Fava about a world populated only by old Apple computers, has arrived! Cult of Mac presented the international premiere of the short film.


Project Genesis // shortfilm sub ita from project genesis on Vimeo.

To create a world populated by computers, it was essential to define the role of sound while the script was being written, in support of the story. Making the actors - old Apple Macintosh Classics and Lisas, each one with its own details - credible and alive was a thrilling creative job. During the first meetings with director Alessio Fava, we looked for the "voice" of each character - with a West Coast American accent, of course. For example, thanks to my friend Ann Kroeber @ soundmountain.com, we chose actor Michael Navarra for the role of the CEO "ACME I", not just for his timbre, but especially for his outstanding acting skills. Since all the computers are equipped with only an eye on their front screen - animated by the team of artists who supervised the visual effects - we needed incisive voices with a lot of inflection, but somehow also enigmatic. A graphic equalizer that moves in synchronization with the voice increases the intensity of the characters' expressions: in this case the production method was identical to that of an animated film, where the dialog is recorded before the "lip-sync" animation process.

By contrast, I treated the original dialog without digital tools, instead using the technique of "worldizing": broadcasting each actor's lines over FM to old radio receivers, simulating the sound as if it came from the chassis of the computer - more precisely, from the speaker located inside the Mac.

The goal has always been - according to Alessio - to get results with a vintage, analog "flavor", anything but futuristic. To invent the sound of the propulsion system that allows these intelligent machines to move around the location of the short film, I used an electric razor as the main source: its continuous vibration resonates in the cavity of a metal lid, picked up by a contact microphone to obtain a more defined and organic timbre without interference from the surrounding environment (an homage to Ben Burtt, who used the same approach on "Star Wars: Episode I - The Phantom Menace" to create the sound of the hovering battle tanks). The result is a "concert" of different signals, modulating at different pitches and placed in different spatial positions. Kyma was also used to generate ambiences and backgrounds throughout the film. To make the presence of each character more vibrant - each one animated with the stop-motion technique - and to further emphasize every action on the screen, I added not only individual mechanical noises to suggest the chronological age of each computer, but also noises recorded from the activity of a failing hard drive. Working alongside composers Giovanni Dettori and Lorenzo Dal Ri, we achieved an excellent balance of dialogue, sound effects and music in an immersive sound continuum, in both 5.1 surround and stereo formats.


Project Genesis // Creating voices from project genesis on Vimeo.

[projectgenesismovie.com]
[facebook.com/followgenesis]
[twitter.com/followgenesis]

Saturday, October 13, 2012

Randy Thom @ VIEW Conference 2011


On Friday, Oct. 28, 2011 VIEW Conference hosted a Master Class with Randy Thom, Director of Sound Design at Skywalker Sound. He is a firm believer that the sooner the sound designer is involved in the pre-production, the better the story can be told. Randy illustrated how sound can shape a film, talking about how doors can be opened to sound. He also shared clips from movies where this kind of early collaboration has happened. Here's an excerpt of his talk during the workshop.

Sound as a full collaborator to make better films 

Alan Splet, Walter Murch and Ben Burtt were the three people, living within about 20 miles of each other near San Francisco, who really brought a new revolution into American film sound during the 70s. I was lucky enough to work with all three of them and "steal" some of their best ideas. One of the first things that you learn as a sound designer is not to think too literally about sound, so one aspect of training your ear is to interpret sound in emotional terms. Subjectivity in filmmaking is a playground for sound: the audience understands - not consciously, but intuitively - that what they are seeing and hearing is being filtered through a character's or the filmmaker's point of view in a subjective way. Very often, working on a sequence for a film, what you want to do is think about how you want the sound to make people feel; you analyze what it is about a sound that makes you feel a certain way, and you go looking for sounds or raw materials that have those qualities.




Apocalypse Now

If there was ever a film where sound and image were treated more or less equally and allowed to affect each other, it is certainly Apocalypse Now. The first sound that you hear in the film - before any music or any dialogue - is a very odd, electronically synthesized helicopter sound: the Ghost Helicopter. Captain Willard is hearing his memory of the helicopter. What you're listening to is this guy's brain. He's remembering things, he's hallucinating, he's dreaming, he's drunk and under the influence of drugs; he's listening to his brain operate. The opening sequence is the launching point for the whole story: immediately the audience is put in a frame of mind that anything can happen, that this is going to be a very strange ride. As he stands at the window looking outside, he hears a little fly buzzing: it took me a week to record that fly (laughs). At first he hears - and we hear - the sound of Saigon outside (car horns, Vespas, police whistles). Those sounds morph into the sound of the jungle: each one of those individual city sounds turns into a specific jungle sound. Physically, for the whole sequence, he's still in his hotel room, but in his mind he moves back into the jungle.


Once Upon a Time in the West

Sergio Leone decided early on that they would record all the music for this movie before they started shooting, and they used the music during the shoot to help the actors and, essentially, to inform how the film was going to be shot. They were struggling with how to make music and sound work together before shooting the sequence. Ennio Morricone, the composer, happened to go to a musique concrète concert - a genre involving real-world sounds rather than traditional musical instruments - by a guy who played a ladder, banging and scraping on it. Then he called Leone and said: "There should be no conventional music at all in the beginning of the film; instead you should perhaps shoot around the sound effects." Leone went to shoot the sequence thinking about how the sounds of this little train station were going to work in the storytelling.

I think that’s a tragedy that very few studios these days will have the guts to allow a filmmaker to do a sequence like that. They say: “People are going to be bored, we have to fill up with uptempo music through the all thing”. Some budget filmmaking these days is “fear based”, it’s not an attempt to do something new, interesting and unusual, to open people imagination. It’s an attempt to avoid boring people, which is never a good motivation in art.

One of the things these two scenes have in common is a very strong sense of point of view. Camera angles are very important to sound, believe it or not. The kind of shot where an actor is looking at nothing in particular is another open door for creative, subjective sound, because the audience knows intuitively that they're going inside the character's head. It's an open door for sound designers to put in almost any kind of sound that we want. Having some ambiguity or mystery in the visual image makes it easier for me to do something useful with the sound. Extreme close-ups get across the idea of subjectivity. A long-duration shot opens the door for sound, too. A character closing their eyes is also an opportunity for sound to do something interesting (imagining, remembering...). The most difficult kind of shot is a brightly-lit medium shot, because you're not focusing on anything in particular and there's no mystery there; there's nothing that invites the ear to help figure out what's going on.

Another element the two scenes have in common is sparse dialogue. I'm certainly not against dialogue in film - dialogue will always have a role. But dialogue and sound design generally don't go well together, because there's something about the human voice that the human ear wants to attend to.
If someone is talking - no matter how hard a director asks me to push the sound effects during that sequence - they will distract you from the dialogue, which the audience is trying to hear. The way to solve that problem is to design the sequence so that there are moments for the dialogue and moments for the sound effects. A compromise has to be made; as a filmmaker you can't try to fire all your bullets at the same time - it's not going to work. One category of sound tends to dominate at a time: dialogue, music or sound effects. Another bad tendency in contemporary filmmaking is to try to set things up so that all three dominate simultaneously; it never works. Lazy filmmakers will just call the composer: "Make some very strange music telling the audience that this is a very strange place." As a sound designer, you try to work with variations the way a music composer does, using sound in a purely musical way - tempo, harmony and rhythm - to evoke emotion. Think about what elements in a set could generate sounds useful for the storytelling: this will be more powerful in the end. It's not decoration; it's a very organic way of telling the audience "this is a very strange place".



Sound-friendly scripts

Most writers are obsessed with words, and they tend to think words should dominate every sequence, with wall-to-wall dialogue.
Filmmakers simply don't think about how to use sound in that way before they start shooting a sequence. Think about what the characters hear. Think about how the things they hear affect them and how a character changes over time. I often find that people who come from a visual/lighting background - as David Lynch did - have very interesting sound ideas. He demands that you be creative all the time.

Another thing I tell directors is: during rehearsal of a live-action scene, play with your actors to find things in the space that can make sounds that will be useful to the story.
As a sound designer, try to imagine ways that sound could play an interesting but organic, truthful part in the storytelling of the sequence. Try to think of other powerful sounds that the audience doesn't expect. Part of your job as a sound person - I think - is to help the director make the best film possible. If you have interesting ideas - and you should have them - about ways of shooting the film that would allow you to do something you couldn't do otherwise, of course you should talk about them.


Sound for Animation 

Thanks to big aesthetic jumps in animation, more contemporary animation directors want their movies to sound like live-action movies. For "How to Train Your Dragon" I came up, very early on, with some speculative vocalizations for the dragons to help the animators animate to those elements. I tried to use real-world animal sounds - tiger growls, elephants, whales, goats, camels, dogs - to cover a wide range of emotions, allowing sound to influence the animation. The challenge is how to make the transition from one to another, which takes a lot of work and experimentation with pitch-changing techniques.




Sound Mixing 

Mixing is about choosing the right, or most powerful, sound at any given moment. In a moment when you need to hear the dialogue, you try to artfully lower or eliminate the other sounds that are competing with it. Space can make sounds useful to the story: there are a lot of other tricks, like moving the sound effects and the music into the side loudspeakers and having the voices mostly come out of the center speaker, which makes the lines of dialogue a little easier to understand. Sound is more powerful if it comes from a place we don't expect. You need to think of sound in terms of spectrum and frequencies, and tailor those for a given moment.



Monday, August 27, 2012

An interview with Hamilton Sterling

by Matteo Milani - U.S.O. Project, 2012 

I am happy to continue our series of interviews with Hamilton Sterling - sound designer, supervising sound editor, effects editor, and mixer who has worked on The Dark Knight, War of the Worlds, and Master and Commander: The Far Side of the World, as well as many independent films. He recently cut sound effects on MIB3 and The Host, and worked on Terrence Malick's To the Wonder and The Tree of Life. Hamilton was the supervising sound editor, sound designer, and re-recording mixer on Tomorrow You're Gone by David Jacobson, and has edited sound on the films of P.T. Anderson, Christopher Guest, Andrew Dominik, and Steven Spielberg. To date he has worked on over seventy-nine feature films.


Recording a Demolition Derby (photo: Michael Dressel)


Matteo Milani: Thanks for your time Hamilton! First of all, could you tell me a bit about your education and musical background?

Hamilton Sterling: I come from a musical family. My mother and aunt sang four-, five-, and six-part harmony by ear. They performed as the Silhouettes on KDKA radio in Pittsburgh, Pennsylvania, in the 1930s. Both were very encouraging of my early musical interests. My aunt, who worked at the local newspaper, often brought home fascinating music: György Ligeti, George Crumb, music from all over the world. I heard the soundtrack to Fellini Satyricon before I ever saw the film. My mother was a jazz fan and, when I was a boy, would take me to listen to local groups.

In high school I played electric bass in the jazz band and upright bass in orchestra. The musician’s union put together an all-star big band for high school students in which I played, and I also performed in the Allstate Orchestra, becoming principal bassist my senior year. I played my first jazz gig when I was sixteen years old, and entered Arizona State University on a four-year music scholarship, graduating with a BA in jazz and classical performance. 

From a social context, I also benefited from the cold war belief that the arts held importance, and that America had to out-compete the former Soviet Union in creative endeavors. Music education in elementary school and high school was excellent and generously funded. Unfortunately, for the arts and young artists, those days are gone. 


MM: Does your experience as a musician help you in your career in film sound? 

HS: I think a sense of rhythm, melody, and harmony is essential to being able to make interesting sound for film. Recent studies show that human beings are wired for music. In studying jazz, the adage of "learn the music theory, then forget it" allows the right brain freedom to improvise from knowledge. I've come to feel that cutting and layering sounds is a slow-motion version of improvisation.


MM: How did you enter the movie industry? 

HS: Alongside my musical activity, I became obsessed with films. Stanley Kubrick’s 2001: A Space Odyssey became the impulse for much of my early creative life. I saw the film for the first time when I was ten years old. It brought to me an interest in modern classical music, archeology, cosmology, astronomy, AI, cinematography, and special effects. It also appealed to my budding existentialism. That any one object of art could do so much was, to a young mind, amazing. 

It may seem hard to believe, but public television at that time played films by Antonioni, Godard, Fellini, Losey, and Bergman, and the art house cinemas were still going strong. I began making short films in the summer vacations between school from the money I made playing music. When I came to Los Angeles in the early 1980s, sound seemed a natural fit. I began by editing documentaries, Warren Miller ski films, industrial films, until I got my first sound effects editing work on Alan Rudolph’s Trouble in Mind. 


MM: Choosing the right sound(s) to picture. An art form? 

HS: Because sound editing began as a technical blue-collar job, many people had the impression that what we did was nothing special - "A monkey could do it" was the often-heard refrain. Here in America, we never developed a militant artistic union, an aesthetic of labor, if you like. Of course, because many of the corporate films being made today are empty of artistic merit, to say nothing of merited thought, it's no wonder that art and labor are still dirty words. Choosing the right sound is an artistic, and surprisingly moral, endeavour. Thinking back on choosing sounds for films I did twenty-five years ago: if you had a scene in a rough cityscape, one might choose a black voice yelling in the street, the effect of that implying threat (at least to a certain part of the population). And that is a choice that can subtly further social injustice. I'm not saying that an artist should proscribe their work, but one has to be very conscious of one's choices, because in mass entertainments those choices may reach millions of people, and they have consequences.


MM: You've frequently shared your nominations with Richard King, Christopher Flick and Michael W. Mitchell. How do you guys collaborate on projects? 

HS: When I worked for Richard King, I did sound design and sound effects editing. Occasionally, I did sound effects recording. The frog rain in Magnolia was re-recorded against the reflections of a cliff in the Angeles National Forest. We set up two speakers facing away from the microphones, and slowly rotated the speakers toward mic, giving the playback the effect of distant frogs falling toward us. Eric Potter and I also put a speaker in a pickup truck to record a playback of previously sampled frog elements. Eric put the truck in neutral and steered into the quiet, distant valley. The sound was incredibly bizarre, and I ruined the heads and tails of multiple takes by laughing. It’s always fun to get out into the world to record, and Richard loves to record. 

Chris Flick did all the programming and cutting of the foley, and we conferred with him on elements that effects needed help with. As to the sound effects editing, Michael Mitchell and I did a lot of heavy lifting. Richard cuts sound effects as well, and in this day and age, I admire him for it. 


MM: Would you like to explain your role when working with the other members of the sound editorial?  

HS: When I’m not supervising, I try to communicate very specifically with the supervisor and my fellow editors: what are the stage delivery requirements, the predub breakdowns, or whether there will even be predubs, who is doing what in terms of special design. Hopefully, the supervisor has been able to spot the film with the director. If there is little in the way of specific information, I get a sense of the aesthetics of the director from what’s on the screen. If what you see is a pack of cliches, you know what to expect. If there are few cliches, you have reason for hope. At the end of the day, your work is only as good as the director’s vision, and their courage to take risks. 


On the Cary Grant stage at Sony on Morning (photo: Leland Orser)


MM: What are the musical tools you use to boost your sound designing workflow? 

HS: A number of years ago I purchased a Kyma sound design system from Symbolic Sound. It is very useful for producing unique sounds. It's always inspiring. I also use my old Kurzweil K2000 as a MIDI controller, a Haken Continuum fingerboard, and a PC2R. In the studio I use Millennia mic preamps. For field recording I use Schoeps mics and Sound Devices mixers, as well as different contact, ribbon, and dynamic mics to gather my sounds. My bass is fitted with a MIDI pickup that I also use, through an Axon, to trigger my samples.


MM: Sound processing: can you give us a description of your studio gear? 

HS: I use Pro Tools HD3, Genelec 5.1 speakers with a MultiMax monitor, and many plug-ins. For picture I project HD through a Decklink card to a nine-foot screen. Aside from Kyma, Haken Continuum Fingerboard, Kurzweil, and Axon, I have on occasion used Beat Detective in Pro Tools to place rhythmic structure onto multiple effects, and Melodyne to re-engineer animal vocals. I just started using Battery as a sampler. Altiverb, Pitch ‘N Time, Lowender, and GRM tools are staples. 


MM: Can you reveal to us a "making of" of a very special sound effect(s) or a sound sequence? 

HS: I'm very proud of the storm sequence in Master and Commander: The Far Side of the World. Making the scene dynamic, given the similarity of frequencies in the water and the wind, was an interesting problem. I began by cataloguing the ocean sounds into frequency ranges from low to high, and I catalogued the wind in a similar way. At the time, Warner Bros. had terrible editorial rooms that had not been updated since the 1950s. Not only was there no surround system, but there was no wall treatment of any kind. Anyone who has had to edit endless water and ocean effects knows that in a box-like room with hard surfaces, audio hallucinations in the white noise of the waves can produce boat engines that aren't there and other weird effects. So I brought sound blankets from home and hammered them into the walls. I scavenged an extra pair of speakers from an adjoining room and built myself a primitive surround system. I cut the sequence in 5.1 tracks that would mirror the mixing console, and internally panned and level-set everything. Because the water visual effects were actual layered shots of waves, they changed constantly. But the first time I cut the sequence, I really liked the rhythms I had found. As the sequence changed, I was determined to keep this poetic kind of rhythm, so instead of just cutting up the tracks to conform them, I took the time to find new rhythms and create the sequence anew each time it changed. I was very tough-minded in this approach. Fortunately, I was given the time to do this - something that is unique to this day. I then cut the hard effects in the same 5.1 style and processed the siren's call of the wind in the rigging through Kyma. When the edit went to the stage in this form, the mixers worked on it for a couple of days, trying to tame it. Very kindly, they told me they had decided to put it all back the way I had originally laid it out, because it had a life to it that their smoothing of the rough edges took away. That's the way the sequence was released.


MM: What do you regard as your most important credits in your career thus far? 

HS: The Tree of Life and The Assassination of Jesse James by the Coward Robert Ford are my two favorite films. They’re closest to the feelings I felt as a young man introduced to the great European cinema: thoughtful, unsentimental, mysterious. They capture something of eternity. 


MM: How do you get involved with the movie “The Tree of Life”? What kind of approach did you take on foley? 

HS: I have known the sound supervisor, Craig Berkey, for many years. Erik Aadahl had hired me on Transformers: Revenge of the Fallen, and mentioned me to Craig. We fell back together again, which is the way of the film business. Andy Malcolm of Footsteps Studios walked the foley. Realism and proper perspective (using multiple mics) are very important. Because Mr. Malick often uses non-synchronous production takes, foley is used to ground the characters within the scene. It becomes another part of his palette. When it is absent, that too becomes a color.


MM: Can you describe how some of those sounds were accomplished? 

HS: Andy originally used his house as a foley studio. It’s out in the middle of the Canadian wilderness – forty-five minutes from Toronto. The stairs really creak, he never sweeps his kitchen. It’s all real. (Just kidding about the kitchen.) Now he has a fabulous studio a stone’s throw from his house, so you get the best of both. 


MM: How was the communication with the director and the rest of the team? 

HS: I was only on stage for a few hours during our first temp mix. But I was struck by Mr. Malick's graciousness. I truly admire his work, and have since I first saw Days of Heaven as a youth.


MM: To mix "in the box" in sound editorial before the final dub: what are its pros and cons? 

HS: Unfortunately, schedules now seem to only allow the sound effects to be mixed in-the-box. Even on the most well-financed films, mixing in-the-box is now common. On Knight and Day (James Mangold), we pre-mixed all of the sound effects and ambiences into 5.1 groups and kept them virtual. We had a couple of weeks to adjust these pre-mixes on the mixing stage, as the console on the Cary Grant stage at Sony could mirror Pro Tools. But the number of tracks and the constant changes necessitated continually re-mixing added elements in-the-box. I recently did some work on MIB3, and at least for the temp mix, the effects were pre-mixed in-the-box, then taken to the stage for final adjustment. Keeping a somewhat traditional separation of elements is helpful for conforming, as well as for giving the sound effects mixer creative input. If you set your editing room up correctly, it can work out quite well. Of course, sound editors are not being paid as mixers, so there are ways in which this situation is financially disadvantageous. But it is creatively rewarding. For independent films, the future is here. With the track counts of the new Pro Tools HDX cards, traditional mixing facilities will have an increasingly difficult time staying afloat. Unfortunately, many fine mixers will as well.


MM: A networked environment: can you describe the importance of a client/server architecture in sound post production for a feature film? 

HS: It’s great to have. On Knight and Day we were editing at another facility before we moved to Sony for the mix. Sony has a nice server system for moving your work to and from colleagues as well as mixing stages. Structuring the folder architecture on the server is extremely important. Knowing exactly what elements have come from the mixing stage, what needs to be updated, what needs to be mixed, may seem simple, but with multiple versions, competing creative interests, and huge amounts of data, organization and terminology is paramount. 


MM: Sound effects editing for multichannel surround: what are your spatialization techniques?

HS: I edit for the 5.1 pre-mixes. When I do have to spread an effect, I'll use a little delay, reverb, or the Waves PS22. I've recently begun using the free Schoeps DMS plug-in for three-channel field recording that decodes to 5.0 surround. I love the Schoeps plug-in. Now I record all my sounds on three channels and decode to 5.0. Even simple sounds, like a light switch, pick up the character of the room. It's a fascinating way of creating a feeling, using these simple multi-channel sounds. If the simple sound creates an interesting space, I'll work backward and, using Altiverb, try to get the rest of the sounds of the scene into that same environment. Of course, if it doesn't work, you still have the mono or MS stereo recording.
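An aside for readers unfamiliar with the MS fallback Sterling mentions: underneath all of this sits the standard two-channel mid/side decode, a simple sum-and-difference matrix. Here is a minimal Python sketch of that textbook decode (the Schoeps DMS plug-in performs the full double-MS-to-5.0 matrixing, which is not reproduced here; the 0.5 scaling is one common convention, not Sterling's specific setting):

    # Standard mid/side (MS) decode: a sum/difference matrix.
    import numpy as np

    def ms_decode(mid: np.ndarray, side: np.ndarray) -> np.ndarray:
        """Return stereo L/R from a mid (forward-facing) and side (figure-8) pair."""
        left = 0.5 * (mid + side)    # the side mic's positive lobe points left
        right = 0.5 * (mid - side)
        return np.stack([left, right])

Narrowing or widening the stereo image is then just a matter of scaling the side signal before the decode.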


MM: You made a film in the late ‘90s. Did you do your own sound? 

HS: I supervised it and cut, but I had a number of wonderful friends from the sound editing and mixing worlds who helped me to complete it. Because of current events, I decided to prepare a new version of the film, Faith of Our Fathers, for Blu-ray and DVD, and to re-construct the sound for 5.1. As I began to re-assemble all the elements, I realized that those of us who started in the business in the mid-1980s lived through a radical transition in our work. At the time, magnetic film was all there was. But by the time I shot Faith of Our Fathers in 1991, the digital world was just beginning. Faith was originally shot on 16mm film with a 1:1.85 ground glass for theatrical blow-up to 35mm. All the dailies were 16mm. I had obtained from a friend one of the first Sony D10 Pro DAT recorders in the country. It was strictly grey-market. I thought I could mix and record the production sound to one channel and put a 60-cycle pilot tone on the other, so that when transferring to magnetic stock (both 16mm and later 35mm) the transfer machine ("dubber", we called it) would stay in sync. So, over a long period of time, friends and I sunk and coded the 16mm dailies. I cut the picture, and when it was time to prepare the track for 35mm mixing in 1995, I used a 16mm-to-35mm synchronizer to phase the new 35mm dialogue to the 16mm worktrack. The dialogue was cut on mag. The backgrounds were a combination of 24-track two-inch and DA-88s (the bane of all mixers at the time). And most of the sound effects were cut on an early version of Pro Tools and then transferred to DA-88. When I decided to do my 5.1 re-mix, I had 35mm mag to transfer to Pro Tools, DA-88s (I still have one of those boat anchors, and it works!), and DATs with the original production sound and music. When I think that the process involved 16mm mag, 35mm mag, Pro Tools, 24-track, DAT, and DA-88s, it becomes evident that the transition from analog to digital was quite messy. The other shocking thing is that I was able to finance my film on a sound editor's salary (which is the reason it took so long to complete).


MM: What are your thoughts on the boundaries between music and sound design?

HS: Having recently released Migration, I can tell you that creating a 5.1 programmatic musical soundscape is a wonderful artistic process. Combining a purely aural narrative with the abstraction of music and processed effects blurs the creative experience. I don't mean just adding a sound effect to a music track; I mean creating the entire living thing as one artistic statement. There is a universe of possibility in the soundscape form, and because of my musical life, the addition of ambiences and effects to create emotion is a fulfillment of who I am. Other examples of soundscapes that I like can be found in the plays of Romeo Castellucci's Socìetas Raffaello Sanzio and the Wooster Group, both of which I find inspiring. As to the boundaries between music and sound design in film, I would say they have been nearly erased. I just completed the film Tomorrow You're Gone, with Michelle Monaghan and Stephen Dorff. Kyma was used extensively in creating a very musical soundscape in which to set the traditional effects.


Recording 5.1 ambience for Tomorrow You're Gone (photo: Cris Lombardi)


MM: About your album releases: do you think that detectable technical processes are an integral aspect of the composition’s overall aesthetic? Is it important in this composition that the listener is aware of the technical processes? 

HS: The album Rise and Fall is made mostly of live loop improvisations featuring fretless bass, acoustic bass guitar, and MIDI-following synths and effects. It grew out of musical feelings, a very simple MIDI-synth/live stereo mix chain, and the need not to multi-track or manipulate the live performance. In that respect, technical qualities like MIDI delay, or tracking anomalies from the Axon pitch controller, were of secondary importance to the spontaneous capture of the music. No meta-statement should be implied from these technically primitive recordings, other than that they were all done with as little post-production as possible. As for Migration, the soundscape album I created with Grammy-winning musician Jimmy Haslip, that is a piece that was conceived and composed for surround. Its feelings and scope are purposely cinematic.


MM: What's the most important tip you've ever received regarding sound? 

HS: Often on big films, the amount of audio ideas brought to the process can be overwhelming. Sitting on the mixing stage listening to Steven Spielberg, Michael Kahn, and John Williams discuss how best to tell the story of a scene on War of the Worlds, I was struck by their equanimity toward music and sound effects. For them, it is all about story. Their years together have created a language around this idea. What tells the story in a particular moment, and what elements do you have available to do that? An agreement on what the story is allowed them to know what to emphasize on the track. Other filmmakers see story differently, or dissect story as myth and power, and therefore take a very different approach. I love the sound of Jean-Luc Godard’s films because it is a featured element in the argument. Film Socialisme begins with a line-up tone that moves from speaker to speaker around a 5.1 mix. It introduces his dialectic between sound and picture within the contemporary structure of multi-track films. It’s brilliant and very funny. 


MM: What is the most important topic you would want to talk about to make post sound better? 

HS: Forcing USA corporations, either by massive tax penalties or heavy import tariffs, to hire the workers in their own country. Too many of our sound jobs are being outsourced. Germany has good unions, pays its workers well, and has an export rate second to none. The old saw that in-country labor produces products that can’t compete is obviously not true: Germany has a 7% trade surplus. The sad truth is that USA corporate profit is at an all time high, CEO salaries are at an all time high, and too many people are unemployed. Corporate contempt for basic decency is the primary problem at this moment in history. It will eventually change, one way or another. 


MM: Do you have any advice for anyone who is interested in a career in the sound dept? 

HS: With the technology of audio, music, and picture in an ever-increasing cascade toward the infinitely complex, having the time to learn the programs, plug-ins, hardware, software, picture formats, and optimal workflow processes is itself becoming a full-time job. Having the mental space to discover why you want to do it, and what doing it even means, to you and to society, is something that young people should consider. This work used to be the wild west. Most of it, for the time being, now sits inside the corporate world. That is not a world that should be perpetuated. So then it becomes about making art, with no potentially viable means of making a living. Last quarter Migration streamed 2500 times and made 3 cents. So is this a risk you are willing to take? Do you see the world differently, and have something to say about it? If so - and now I'll paraphrase Stanley Kubrick's advice to young filmmakers - "Get a camera, as soon as you can, and start making films"... or music, or soundscapes, or installation art. If you are meant to work on a handful of great films in your career, somehow, with luck, you will make it happen.


MM: Silence is mentioned a lot when discussing sound. What was your approach in its usage? 

HS: As John Cage pointed out, silence is never truly silent. But one must be silent in order to listen. 


Thursday, April 19, 2012

Unseen Noises [USO002]

24-bit/48kHz Royalty-free Sound Design Collection



Unseen Noises is the second sound effects bundle created by sound designers and electronic composers Matteo Milani and Federico Placidi (aka Unidentified Sound Object - U.S.O. Project).

Electromagnetic information is invisible and omnipresent. In every city, especially the big ones, an infinite number of electromagnetic waves is hidden: we can't hear them, but they're everywhere! We explored this invisible noise pollution by transducing electromagnetic fields into audio signals with a telephone pickup: it acts like a radio antenna for hum and weird electromagnetic noises.

We plugged it into a SONOSAX SX-R4 recorder and moved it close to electrical devices - like a stethoscope - to locate interesting and curious sounds: LCD televisions, internet antennas, lighting systems, transformers, game consoles, tablets, electronic security systems, scanners, computer monitors and hard drives, printers, navigation systems, fax machines...

All of the audio files have been embedded with metadata for detailed and accurate searches in your asset management software.

Like the previous library, this collection has not been peak-normalized but loudness-normalized. Through loudness normalization, the gain of each signal is adjusted so that the signal's loudness level equals -23 LUFS. Loudness normalization solves the problem of balancing the loudness levels of multiple sound files. A minimal sketch of the process follows below.
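For anyone who wants to reproduce this kind of normalization on their own recordings, here is a minimal Python sketch using the third-party pyloudnorm package - an assumption chosen for illustration, not the tool used to master this library, and the file name is hypothetical:

    # Measure integrated loudness (ITU-R BS.1770) and move it to -23 LUFS.
    import soundfile as sf
    import pyloudnorm as pyln

    data, rate = sf.read("some_recording.wav")     # hypothetical input file

    meter = pyln.Meter(rate)                       # BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)     # measured loudness in LUFS

    # The applied gain in dB is simply (target - measured).
    normalized = pyln.normalize.loudness(data, loudness, -23.0)

Unlike peak normalization, which only looks at the highest sample value, this equalizes perceived loudness across files, which is why a library treated this way sits at a consistent level from one effect to the next.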

Here is what you get in "Unseen Noises":
  • Stereo Files (40 items)
  • Tab-delimited file (.txt)
  • Excel spreadsheet (.xls)
  • License Agreement (.pdf)
  • Artwork (.jpg)

Audio Format: Broadcast Wave Files (.wav)
Sample Rate: 48 kHz
Bit Depth: 24-bit
Size: 2.43 GB
Download size is 1.9 GB (compressed .rar archive)

Price: $ 30 - via PayPal

We make every effort to email the archive to you within 48 hours after you place your order. Please contact us if you have not received your product within this time (*).

(*) Please note that products sold directly from this website are not downloaded automatically when you complete the purchase. We email you the download link upon receipt of the purchase notification from PayPal. If you do not find it, please check your SPAM or JUNK folder, as it might have been blocked by your email system.

Friday, March 09, 2012

GRM pt.2: the birth of a concept

Daniel Teruggi wrote an interesting article about the Syter system at INA-GRM in the booklet for Archives GRM (CD 4). The whole CD comprises works created with Syter.

[via sonoloco.com]

"To mark and celebrate the thirty years of the INA (Institut National de l'Audiovisuel), the GRM (Groupe des Recheches Musicales) has chosen to bring together an exceptional set of five compact discs, illustrating some of its most remarkable musical archives. These original works, which are often previously unpublished or have been dispersed throughout a host of other publications, are important because of the originality and audacity they testify to in the second half of the 20° century. Some listeners will be pleased to see that there are a number of illustrious composers here who, in the 1950s, frequented the studio of Pierre Schaeffer, and others will discover numerous musicians whose enthusiasm enabled this innovative musical genre to last throughout the following decades."
Emmanuel Hoog, Chairman and CEO of INA

Daniel Teruggi - The time of real time

From the very beginning, music, whether vocal or instrumental, improvised or written, and up until the invention of recording processes, was listened to at the precise moment it was produced. The twentieth century changed all that: first of all with the appearance of recording media, which made it possible to listen to sound in a place and at a time other than those at which it was originally produced; then with the widespread use of electricity, which made it possible to invent new instruments and new ways of imagining and making music. Concrete music, electronic music, electroacoustic music, acousmatic music and contemporary electronic musics are all testimony to the same ambition: using electrical, electronic and computer-based technologies to invent the sounds of music. The invention of sounds is the invention of new forms of music, of new ways of looking at music, and is the logical consequence of the new opportunities that technology continues to provide us with. Musicians began to use computer systems long ago (1958) in order to synthesise sounds and to develop computer programmes that would enable them to combine sounds into musical works. Progressively it became possible to record these sounds, to process them or to hybridise them with synthetic sounds.
Musical computer technology did not develop fast and was dependent on the way processors and data storage systems evolved; in 1958, a large computer in a research centre was necessary in order to produce a simple synthesised melody, which it was not even possible to record in the memory. These initial technical difficulties brought about the appearance of two concepts which could be described in a historical perspective, but which are often presented as if the were antagonistic: deferred time and real time. Deferred time described the way that the first computer systems were unable to produce an instantaneous result.
Between the moment at which the intention was expressed and the moment when its result became an audible phenomenon, there was always a certain lapse of time.
The user programmed a sound using software, defining its various parameters and timbre, and then the computer calculated the sound and, depending on the complexity of the calculation, produced the result after a given interval. The listening time was deferred with respect to intention time.
It was logical that the next technological objective was real time, a concept that describes the possibility of hearing a sound at precisely the same time as the intention to make it is expressed.
Moving over to real time required changes to the command tools. Deferred time was the result of a programming system whereby the user defined, using written language, the result he wished to obtain; moving over to real time made it possible to define the intentions instantaneously and to modify the result as it was being listened to.
Now, most sound production and generation systems work in real time, enabling the user, thanks to various interaction tools (keyboards, mice, screens), to control and modify the sounds created and heard. Nevertheless, in the field of musical creation, and for a relatively long time, this technological evolution was opposed on methodological grounds. Real time obliges the operator to act and react, depending on the result, in a way that is similar to that of the instrumentalist. For many composers, deferred time, because it separated the moment of conception from the moment of listening, created a distance that was necessary for reflection, a situation similar to that of instrumental composition, between the writing of a piece on paper and its being played.
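
To make the distinction concrete, here is a minimal Python sketch (a present-day illustration, not a model of any historical GRM software): the deferred-time function must compute an entire sound before anything can be heard, while the real-time oscillator produces small blocks on demand, so a parameter change takes effect on the very next block.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def render_deferred(freq=440.0, dur=2.0):
    """Deferred time: the whole sound is computed into a buffer first;
    listening can only happen once the calculation is finished."""
    t = np.arange(int(SR * dur)) / SR
    return 0.5 * np.sin(2 * np.pi * freq * t)

class RealTimeOscillator:
    """Real time: audio is produced block by block, on demand."""
    def __init__(self, freq=440.0):
        self.freq = freq    # can be modified while the sound is heard
        self.phase = 0.0    # accumulated phase keeps blocks continuous

    def next_block(self, frames=256):
        inc = 2 * np.pi * self.freq / SR            # per-sample phase step
        phases = self.phase + inc * np.arange(frames)
        self.phase = (self.phase + inc * frames) % (2 * np.pi)
        return 0.5 * np.sin(phases)

osc = RealTimeOscillator(440.0)
blocks = [osc.next_block()]
osc.freq = 660.0                 # a "gesture" arriving during playback
blocks.append(osc.next_block())  # the change is heard on the next block
```

The phase accumulator is what keeps successive blocks continuous: without it, changing the frequency between blocks would produce an audible click.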

[Daniel Teruggi @ Sonic Acts 2010 - courtesy Rosa Menkman]

  
Deferred time and real time in the GRM

At the beginning of the 1970s, the Groupe de Recherches Musicales began to experiment with computer technologies. At the time, the Group already had 20 years of experience, a major repertoire of musical works, a tradition of profound reflection on music and perception, as well as innovative technological research. Little by little, therefore, work was undertaken to look at the possibilities that this new domain, which was already strong in the United States, could offer in France, where it was comparatively little known. Two projects were to follow one another, and then coexist, between 1975 and 1993: the first, from 1975 to 1987, concerned the development of deferred-time sound processing tools, the "Studio 123 software programmes", developments that are dealt with in CD 3 of the GRM Archives set. The second project, the Syter system, was a major technological development for musical computing, so original that its impact can still be felt in the development of processing tools today.
These two projects were vitally important in opening electroacoustic music up to composers from the instrumental world. Their main successes were to bring electroacoustic music out of the studio, making computer technology accessible without the need for programming skills, and making processing reliable and reproducible. The range of things it was possible to do to sound was considerably widened, using original and unheard-of sound processing techniques. These two projects marked a unique period for the GRM: the studios opened up to welcome composers with other ideas, concepts and points of view; the dialogue was rich and fruitful; and the understanding and analysis of the music being written there were enhanced.

The Syter project 

With the advent of computer technology, the first idea was to imagine a parametric control of machines using digital tools. For example, synthesisers, while remaining analogue in the way the sound is generated, could be controlled by digital systems that would provide greater precision in terms of frequency than traditional rotary knobs. It was thus that the first Syter was born, an acronym for Synthèse en temps réel (real-time synthesis), whose objective was to build a digital synthesis system based on a set of oscillators, controlled in real time by specialised gesture-based access or by external signals.
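
As a rough illustration of that first idea (and emphatically not a reconstruction of Syter itself), the Python sketch below models a small bank of digitally controlled sine oscillators: each partial's frequency is set as an exact number, the kind of numerical precision that analogue rotary knobs cannot guarantee.

```python
import numpy as np

SR = 44100  # sample rate in Hz

class OscillatorBank:
    """A bank of sine oscillators whose frequencies and amplitudes are
    plain numbers, updatable between blocks by any control source."""
    def __init__(self, freqs, amps):
        self.freqs = np.asarray(freqs, dtype=float)  # exact values in Hz
        self.amps = np.asarray(amps, dtype=float)
        self.phases = np.zeros(len(self.freqs))      # one phase per partial

    def next_block(self, frames=256):
        incs = 2 * np.pi * self.freqs / SR           # per-sample phase steps
        n = np.arange(frames)
        # advance each oscillator from its stored phase, then sum the partials
        block = (self.amps[:, None] *
                 np.sin(self.phases[:, None] + incs[:, None] * n)).sum(axis=0)
        self.phases = (self.phases + incs * frames) % (2 * np.pi)
        return block / max(len(self.freqs), 1)

# e.g. set a partial to exactly 437.0 Hz, a value a knob cannot guarantee
bank = OscillatorBank([110.0, 220.0, 437.0], [1.0, 0.5, 0.25])
audio = bank.next_block(512)
```

Because every parameter is numeric, the bank can be driven between blocks by a keyboard, a fader or an external signal, which is the essence of parametric digital control.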
The first prototype that was built was relatively simple, since its only function was to control, in real time, the movements of a sound source between a number of loudspeakers. This prototype, with its delicate control system and laborious programming, was used in concert on 16 March 1977 for the premiere of Cristal by François Bayle.
The designer of this tool and of its subsequent versions was Jean-François Allouis, an engineer who arrived at the GRM in 1974, who was fascinated by the potential of computer technology as applied to sound and music, and who had an uncanny inventiveness when it came to finding solutions to new problems and designing original systems. For this first concert, the acronym Syter became Système temps réel (real-time system), and it was the starting point for a five-year period of development during which Jean-François Allouis contributed to the setting up of the first GRM computer, oversaw the implementation of the deferred-time processing system, built the Syter real-time sound processor and the input and output converters, developed programming software for the processor, built one of the first interactive real-time parameter control systems and programmed the first processing tools. In conjunction with computer scientist Jean-Yves Bernier and computer technician Richard Bulski, he needed to build and rebuild the system several times before the first full system was complete, in 1984. The system underwent very few modifications and additions subsequent to that. Eight systems were built and sold up until 1988. The software continued to evolve until 1989, in particular thanks to the impetus of Hugues Vinet, who designed a digital mixing tool, providing the system with all the functions of a studio. Two systems were in operation at the GRM until 1995, and around 100 works were composed in part or in whole using the system.


Wednesday, January 04, 2012

Ben Burtt about the genesis of the TIE fighter sounds

[an excerpt from The Sounds of Star Wars - © Chronicle Books]

The genesis of the TIE fighter sounds is another story, one that began with Ben Burtt's search for the laser gun effect.



Originally, George Lucas had seen a British documentary on PBS about the Battle of Stalingrad in World War II and had noted that the firing sound of some strange Nazi rockets was quite weird and interesting. Lucas mentioned that it might make a great sound for the laser gun, and Burtt managed to find a copy of the documentary. He then set about finding sources that could emulate that sound. Luckily, at Twentieth Century Fox Studios, Don Hall let Burtt go through the Fox sound library, where he found recordings of some elephants that had been made for the Errol Flynn movie The Roots of Heaven [1958]. In that film, elephants stampeded and bellowed, with an almost shrieking sound (the same sounds were used for the dinosaurs in Journey to the Center of the Earth).
After making a copy of that recording, Burtt realized that when he slowed it down and stretched it out, he ended up with a sound similar to the rocket one in the PBS documentary.


[The Lost World - 1960]

But it wasn't quite right, so Burtt took the sound of the elephant and mixed it with pass-bys he'd recorded of cars during a rainstorm as they sped through puddles in front of a motel where he was staying (a pass-by is when a vehicle comes toward the viewer, passes by, and then speeds away).
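
The two operations described here, a tape-style slowdown (which lowers the pitch and lengthens the sound at once) and layering one recording over another, are easy to sketch digitally. The Python below is only an illustration of the general technique; the noise buffers are placeholders standing in for the actual elephant and pass-by recordings.

```python
import numpy as np

def varispeed(x, factor):
    """Tape-style slowdown: factor > 1 stretches the sound and lowers
    its pitch together, as when a recording is played back slower."""
    idx = np.arange(0, len(x) - 1, 1.0 / factor)     # fractional read positions
    return np.interp(idx, np.arange(len(x)), x)      # linear interpolation

def mix(a, b, gain_a=0.7, gain_b=0.7):
    """Sum two mono sources, padding the shorter one with silence."""
    n = max(len(a), len(b))
    out = np.zeros(n)
    out[:len(a)] += gain_a * a
    out[:len(b)] += gain_b * b
    return out

# placeholder buffers standing in for the elephant and car recordings
elephant = np.random.randn(44100) * 0.1
passby = np.random.randn(66150) * 0.1
layered = mix(varispeed(elephant, 2.0), passby)  # slowed shriek under the swoosh
```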
"Swoosh, the car would come by, and you heard this car plowing through the water," he says. "I took that sound still thinking that I was making a laser of some kind." The key "a-ha" moment occurred during temp track auditions, as shots started coming in from ILM of the gunport sequence.
"When we did temp mixes and played it back for the crew at Park Way, I would take advantage of the fresh audience, because the editors hadn't heard anything with sound," Burtt explains.
"The gunport sequence came along with the first trial shots of actual TIEs in motion. There was pressure to just get some temporary sound in for a screening, so I grabbed a random set of sounds I liked and cut in a different one each time a TIE fighter zoomed by," continues Burtt. "One sound was the elephant shriek, the next one was a slowed-down World War II warbird, the next a processed jet or rocket."
After the screening was over, the only talk in the room was about that elephant swoosh sound: "That was the greatest sound for those ships you could have possibly picked!" "Of course, I was saying, 'Oh yeah, of course,'" Burtt recalls. "I'd really put it in because I had no other alternative, but it got great reviews, so naturally it became the sound of the TIE fighters."
"ln World War II, the super dive bombers had an artificially created siren wail created by air ducts," explains Joe Johnston, visual effects art director. "They didn't serve any purpose except to create this noise, which would terrify people. It was intended that the TIE should achieve the same effect."

Monday, October 03, 2011

Richard Beggs @ P.A.I.F.F.

From Apocalypse Now to Twixt: Sound Design with Richard Beggs

I have a master's in painting and it remains the motif of my work. I am an audio painter. #

I work thematically; I like sounds that are motific. #

I have a very idiosyncratic work style... I like to be involved w/ every aspect of the track as the picture moves forward. #

Very important to be on the set, to absorb the feelings of the whole crew & to be in tune w/ the sensibility of the Director. #

The film exists on 2 levels: the sound that you see and the sound that you don't see. #

I am in the biz of manipulating people. Sound can do that. #

Sound can convey emotional complexity without music, just with pure sound. #

When someone wants to work with me, I always ask "Why?" Because it informs what they know about me + what they like. #

On smaller pictures you're given a lot of latitude; on the other hand, limited resources make things challenging. #





courtesy of
paiff.net
twitter.com/PaloAltoIFF
twitter.com/Dolby
facebook.com/PaloAltoInternationalFilmFestival
