Tuesday, December 30, 2008

U.S.O. Project @ Spark Festival 2009

Girl Running, an excerpt from the forthcoming DVD "InharmoniCity" (Zerofeedback/Synesthesia Recordings © 2009), has been accepted for performance at the Spark Festival of Electronic Music and Arts, Minneapolis.

Spark Festival of Electronic Music and Arts / February 18th, 2009 / Cedar Cultural Center, University of Minnesota, Minneapolis, MN.


Matteo Milani and Federico Placidi (aka U.S.O. Project - Unidentified Sound Object) + Giovanni Antignano (aka Selfish) / Moving Visual Art




Monday, December 22, 2008

Winter Wishes 2008

Consequenzas is a freely hallucinated and irrational experience built on the counterpoint between a male voice-over (reading a text from T.S. Eliot's "Four Quartets") and an arabesque of sounds derived from Luciano Berio's compositions for solo instruments, known as the Sequenze.
The electroacoustic transformation of the materials and the strong alteration of the voice's prosody evoke a distant, intimate place (in space and time), suspended between myth and religion, a place where men's eyes can barely bear the light of truth.

Matteo Milani, Federico Placidi


(Italian)

Il tempo presente e il tempo passato
Son forse presenti entrambi nel tempo futuro,
E il tempo futuro è contenuto nel tempo passato
Se tutto il tempo è eternamente presente
Tutto il tempo è irredimibile.
Ciò che poteva essere è un'astrazione
Che resta una possibilità perpetua
Solo nel mondo delle ipotesi.
Ciò che poteva essere e ciò che è stato
Tendono a un solo fine, che è sempre presente.
Passi echeggiano nella memoria
Lungo il corridoio che non prendemmo
Verso la porta che non aprimmo mai
Sul giardino delle rose. Le mie parole echeggiano
Così, nella vostra mente.
Ma a che scopo
Esse smuovano la polvere su una coppa di foglie di rose
Io non lo so.
Altri echi
Vivono nel giardino. Li seguiremo?
Presto, disse l'uccello, trovateli, trovateli,
Girato l'angolo. Attraverso il primo cancello
Nel nostro primo mondo, seguiremo noi
L'inganno del tordo? Nel nostro primo mondo.
Là essi erano, dignitosi, invisibili,
Si muovevano sulle foglie morte senza calcarle,
Nel caldo autunnale, per l'aria che vibrava,
E l'uccello chiamava, rispondendo a
La musica non udita nascosta tra i cespugli,
E c'era lo sguardo non visto, perché le rose
Avevano l'aspetto di fiori che sono guardati.
Là essi erano, come ospiti nostri, accettati e accettanti.
Così ci muovemmo, noi e loro, cerimoniosamente,
Lungo il vuoto viale, fino al rondò di bosso,
A guardar giù nel laghetto prosciugato.
Secco il laghetto, secco cemento, orlato di bruno,
E il laghetto si riempì d'acqua alla luce del sole,
E adagio adagio si alzarono i fiori del loto,
Scintillò la superficie al cuore della luce,
Ed eccoli dietro di noi, riflessi nel laghetto.
Poi passò una nuvola, e il laghetto fu vuoto.
Via, disse l'uccello, perché le foglie erano piene di bambini
Che si nascondevano, tutti eccitati, sforzandosi di non ridere.
Via, via, via, disse l'uccello: il genere umano
Non può sopportare troppa realtà.
Il tempo passato e il tempo futuro
Ciò che poteva essere e ciò che è stato
Tendono a un solo fine, che è sempre presente.

_______________________________

(English)

Time present and time past
Are both perhaps present in time future,
And time future contained in time past.
If all time is eternally present
All time is unredeemable.
What might have been is an abstraction
Remaining a perpetual possibility
Only in a world of speculation.
What might have been and what has been
Point to one end, which is always present.
Footfalls echo in the memory
Down the passage which we did not take
Towards the door we never opened
Into the rose-garden. My words echo
Thus, in your mind.
But to what purpose
Disturbing the dust on a bowl of rose-leaves
I do not know.
Other echoes
Inhabit the garden. Shall we follow?
Quick, said the bird, find them, find them,
Round the corner. Through the first gate,
Into our first world, shall we follow
The deception of the thrush? Into our first world.
There they were, dignified, invisible,
Moving without pressure, over the dead leaves,
In the autumn heat, through the vibrant air,
And the bird called, in response to
The unheard music hidden in the shrubbery,
And the unseen eyebeam crossed, for the roses
Had the look of flowers that are looked at.
There they were as our guests, accepted and accepting.
So we moved, and they, in a formal pattern,
Along the empty alley, into the box circle,
To look down into the drained pool.
Dry the pool, dry concrete, brown edged,
And the pool was filled with water out of sunlight,
And the lotos rose, quietly, quietly,
The surface glittered out of heart of light,
And they were behind us, reflected in the pool.
Then a cloud passed, and the pool was empty.
Go, said the bird, for the leaves were full of children,
Hidden excitedly, containing laughter.
Go, go, go, said the bird: human kind
Cannot bear very much reality.
Time past and time future
What might have been and what has been
Point to one end, which is always present.


Tuesday, December 16, 2008

An interview with Michael Rubin, pt.3

by Matteo Milani, October 2008 

(Continued from Page 2)

MM: User interface. I read that the sound tracks were shown as parallel vertical bars, with time represented on the vertical axis, so the SoundDroid's timeline scrolled vertically. Is that correct? Was the purpose to mimic the optical track on film? 

MR: It wasn't because of the optical track, not really. It was imitating the physical orientation of things in the mixing theater (the mags were loaded vertically on dubbers), and in particular the physical Cue Sheet, which runs vertically. The cue sheet had several columns for notations of footage, fades, volume levels, and equalizations used in mixing sound tracks, where each column usually represents one track. It was a dubbing log used to alert the mixer to events coming from the various dubbers in the old days of mixing. I thought the SoundDroid was fun; no one had seen a program like that before, doing the patching right there on the touchscreen - "I want a reverb on this, I want to patch that track to a reverb unit." It was very exciting to show people those kinds of functions. 

Novices were stunned. Professionals thrilled. SoundDroid allowed a user to go to a digital library of effects, grab one, and drop it on a track running vertically on the Cue Sheet. There you could slide it around, up and down in time, or over to another track. There were handles on the edges of each sound, so it could be shortened or lengthened into endless loops as desired. By using the tools on the right side of the screen, any given sound (or entire tracks) could be augmented with professional audio controls, including EQ, pan, and an enormous variety of filters. All the tracks were slaved to a videotape with the corresponding picture and a pointing finger - the "now line" showed where you were scrolling through the project. - excerpt from Droidmaker 

SoundDroid sound editor "cuesheet" screen. Each shaded vertical stripe represents a track of sound, with time going from the top to the bottom of the screen. The amplitude envelope in each track is shown by the solid black outline. Text annotations show the incipits of spoken text fragments, among other things. 

 from Droidmaker, by Michael Rubin; photo courtesy of Lucasfilm Ltd. 
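The clip behavior described in the Droidmaker excerpt above — dropping a sound on a vertical track, sliding it in time, trimming it by its edge handles, or looping it — maps naturally onto a small data structure. The following is only an illustrative sketch with hypothetical names; it is not the SoundDroid's actual code:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """A sound placed on a vertical cue-sheet track."""
    source_len: float   # length of the library sound, in seconds
    track: int          # which vertical track it sits on
    start: float        # position on the timeline, in seconds
    head: float = 0.0   # trim from the front (the top "handle")
    tail: float = 0.0   # trim from the end (the bottom "handle")
    loop: bool = False  # repeat the trimmed region as an endless loop

    def duration(self) -> float:
        """Audible length after the handles have trimmed the source."""
        return self.source_len - self.head - self.tail

    def slide(self, dt: float) -> None:
        """Slide the clip up or down the timeline."""
        self.start += dt

    def move_to_track(self, track: int) -> None:
        """Slide the clip sideways onto another track."""
        self.track = track

c = Clip(source_len=4.0, track=2, start=10.0)
c.head = 0.5          # drag the top handle: shorten the front
c.slide(-1.0)         # nudge the clip one second earlier
print(c.track, c.start, c.duration())  # 2 9.0 3.5
```

Nothing here touches the underlying sound: the handles and position are just metadata, which is what made the editing nondestructive.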


MM: Can you talk about the SoundDroid's early “spotting” system for searching sound effects libraries? 

MR: We were taking orders for SoundDroids, but it was still not ready for shipping. The company asked: "Is there a subset of what SoundDroid did that is working and that we can sell now?" They decided that the spotting system was the part of the system that worked best. They built the SoundDroid Spotter and digitized most of the Lucasfilm sound library at that time. I don't think they ever sold it, at least not between 1985 and 1987. I demonstrated it at NAB and AES, but it was just kind of abandoned; it went nowhere. This is my recollection of the product. I've never heard about it outside the company. 

[...] The SoundDroid was too big, too general, and too expensive. There were several smaller markets that a scaled-back version of the technology would more readily be able to address. [...] The first logical result was called SoundDroid Spotter, a stand-alone sound effects spotting station that utilized the database on the SUN with basic processing tools. Noise reduction was another. Specialized synthesis was a third. It would take years before the costs of storage and processing would make a digital post-production workstation viable for movie sound. - excerpt from Droidmaker 


MM: Did you interview Ben Burtt?

MR: Yes, I know Ben very well, he's great. When I was writing the book, he spent many months of his time talking to me. At that time he was working on Episode III at Lucasfilm. After Revenge came out, he left Lucasfilm after a 30-year career with George and went to Pixar. I thought it was a very interesting thing to see, because back in the 80's he was one of the people who said: "What are these guys in the Computer Division doing all day? It looks like they're not doing anything." And here he is now, going over to Pixar. He's an unbelievable talent; what he did for the sound of Wall-E is an absolute tour de force. 


MM: Burtt, like Walter Murch before him, moved into picture editing alongside his main career in sound. 

MR: It's fun, because when Ben first got hired by Lucasfilm, it was because Murch wasn't available. Lucas needed a "young Walter Murch" for Star Wars. His job at that time was partly sound recording, but he was also a production assistant. He said he was driving Carrie Fisher to get her hair cut, or dropping off storyboards at ILM, among other things. 


MM: What's the relevance of the GUI (Graphical User Interface) in audio/video applications? What do you expect from its evolution in the long run? 

MR: Predicting the future is notoriously difficult. The best way to predict the future is to invent it (Alan Kay). Haptics will change the way we interact with machines via tactile feedback. That will be a cool way to interface with the computer. These devices are very expensive today, used in industry and medicine, which means this technology will be cheap in ten years. And I think screens will go away in favor of projections of things in space. I don't think I would really try to predict which interface it will be; I'm still working a year out. Watch Minority Report, that's good thinking about it. I'm pretty sure it will be something like that, where you move around, you pick this up and control something in the computer, somewhere else. It seems logical to me. Just recently I saw an impressive demonstration of a head-mounted device that non-invasively reads your brain signals and moves objects on a computer. It is early, but I cannot imagine that kind of technology not changing the way we work with picture and sound. 

droidmaker.blogspot.com

[Prev | 1 | 2 | 3 ]



Bibliography 

Books
Rubin, M. (2006) Droidmaker: George Lucas and the Digital Revolution - Triad Publishing Company
Roads, C. (1996) The Computer Music Tutorial - MIT Press

Websites
droidmaker.com
editorsguild.com 
en.wikipedia.org 
nytimes.com 
money.cnn.com
sonicstudio.com 

An interview with Michael Rubin, pt.2

by Matteo Milani, October 2008
(Continued from Page 1)

MM: EditDroid and SoundDroid in the post production workflow: what were their respective roles?

MR: They were designed to work together. One of the biggest selling points, in fact, was that these two devices connected. It wasn't just cutting pictures and cutting sound, you could send the edits from EditDroid to SoundDroid and then you could do your spotting, your mixing. Everything was there. Those features changed the workflow tremendously.
Avid did acquire the EditDroid technology, but didn't need it. They actually started before the EditDroid licensing and invented their own product in 1989. Their acquisition of the Lucas technology was more about being its inheritor; they wanted to feel like the continuation of that work. For a couple of interesting years a product called AvidDroid appeared, but it was only a marketing thing.
Primitive graphical and sound editors developed along separate paths until the early 1980s. The EditDroid and SoundDroid systems developed at Lucasfilm in California showed how random-access media, interactive graphics, and automated mixing can be combined into a picture-and-sound editing system.
Audiovisual editing can be quite complicated, since the two domains can be interlinked in complex ways. For example, when a video editor changes a scene, many channels of sound must be redone to fit the new sequence of images. Dialogue, effects, and musical soundtracks are all affected by changes in the visual domain. The visual information is often recorded on many tracks, each one corresponding to a particular camera angle or special effects overlay. To manage the complexity of this information, the usual approach taken is to prepare the raw material as a database of audio and visual data (Hawthorne 1985). The first step is to transfer the material to be edited to a random-access medium such as magnetic or optical disk. The author or director annotates the material with pertinent facts about a particular segment, for example, the scene, the take, and other data.
This kind of audiovisual editing is nondestructive; nothing is physically cut or deleted. Edits can be rehearsed and adjusted ad infinitum. Each edit causes the system to write a description of the edit in an 'electronic logbook'. Since the audio and visual editors share a common logbook, any decisions made in picture editing are reflected in the corresponding soundtracks. At the end of an editing session, the result is a list of entries in the logbook that can be used to reconstruct the desired sound and image combination. If the ultimate medium is, for example, a 70-mm film with 6-channel soundtrack, the logbook shows which segments of film and sound must be spliced and mixed to create the finished product.

[The Computer Music Tutorial, Curtis Roads]
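Roads' "electronic logbook" is essentially what later became known as the edit decision list: the source media are never altered; each edit appends an entry, and the finished sequence is reconstructed by reading the list back. A minimal sketch of that idea (all names hypothetical, not the Lucasfilm implementation):

```python
# Nondestructive editing: the source is never physically cut.
# Edits are recorded as (source, in-point, out-point) entries, and
# the finished program is reconstructed by replaying the list.

logbook = []  # the shared "electronic logbook" (an edit decision list)

def add_edit(source, t_in, t_out):
    """Record an edit; nothing in the source media is changed."""
    logbook.append({"source": source, "in": t_in, "out": t_out})

def reconstruct():
    """Replay the logbook to describe the finished sequence."""
    return [(e["source"], e["in"], e["out"]) for e in logbook]

add_edit("reel_3", 120.0, 128.5)   # a scene from picture reel 3
add_edit("reel_1", 42.0, 47.2)     # cut back to reel 1
# Edits can be rehearsed and adjusted ad infinitum:
# just rewrite the entry, never the film.
logbook[1]["out"] = 48.0

print(reconstruct())
# → [('reel_3', 120.0, 128.5), ('reel_1', 42.0, 48.0)]
```

Because picture and sound editors share the same logbook, a picture change is automatically visible to the sound side, which is exactly the coupling the EditDroid/SoundDroid pairing exploited.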



MM: You were one of the first employees at Sonic Solutions. What was your duty?
MR: When my boss Robert J. Doris and Mary C. Sauer started a company called Sonic Solutions, they brought me over. Counting the founders Bob, Mary and Jeffrey (Borish), I was employee number four. James Andy Moorer, the inventor of the SoundDroid, was employee five, although he had been consulting with Bob for a while by the time he joined. It seems a little odd, but when he left Lucasfilm, he came to Sonic after me.
[...] Andy Moorer joined the Sonic team and the company debuted NoNoise®, a Macintosh-based system that applies proprietary DSP algorithms that eliminate broadband background noise, as well as AC hum, HVAC buzz, camera whine and other ambient noises. NoNoise could also reduce overload distortion, acoustical click/pops, transients caused by bad splices and channel breakup from wireless mics—without affecting the original source material. - Mix Magazine
The first restoration project they did was the live album "Live At The Hollywood Bowl" by the American rock band The Doors. It was recorded on July 5, 1968, but not released until 1987. Starting in 1968, every couple of years, whenever someone had new technology, Bruce Botnick, co-engineer of the Doors' recording, would bring the tapes in to try to fix Jim Morrison's track, which was partly unusable due to a cable fault. It was an amazing thing when Sonic finally solved the seemingly impossible problem.
[...] While still in prototype form, NoNoise was used to resurrect a long-lost recording and film of the Doors playing at the Hollywood Bowl in July 1968. Because of a faulty microphone cord, much of Jim Morrison's lead vocal was nearly obliterated by loud clicks and crunching sounds. ''I sent them a digital tape and in three weeks they sent us back a digital tape and it was glorious,'' said Bruce Botnick. ''We wound up saving 12 minutes of the show that would have been unusable.'' - The New York Times
I did sound restoration for many classic records, including a couple of Grateful Dead albums (Live/Dead and Europe '72). I went through the entire albums, listening on headphones, looking at the waveform. If I heard a click, I had to manually set the gate, interpolate, and listen for any anomalies I might have created. That was my job in 1987. It took so long, and I usually had the night shift. When de-noising, I had to find some samples of pure noise, but often the tapes were cut right at the beginning of the song, so you didn't have any ambience. It was sometimes very hard to find a piece of noise signal you could use to filter the track.
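The manual workflow Rubin describes — gate on a click, interpolate across it, then listen for artifacts — can be sketched in a few lines. This is only an illustration of the principle (threshold gating plus linear interpolation); NoNoise's actual proprietary algorithms were far more sophisticated:

```python
def repair_clicks(samples, threshold):
    """Find spans where the sample-to-sample jump exceeds `threshold`
    (the "gate") and replace them by linear interpolation between the
    clean neighbors — a crude stand-in for manual click restoration."""
    out = list(samples)
    i = 1
    while i < len(out) - 1:
        if abs(out[i] - out[i - 1]) > threshold:
            # Walk forward until the signal settles near the pre-click level.
            j = i
            while j < len(out) - 1 and abs(out[j] - out[i - 1]) > threshold:
                j += 1
            # Linearly interpolate across the damaged span.
            a, b = out[i - 1], out[j]
            for k in range(i, j):
                out[k] = a + (b - a) * (k - i + 1) / (j - i + 1)
            i = j
        i += 1
    return out

clicked = [0.0, 0.1, 0.9, 0.1, 0.0]   # one loud click at index 2
print(repair_clicks(clicked, threshold=0.5))
# → [0.0, 0.1, 0.1, 0.1, 0.0]
```

The "listen for anomalies" step has no equivalent here: interpolation can itself smear a transient, which is why the job still needed human ears on every fix.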
Then I went to CMX to help them design and release their own nonlinear editing system; basically it was very much like a next-generation EditDroid. A fully digital system was coming, but it was cost-prohibitive. At that time it wasn't ready for a feature film; it couldn't cut back to the negative. There were rough years from 1989 to 1994-95 when people were really taking risks using something like an Avid.

MM: Where was the SoundDroid originally located?
MR: The SoundDroid sat in a totally acoustically neutral room called the Sound Pit on the bottom floor of the C building, a beautiful redwood building in the middle of the Lucasfilm campus, a bunch of unlabeled buildings on Kerner Blvd, San Rafael, CA. If you wound up at the building, you didn't know it was Lucasfilm. It just said "Kerner Optical Company" on the door; there were no guards, it was an unsecured campus, because no one knew it was there.
The SoundDroid is based on a SUN workstation. This is about 1985. At the time Macintosh had just come out and the Mac pioneered the startup sound with its little chime. They were making a joke and decided to make a startup sound for the SoundDroid. Partly for fun, partly because they needed to test everything out. They had to run through a test of all the loudspeakers and all the circuits on the SoundDroid.
Andy Moorer came up with a sound to test the speakers out. Today it is known as the THX logo (the Deep Note). At the time no one had ever heard this before, so in this little tiny room, perfectly silent, with speakers all around, it started swirling and collapsed and resolved into this chord, and the hair would stand up on your arms and your neck, and a little tear would form in your eyes. That's how it started every day. Whenever we gave a demonstration in the Sound Pit, we often just turned it on while people were sitting there and watched them freak out.
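The "swirl that collapses into a chord" can be imitated with a bank of oscillators whose randomly placed starting frequencies glide toward the notes of a single chord. The following is only a toy inspired by that description, not Moorer's actual THX synthesis (which used many more voices and a custom signal processor):

```python
import math
import random

SR = 8000          # sample rate, deliberately low for a quick sketch
DUR = 3.0          # seconds: glide for the first 2 s, hold the chord after
# Final chord: harmonics of a low A (55 Hz) — an assumption for illustration.
TARGETS = [55 * m for m in (1, 2, 3, 4, 6, 8)]

def deep_note_like():
    """Render a mono buffer of voices sweeping from chaos to a chord."""
    voices = [(random.uniform(200.0, 400.0), t) for t in TARGETS]
    phase = [0.0] * len(voices)
    samples = []
    for n in range(int(SR * DUR)):
        t = n / SR
        sweep = min(t / 2.0, 1.0)   # 0 → 1 over the first two seconds
        s = 0.0
        for v, (start, target) in enumerate(voices):
            f = start + (target - start) * sweep   # glide toward the chord
            phase[v] += 2 * math.pi * f / SR       # phase accumulator
            s += math.sin(phase[v])
        samples.append(s / len(voices))            # normalize the mix
    return samples

audio = deep_note_like()
print(len(audio))   # 24000 samples = 3 s at 8 kHz
```

Each voice keeps its own phase accumulator so the frequency glide stays click-free; writing `samples` to a WAV file (e.g. with the standard `wave` module) would make the sweep audible.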
[Prev | 1 | 2 | 3 | Next]

An interview with Michael Rubin, pt.1

by Matteo Milani, October 2008

Michael Rubin is an educator, author, filmmaker and entrepreneur. After graduating from Brown University with a degree in neuroscience, he began his career at Lucasfilm's Droid Works, where he was instrumental in the development of training for the EditDroid and in introducing the Hollywood market to nonlinear editing for film. From 1985 to 1994 he designed editing equipment and edited feature films and television shows in Hollywood. In 1991, he was also editor of one of the first television programs to be edited on the Avid Media Composer. Since then, Rubin has lectured internationally, from Montreux to Beijing, and has published a number of texts on editing for professionals as well as consumers. His pioneering book for professionals, Nonlinear: a field guide to digital film and video editing (1990), is now in its fourth edition and is widely used in film schools. Rubin has been a teacher to hundreds of professional film editors. Today he continues to teach, write, shoot video, and consult. I reached Mr. Rubin during his stay in Italy for VIEW Conference 2008. We talked about his successful book Droidmaker: George Lucas and the Digital Revolution (2006) and its expanded universe.



MM: Would you like to describe how the project about "Droidmaker" started?
MR: The first thing that got me to write the book was that I was just so surprised no one else had already done it. The descendants of Lucasfilm's efforts were everywhere I looked. Secondly, I saw Episode II and realized that I only had a few more years before the entire saga was complete; the time to produce a book was upon me. And thirdly, all those guys were alive; everyone in the book is still alive, and young enough to remember. You know, you wait too long and suddenly someone dies, and you lose the chance to talk to them. The stories from everybody change in a certain way as you get older.
I didn't just interview them. I debated with them. Sometimes I would get them together to discuss what happened, because if you just ask someone, "What happened here, why did you do that?" and so forth, he tells you what he remembers, which may or may not be factual. So I would talk to someone else, asking for the same story, and he would tell me a different story, and then I would ask a third person and get a third story. Then I'd go back to the first person saying: "These guys said this. Is it possible? Because I think from my research that the other guy has actually got closer, because you were out of town..." We worked through these stories, which was very unjournalistic. I wasn't just taking their accounts; I was trying to figure out how to put the puzzle together.
In many cases - not so much with the SoundDroid (also known as the ASP, Audio Signal Processor) - but very much so with Pixar, there are a lot of people involved in the story. I would come up with an answer to a scenario of what happened in the seventies and eighties, and eventually all these people would agree, but it's not always the way any one person remembered it. George Lucas asked me not to trust written reports of what happened from newspaper articles, but only direct interviews with primary sources. If you have an article, you have to go and figure out if it really happened or not. A journalist is different from a historian. For a while I felt like a journalist, but by the end of the book I felt like a historian. That was an interesting process for me.

MM: Did you join "The Droid Works" to introduce the EditDroid and SoundDroid post production tools to Hollywood?
MR: I worked between the engineers and the editors. It helped the salesmen convince people to use a "droid" if someone else was already using one. The engineers knew about editing, but they didn't always know what was going on in Hollywood. I sat in the middle of that. At the SoundDroid I sat with Stevie Wonder, the Grateful Dead, Barbra Streisand. These people came to see it and I demonstrated it to them. I had that kind of role. I was a teacher of the editors, a support for them. I learned a lot about what a computer could do. I was constantly reporting back to engineering about the user interface and what it needed to do, what was lacking, and I helped prioritize what we would fix, what was most important. Sales needed this feature to sell to a client; editors needed that one or they wouldn't be able to finish the project. If you don't finish the project you'll never sell any more systems, but, say, we're still in the middle of a project, they're trying to make a sale today, and we need the money from that sale to help pay for the engineering. So how do you prioritize? It's tricky. I wasn't the final decider, but I would help in those debates. I continued in that role for many years, alternating between training, demonstrating, product definition, and editing myself. Sometimes people would have a smaller project and needed someone who could cut their commercial, low-budget feature film, or music video. Sometimes they brought an editor and I trained him. They were usually happy when I cut it, but at that time, at 24 years old, I was new and young, with little experience. I was an apprentice to Gabriella Cristiani and other fine editors; I learned about editing very quickly from watching them. I also learned a lot about user interface design, which helped me later in other types of work. It was a great job and a magical time, the 80's in this all-new field.

MM: How did movie people perceive the experience of navigating in a nonlinear medium rather than an analog linear one in the early days of digital revolution? Were they scared about those new technologies?
MR: They were unhappy. There was usually a silence for a while. On the EditDroid I was usually editing the scene and they were seated watching me edit. Finding a shot, for me, was pressing a button and it was on the screen. It was easy to pick up shots and move them around. They couldn't believe it. The SoundDroid was more far out. Never before could they see the sound! Seeing a waveform of sound had never been done before. We could zoom into a waveform, go down to the sample, look at it, zoom out. You literally looked and just said: "That's where the amplitude kicks up and where you cut it." In both cases people couldn't hide their excitement and their fear. It was a combination of feelings. In a demonstration the products looked good; the problems were often more in the production flow. I was not really showing the drawbacks of those expensive computers; they couldn't be aware of what it was not good at, what was hard to do. There's a process to get the images or sound into the computer, with practical limitations. But just in its functioning there were amazing, wonderful things to show them. I was really so excited that they got excited, just all the fun things we could do. I truly challenged them; they hadn't seen anything work as fast. That's how people felt about it. I felt good with people who really cared about great tools.
Most people don't like changing the way they work. They've been doing something one way for their entire career. The cost is really high to suddenly work a different way. This makes them nervous. It would take almost ten more years before this type of equipment took over Hollywood. Some of them used that time to experiment with it, some of them decided to not try at that stage of their careers. I was waiting for the senior people to retire and junior people to come up.
It's hard to remember how scary new technologies can feel. We are good at computers now, but at that time no one had computers, no one had experience with computers. They routinely crashed, they were complicated, there was no help menu, no manual, no internet. People called me at night all the time, because a workstation cutting motion pictures had to work all the time. I can't tell you how often it crashed halfway through a project. Twenty years ago there was often no backup; you lost everything.
[...] more frequently than anyone would have liked, the machines would crash. Not just a system crash, which required rebooting, but a corrupted hard disk. No matter how often you saved your work, there was nothing that could save you from the death of the disk. - excerpt from Droidmaker
Sometimes this computer was very sensitive. If you did something in the wrong order, it crashed. There was no logic to it. I was very good at using the system; I could keep it from crashing, and I worked all the time. I knew how it was built and how to do things the right way.
When you teach someone, sometimes they will do it another way. When I worked with Gabriella Cristiani on Bernardo Bertolucci's The Sheltering Sky, I sat next to her for many days of work, watching her hands. You're watching her cut; she's not asking, but you're going to see what she's doing. Some days I sat watching her fingers, and if she was about to press the wrong button, I stopped her gently. I didn't want to bother her, but at the same time I didn't want to crash the system. In those early days you just had to be like a sidekick, trying to keep them from making the computer screw up. I was good at that (laugh).

MM: Did you coin the terms "non-linear editing", "random-access editing", "virtual" editing (Wikipedia)?
MR: I didn't invent the terms. A lot of companies were making this kind of equipment at that time, and the sales people would eventually use these words. Sometimes they used all the words: "This is a random-access virtual non-linear editing system, with film style." I tried to figure out what each expression meant. Random access is really about finding stuff; it means that you access stuff in a non-linear way. I want that shot and I don't have to go looking through a bunch of stuff. So I looked at the terms I'd been hearing from people and reading in magazines, and I decided that the right term for this kind of editing was non-linear. Everything else was useful in a certain way, but I decided to pick one term, use it, and try to educate everybody.
By 1990 I had written my first book for filmmakers, called "Nonlinear"; it was the right moment in time. It was still five years before nonlinear editing was widely adopted, but the book helped bring together video tape editors, who knew a lot about technology and nothing about cutting movies; film editors, who knew everything about cutting movies but nothing about video or computers; and computer users, who were great with the equipment but didn't know much about editing. It created the language for everybody to use and to help each other going into and through the nineties. So I didn't invent the term, but I probably popularized it as defining these editing tools.

MM: What's your thought about physical and chemical-mechanical approach on video/sound editing (film, tape), compared to numbers as symbolic representation of images and sound?
MR: Film editors are like tailors: they make beautiful clothes and have a feeling for the material. They can feel the scene and they know that it's good. I did a lot of darkroom photography, and I loved the smell and feel of film. Something is lost when you lose the tactile nature of film and sound. You can do more with the computer, and that's not in question. If you're a craftsman, an artist like a sculptor, and you like working with clay, you can make beautiful things out of it. No one can tell you that doing a 3D model on a computer, which is way more flexible, is better than clay. You are good at clay. Something is lost, there's no doubt about it, when you leave clay behind. But computers are great and have other attributes. I was sad to facilitate the loss of the art form of cutting film. At the same time, you can't stop the advance of technology, or pretend it wasn't going to happen. I thought the best we could do, when promoting this new technology, was to embrace what was going to be lost and work on transferring people's knowledge of how to use it, teaching young people how to do it. I believe in apprenticeship; that's important.
[1 | 2 | 3 | Next]

Sunday, December 14, 2008

David Lynch on Sound

Via FilmInFocus, here's an excerpt from the interview with David Lynch included in the book Soundscape: The School of Sound Lectures 1998-2001, edited by Larry Sider. Since 1998, the School of Sound has presented a stimulating and provocative series of masterclasses by practitioners, artists and academics on the creative use of sound with image. At the School of Sound you will not learn about hardware or software; directors, sound designers, composers, editors and theorists reveal the methods, theories and creative thinking that lie behind the most effective uses of sound and music.

SOS: A final word on the relationship between sound and image in cinema…?

DL: Sound is fifty percent of a film, at least. In some scenes it’s almost a hundred percent. It’s the thing that can add so much emotion to a film. It’s a thing that can add all the mood and create a larger world. It sets the tone and it moves things. Sound is a great “pull” into a different world. And it has to work with the picture – but without it you’ve lost half the film.

It’s so beautiful. It has to do with all the parts coming together in a correct way. And certain stories allow more to happen in terms of cinema than other stories. But with sequences paced correctly, and the sound and the picture working together, it becomes like music. It’s like a symphony where you are conducting with great musicians and everybody is working together. And the groundwork has to be set-up in a certain way because it slides into this thing where all you’ve done before is now the payoff. And, because of what’s gone before and the way it’s gone before, this payoff can be unbelievable. Everything is working together and it can transport you. It can give you a feeling that you can’t have in any other way. And it can introduce ideas that are so abstract that you’ve never thought of them or experienced them.

But it has to do with the way cinema can work, it’s really a rare event, because there’s not that many people experimenting with cinema – it’s gone down to telling a surface story. But there’s this form – people, the audience, now know the form. They know that there’s a certain amount of time spent introducing this, then there’s this, and then there’s the next part – so they feel the end coming. And so there’s not a lot of room for surprises. There are a lot of ways to make the cinema. There are stories that will change the form and those are the kinds of stories I really love. And these stories make you work with the sound. When it works, it’s a thrill, it’s a magical thing and it takes you to, you know, a higher place.

[schoolofsound.co.uk]

[filminfocus.com]

Friday, December 12, 2008

Chris Watson: Whispering in the leaves

Acclaimed British sound artist Chris Watson creates a sensory journey fusing wildlife and location recordings from the Americas. You can experience the soundtrack of the South American rainforest diffused through the tropical foliage of a botanical environment at Perth Planetarium Pyramid (13 February–8 March).

[www.perthfestival.com.au]

Saturday, December 06, 2008

Waveform Series by Sakurako Shimizu

Waveform Series is a collection of laser-cut pieces shaped like the waveforms displayed in sound editing applications, by Japanese jewelry designer Sakurako Shimizu. She used human sound sources such as a yawn, an "atchoum" (a sneeze), a giggle and a "wow", as well as the sound of a church bell. And what about the "I do" Wedding Band?

Bell (Cuff)


Giggle - detail view (Necklace)

Friday, November 21, 2008

ACM Interview: Tomlinson Holman

Winner of the 2001 Academy Award for Scientific and Technical Achievement, Mr. Tomlinson Holman is Professor of Film Sound at the USC School of Cinema-Television and a Principal Investigator in the Integrated Media Systems Center at the university.
Tom was chief electrical engineer at Advent Corporation, founded Apt Corporation, maker of the Apt/Holman preamplifier, and was at Lucasfilm for 15 years where he developed the THX Sound System and its companions the Theater Alignment Program, Home THX, and the THX Digital Mastering program.
He is founding editor of Surround Professional magazine, and author of the books Sound for Film and Television and 5.1 Surround Sound Up and Running. He is an honorary member of the Cinema Audio Society and the Motion Picture Sound Editors. He is a fellow of the Audio Engineering Society, the British Kinematograph Sound and Television Society, and the Society of Motion Picture and Television Engineers. He is a member of the Acoustical Society of America and the IEEE. He has lifetime or career achievement awards from the CAS and the Custom Electronics Design and Installation Association. Tom holds 7 U.S. patents (23 including the corresponding foreign patents), which have been licensed to over 45 companies.


Supplemental videos (.mov):

Does THX stand for Tom Holman Experiments?

Can you tell us the history of THX and how you came up with the idea?

How do you see the future of digital television and digital cinema?

How do you see the future of multichannel music?

What do you think about "Wave Field Synthesis" and other research in audio technology?

What is the next generation 10.2 channel sound system that you're working on?

[via Association for Computing Machinery]
[visit tmhlabs.com]

Saturday, November 15, 2008

The winners of the Giga-Hertz Award

The Giga-Hertz Award (given by the ZKM | Centre for Art and Media Karlsruhe and the EXPERIMENTALSTUDIO des SWR Freiburg) addresses composers working in the areas of electronic and acousmatic music. A Grand Prize and four Special Prizes are awarded once a year by an international jury.
The Grand Prize honors the artistic and life’s work of renowned composers of electronic and acousmatic music.
This year the Grand Prize of €15,000 goes to the British composer Trevor Wishart. The four Special Prizes, each worth €8,000, go to Natasha Barrett (born in Britain), Dai Fujikura (born in Japan), the Portuguese João Pedro Paiva de Oliveira and Åke Parmerud from Sweden.

The members of the 2008 jury are Ludger Brümmer, Detlef Heusinger, François Bayle, pioneer of electroacoustic and acousmatic music who developed the famous Acousmonium, Jonathan Harvey, Armin Köhler and Peter Weibel.
They said: "In particular, with his developments in the area of software synthesis, he has made complex sound processing understandable and applicable. In 1986, together with other composers, Trevor Wishart founded the Composers Desktop Project for Atari and Unix computers. Also to be emphasized is his work in musical education and the formulation of his aesthetic and technical approaches in pedagogically oriented writings such as 'Audible Design' and 'On Sonic Art'. [...] His music is enormously effective and has influenced generations of composers."

The award ceremony will be held at the ZKM | Karlsruhe on November 29th, 2008 (starting at 7.00pm), together with two award-winner concerts on November 28th and 29th (8.30pm).

[via giga-hertz.de]

Thursday, November 13, 2008

Ben Burtt interview - Upcoming Pixar

[photo via sfgate.com]

Sound Designer Ben Burtt moved from LucasFilm to Pixar a few years ago and headed up Sound Design on WALL•E. Have a read of this Q&A below with Burtt, thanks to Upcoming Pixar.

Some excerpts worth mentioning:

BB: I have always felt that the best way to get a robot voice is to have a human element and an electronic element and blend the two. So I worked out a circuit where I started with my voice and broke that down in the computer and then re-synthesized it. And the voice of EVE was done in a similar way. We used a woman at Pixar, who was named Elissa Knight. We started using her as a scratch track and once again, just like with me, once I ran it through the laborious computer process, we got results that we liked, and we felt we should keep it.

BB: I’ve always found, when you’re trying to create illusions with sound, especially in a science fiction or fantasy movie, that pulling sounds from the world around us is a great way to cement that illusion because you can go out and record an elevator in George Lucas’s house or something, and it will have that motor sound. It will be an elevator and you might associate it with that, but if you use it in a movie people will believe it’s a force field, or maybe it’s the sound of a spaceship door opening.
[...] It’s forging those connections between familiar sound and illusionary sound that I think is the basis of the success for a lot of the sounds that sound designers have put in these movies.

BB: I’ve been on this film for three years, so the work was being embedded right from the beginning; sometimes we would do some sounds and then do an animation test to try those sounds out. Those kinds of opportunities are great. So of course I’m very proud of that. What other film gives you a chance to do sound effects as well as key voices in the film? Maybe the only other big assignment would be to do a movie with no music and see where you could go...

[some fun facts about
WALL•E’s sound design - promotional PDF]

[read more via pixarplanet.com]

Wednesday, November 12, 2008

Daniel Teruggi: GRM at 50

On Friday, November 14, Electronic Music Foundation will honor the 50th anniversary of the Paris-based Groupe de Recherches Musicales (Music Research Group), one of the world's major pioneering organizations in electronic music, with guests like Daniel Teruggi, director of GRM, François Bayle, past director, and Marc Battier, professor at the Sorbonne.

Teruggi @ Tempo Reale, 17th May 2008


The story of GRM begins 60 years ago, in April 1948, when Pierre Schaeffer, an engineer for the French Broadcasting Company, took a sound truck to a railroad switching yard at Batignolles, in Paris, to record the sounds of steam locomotives, wheels, and whistles. He used the sounds to compose Etude aux Chemins de Fer (Railroad Study) and coined the term 'musique concrète' to mean music composed with sounds (as against symbols for sounds). Following an enormously successful broadcast of a Concert de Bruits (Concert of Noises) in October 1948, he initiated a succession of research projects that culminated in 1958 in the official formation of the Groupe de Recherches Musicales with François Bayle, Luc Ferrari, Iannis Xenakis, and several other composers. The goal was to explore the new creative possibilities in using all sounds in music. The result was the beginning of a musical revolution.

[via Suzanne Thorpe - Arts Electric]

[Listen to Daniel Teruggi speak at Arts Electric]
[more on the concert]

THX 1138

THX 1138 is a 1971 science fiction film directed by George Lucas, from a screenplay by Lucas and Walter Murch.
It was the first feature-length film directed by Lucas, a more developed version of his student film Electronic Labyrinth: THX 1138 4EB, which he made in 1967 while attending the University of Southern California, based on a one-and-a-quarter-page treatment of an idea by Matthew Robbins.
The film was produced in a joint venture between Warner Brothers and Francis Ford Coppola's then-new production company, American Zoetrope.
The film's use of the number 1138 has become an in-joke in popular culture, most commonly in works by Lucas and Steven Spielberg. Combinations of the title and number can be found in several Lucasfilm releases, including American Graffiti, and the Star Wars and Indiana Jones films.
According to Tomlinson Holman, the inventor of the THX system, the name of the technology was deliberately chosen because it contained both a reference to his name, and to Lucas's THX 1138. The original name was "Tom Holman's Crossover" (Crossover being sometimes referred to as Xover) or the "Tom Holman eXperiment."

[source: Wikipedia]

Clip from THX 1138 with commentary part 1 of 2


Clip from THX 1138 with commentary part 2 of 2



Photo set by Marc Wathieu


Buy more. Buy more now. Buy more and be happy.


[www.thx1138movie.com]

Tuesday, November 11, 2008

De Musica 2008: the creativity factory

DE MUSICA 2008, the week-long workshop by Nuova Consonanza, will be conducted by Alvise Vidolin. He is one of the protagonists of experimentation in electronic music and has collaborated with the most important composers of the 20th century. A pioneer of computer music, he is a sound director and a performer of live electronics.

The workshop focuses on real-time sound processing and the use of computers in contemporary music, through the analysis of electroacoustic works by composers Vidolin has worked with. He will also explore the latest real-time processing systems and their applications in the multimedia field.

Lessons will be subdivided into three sections:
theory
listening and analysis
experimental workshop and practice

Theoretical lessons will cover the following subjects: live processing of voices and instruments; sound spatialization; techniques of live interaction and performance practice; system design for live electronics; performance environment design.
Live-electronics works by Giorgio Battistelli, Adriano Guarnieri, Luigi Nono and Salvatore Sciarrino will be analyzed, for solo, ensemble or musical theatre.
The workshop will take place in the Max/MSP environment and will include the analysis and rehearsal of the pieces for the final concert.

[read more - via nuovaconsonanza.it]

Saturday, November 08, 2008

Kim Cascone: the grain of the auditory field

Kim Cascone on diffusing field recordings outside of a performance space:

[...] Everyday auditory fields are complex aggregations, pools of sound, local and asynchronous interlocking fields of cyclic patterns; loosely intersecting, meshing and mixing into a contiguous background. The local habits and routines of people fuse into a rhythmic din, a tapestry woven from minute activities. Sounds are shaped into a seemingly random structure; we hear but don't listen to it.

[...] Each smaller zone of noise forms an aural or auditory field that we circumscribe with a perceptual horizon. If we were able to zoom out and see an acoustic map we would see many smaller fields intersecting with one another. There are no acoustic walls or boundaries in an open space. We store these structures as acoustic mementos or snapshots in time.

[...] This led to the idea of creating a new field in an existing field by diffusing sounds already present or by adding new unexpected sounds. By using an array of speakers placed in strategic positions within a field one could diffuse sounds and move them in space.

[...] One solution would be to borrow from the practice of acousmatic diffusion and diminish the role of the artist by physically locating them somewhere where they wouldn't be visible. Another way to prevent bracketing is to not announce it as an event with a location and start time. This way the people who visit the space would be in their usual non-linear listening mode for that environment and receive any sounds heard as being part of the soundscape.

[...] If the growing movement of field recording makes its focus the sublimity of the auditory field then work needs to progress on not falling into formulaic formats because they are standard and easy to interface with. If field recording is to escape the ghetto of sound souvenir or audio puzzle then break the habits which end up relying on technology to make it interesting.
Auditory fields are not music and by trying to present them as such we end up depleting them of their grain and deadening their soul, leaving little of value to share with the listener.

[excerpts from ‘the grain of the auditory field’ - pdf - by Kim Cascone]

Field diffusion


[via kitchenbudapest]

Friday, November 07, 2008

Burtt blasts again!

Wall-E: DVD Bonus - Animation Sound Design


[via traileraddict.com]

Ladies and gentlemen, Ben Burtt

Collider has posted four must-watch videos of Ben Burtt giving a demonstration on how he brought Wall-E to life!

[Part 1 - download flv]


[Part 2 - download flv]


[Part 3 - download flv]


[Part 4 - download flv]


[via pixarplanet.com]

Thursday, November 06, 2008

Live!iXem 2008

Festival of experimental electronic music and arts

18>22 November 2008 - Palermo, Italy

V Edition



18 nov. 21.30 – Goethe Institut – Via Paolo Gili 4
Antony Pateras (prepared piano), Sean Baxter (percussion) and David Brown (prepared electric guitar).

19 nov. 21.30 – Malausséne – Piazzetta Resuttano 4
AMP2 + SECRET GUESTS Live performances

20 nov. 19.30 > 24.00 – Left – Via degli Schioppettieri 8
AV screenings by Torregrossa

21 nov. 21.30 – ASK 191 – Viale Strasburgo 191
SonoRoom Live sets: Marco Pianges, Dario Sanfilippo, Samuele Calabrò, Gandolfo Pagano.
DanceFloor DJ sets: Ajno [minimal <+/-> techno | dubstep] and more!

22 nov. 21.30 – ASK 191 – Viale Strasburgo 191
SonoRoom Live sets: Nino Secchia and Dario Sanfilippo, Talachtis, Mathias Manceck + Raffaella Piccolo, Il cielo di Baghdad.
DanceFloor DJ sets: D.M.D. Death Modern Dance, Claudio Bonanno and Gianluca Scuderi, Ajno.

[ixem.it]

Monday, November 03, 2008

Sound Design workshop: report


Our friends Gianpaolo D'Amico and Sara Lenzi have published the report (with sound excerpts, Italian only) of the Sound Design: new forms of creative communications workshop, which took place on October 26th, 2008 at the Festival of Creativity in Florence.

[leggi l'articolo - read the article]

[more photos here]

Saturday, November 01, 2008

Exciting WALL•E DVD Bonus Feature

Animation Sound Design: Building Worlds from The Sound Up – Ben Burtt, the acclaimed Oscar-winning sound designer, introduces viewers to the art of sound design using examples from WALL•E and historic footage of the early Disney sound effects masters at work.

[via slashfilm.com]

Friday, October 31, 2008

U.S.O. Project in Blow Up Magazine #126, November 2008

U.S.O. Project, aka Matteo Milani and Federico Placidi, interviewed by Michele Coralli in Blow Up magazine.

Blow Up is a monthly Italian magazine about “rock & other contaminations”: out rock, electronica, techno, house, experimental, industrial, improv/jazz, traditional.

[blowupmagazine.com]

[subscriptions]

The Conversation: opening sequence

Editor and Sound Mixer Walter Murch’s comments about this opening sequence [mp3]:


(click to enlarge)

A hovering perspective, hypnotic in its descent. Union Square bustles. Through the first transition programmable servo lens, engineered for this very shot, we home in on a thousand little theaters, settling on a disquieting mime. Long reaching shadows breed paranoia as the wondrous sound design is at once jazzy and dissonant, much like Harry Caul himself.

[click to watch - format: QuickTime H.264, 720×400 | Size: 37 MB]

[via artofthetitle.com]
[thanks to ut.tumblr.com]

Wednesday, October 29, 2008

Oscillators behind the glass

Between 1955 and 1983, Milan was home to one of the most important musical experiments of the 20th century: the Studio di Fonologia Musicale. It gathered not only the most technologically advanced machines of the time, but also the finest musicians, who converged there to produce the first true electronic music ever made in Italy.

[leggi l'articolo - read the full article]
[download the interview with Maddalena Novati - mp3]
[inside Rai Musical Phonology Studio]

[via altremusiche.it - by Michele Coralli]