Tuesday, December 30, 2008

U.S.O. Project @ Spark Festival 2009

Girl Running, an excerpt from the forthcoming DVD "InharmoniCity" (Zerofeedback/Synesthesia Recordings © 2009), has been accepted for performance at the Spark Festival of Electronic Music and Arts, Minneapolis.

Spark Festival of Electronic Music and Arts / February 18th, 2009 / Cedar Cultural Center, University of Minnesota, Minneapolis, MN.

Matteo Milani and Federico Placidi (aka U.S.O. Project - Unidentified Sound Object) + Giovanni Antignano (aka Selfish) / Moving Visual Art


Monday, December 22, 2008

Winter Wishes 2008

Consequenzas is a free, hallucinatory and irrational experience based on the counterpoint between a male voice-over (reading a text from T.S. Eliot's "Four Quartets") and an arabesque of sounds derived from different compositions for solo instruments by Luciano Berio, known as the Sequenze.
The electroacoustic transformation of the materials and the strong alteration of the voice's prosody evoke a distant, intimate place (in space and time), split between myth and religion, a place where men's eyes can barely bear the light of truth.

Matteo Milani, Federico Placidi


Il tempo presente e il tempo passato
Son forse presenti entrambi nel tempo futuro,
E il tempo futuro è contenuto nel tempo passato
Se tutto il tempo è eternamente presente
Tutto il tempo è irredimibile.
Ciò che poteva essere è un'astrazione
Che resta una possibilità perpetua
Solo nel mondo delle ipotesi.
Ciò che poteva essere e ciò che è stato
Tendono a un solo fine, che è sempre presente.
Passi echeggiano nella memoria
Lungo il corridoio che non prendemmo
Verso la porta che non aprimmo mai
Sul giardino delle rose. Le mie parole echeggiano
Così, nella vostra mente.
Ma a che scopo
Esse smuovano la polvere su una coppa di foglie di rose
Io non lo so.
Altri echi
Vivono nel giardino. Li seguiremo?
Presto, disse l'uccello, trovateli, trovateli,
Girato l'angolo. Attraverso il primo cancello
Nel nostro primo mondo, seguiremo noi
L'inganno del tordo? Nel nostro primo mondo.
Là essi erano, dignitosi, invisibili,
Si muovevano sulle foglie morte senza calcarle,
Nel caldo autunnale, per l'aria che vibrava,
E l'uccello chiamava, rispondendo a
La musica non udita nascosta tra i cespugli,
E c'era lo sguardo non visto, perché le rose
Avevano l'aspetto di fiori che sono guardati.
Là essi erano, come ospiti nostri, accettati e accettanti.
Così ci muovemmo, noi e loro, cerimoniosamente,
Lungo il vuoto viale, fino al rondò di bosso,
A guardar giù nel laghetto prosciugato.
Secco il laghetto, secco cemento, orlato di bruno,
E il laghetto si riempì d'acqua alla luce del sole,
E adagio adagio si alzarono i fiori del loto,
Scintillò la superfice al cuore della luce,
Ed eccoli dietro di noi, riflessi nel laghetto.
Poi passò una nuvola, e il laghetto fu vuoto.
Via, disse l'uccello, perché le foglie erano piene di bambini
Che si nascondevano, tutti eccitati, sforzandosi di non ridere.
Via, via, via, disse l'uccello: il genere umano
Non può sopportare troppa realtà.
Il tempo passato e il tempo futuro
Ciò che poteva essere e ciò che è stato
Tendono a un solo fine, che è sempre presente.



Time present and time past
Are both perhaps present in time future,
And time future contained in time past.
If all time is eternally present
All time is unredeemable.
What might have been is an abstraction
Remaining a perpetual possibility
Only in a world of speculation.
What might have been and what has been
Point to one end, which is always present.
Footfalls echo in the memory
Down the passage which we did not take
Towards the door we never opened
Into the rose-garden. My words echo
Thus, in your mind.
But to what purpose
Disturbing the dust on a bowl of rose-leaves
I do not know.
Other echoes
Inhabit the garden. Shall we follow?
Quick, said the bird, find them, find them,
Round the corner. Through the first gate,
Into our first world, shall we follow
The deception of the thrush? Into our first world.
There they were, dignified, invisible,
Moving without pressure, over the dead leaves,
In the autumn heat, through the vibrant air,
And the bird called, in response to
The unheard music hidden in the shrubbery,
And the unseen eyebeam crossed, for the roses
Had the look of flowers that are looked at.
There they were as our guests, accepted and accepting.
So we moved, and they, in a formal pattern,
Along the empty alley, into the box circle,
To look down into the drained pool.
Dry the pool, dry concrete, brown edged,
And the pool was filled with water out of sunlight,
And the lotos rose, quietly, quietly,
The surface glittered out of heart of light,
And they were behind us, reflected in the pool.
Then a cloud passed, and the pool was empty.
Go, said the bird, for the leaves were full of children,
Hidden excitedly, containing laughter.
Go, go, go, said the bird: human kind
Cannot bear very much reality.
Time past and time future
What might have been and what has been
Point to one end, which is always present.

Tuesday, December 16, 2008

An interview with Michael Rubin, pt.3

by Matteo Milani, October 2008 

(Continued from Page 2)

MM: User interface. I read that the sound tracks were shown as parallel vertical bars, with time represented on the vertical axis, so the SoundDroid's timeline scrolled vertically. Is that correct? Was the purpose to mimic the optical track on film? 

MR: It wasn't because of the optical track, not really. It was imitating the physical orientation of things in the mixing theater (the mags were loaded vertically on dubbers), because of the physical cue sheet that runs vertically. The cue sheet had several columns for notations of footage, fades, volume levels, and equalizations used in mixing sound tracks, where each column usually represents one track. It was a dubbing log used to alert the mixer to events coming from the various dubbers in the old days of mixing. I thought the SoundDroid was fun; no one had seen a program like that before, doing the patching right there on the touchscreen - "I want a reverb on this, I want to patch that track to a reverb unit." It was very exciting to show people those kinds of functions. 

Novices were stunned. Professionals thrilled. SoundDroid allowed a user to go to a digital library of effects, grab one, and drop it on a track running vertically on the Cue Sheet. There you could slide it around, up and down in time, or over to another track. There were handles on the edges of each sound, so it could be shortened or lengthened into endless loops as desired. By using the tools on the right side of the screen, any given sound (or entire tracks) could be augmented with professional audio controls, including EQ, pan, and an enormous variety of filters. All the tracks were slaved to a videotape with the corresponding picture and a pointing finger - the "now line" showed where you were scrolling through the project. - excerpt from Droidmaker 

SoundDroid sound editor "cuesheet" screen. Each shaded vertical stripe represents a track of sound, with time going from the top to the bottom of the screen. The amplitude envelope in each track is shown by the solid black outline. Text annotations show the incipits of spoken text fragments, among other things. 

 from Droidmaker, by Michael Rubin; photo courtesy of Lucasfilm Ltd. 

MM: Can you talk about the SoundDroid's early “spotting” system for searching sound effects libraries? 

MR: We were taking orders for SoundDroids, but it was still not ready for shipping. The company asked: "Is there a subset of what the SoundDroid did that is working and that we can sell now?" They decided that the spotting system was the part of the system that worked best. They built the SoundDroid Spotter and digitized most of the Lucasfilm sound library at that time. I don't think they ever sold it, at least not between 1985 and 1987. I demonstrated it at NAB and AES, but it was just kind of abandoned; it went nowhere. This is my recollection of the product. I've never heard about it from anyone outside the company. 

[...] The SoundDroid was too big, too general, and too expensive. There were several smaller markets that a scaled-back version of the technology would more readily be able to address. [...] The first logical result was called SoundDroid Spotter, a stand-alone sound effects spotting station that utilized the database on the Sun with basic processing tools. Noise reduction was another. Specialized synthesis was a third. It would take years before the costs of storage and processing would make a digital post-production workstation viable for movie sound. - excerpt from Droidmaker 

MM: Did you interview Ben Burtt?

MR: Yes, I know Ben very well, he's great. When I was writing the book, he spent many months of his time talking to me. At that time he was working on Episode III at Lucasfilm. After Revenge came out, he left Lucasfilm, after a 30-year career with George, and went to Pixar. I thought it was a very interesting thing to see, because back in the '80s, he was one of the people who said: "What are these guys in the Computer Division doing all day? It looks like they're not doing anything." And here he is now, going over to Pixar. He's an unbelievable talent; what he did for the sound of Wall-E is an absolute tour de force. 

MM: Burtt, like Walter Murch before him, moved into picture editing, alongside his main career in sound. 

MR: It's fun, because when Ben first got hired by Lucasfilm, it was because Murch wasn't available. Lucas needed a “young Walter Murch” for Star Wars. His job at that time was partly sound recording, but he was also a production assistant. He said he was driving Carrie Fisher to get her hair cut, or dropping off storyboards at ILM, among other things. 

MM: What's the relevance of the GUI (Graphical User Interface) in audio/video applications? What do you expect from its evolution in the long run? 

MR: Predicting the future is notoriously difficult. The best way to predict the future is to invent it (Alan Kay). Haptics will change the way we interact with machines via tactile feedback. That will be a cool way to interface with the computer. These devices are very expensive today, for industrial and medical use, which means this technology will be cheap in ten years. And I think screens will go away in favor of projections of things in space. I don't think I would really try to predict which interface it will be; I'm still working a year out. Watch Minority Report, that's good thinking about it. I'm pretty sure it will be something like that, where you move around, you pick this up and control something in the computer, somewhere else. It seems logical to me. Just recently I saw an impressive demonstration of a head-mounted device that non-invasively reads your brain signals and moves objects on a computer. It is early, but I cannot imagine that kind of technology not changing the way we work with picture and sound. 


[Prev | 1 | 2 | 3 ]



Rubin, M. (2006) Droidmaker: George Lucas and the Digital Revolution - Triad Publishing Company
Roads, C. (1996) The Computer Music Tutorial - MIT Press


An interview with Michael Rubin, pt.2

by Matteo Milani, October 2008
(Continued from Page 1)

MM: EditDroid and SoundDroid in the post production workflow: what were their respective roles?

MR: They were designed to work together. One of the biggest selling points, in fact, was that these two devices connected. It wasn't just cutting pictures and cutting sound, you could send the edits from EditDroid to SoundDroid and then you could do your spotting, your mixing. Everything was there. Those features changed the workflow tremendously.
Avid did acquire the EditDroid technology, but didn't need it. They had actually started before the EditDroid licensing and invented their own product in 1989. Their acquisition of the Lucas technology was more about being the inheritor of it; they wanted to feel like the continuation of the work. There were a couple of interesting years when a product called AvidDroid appeared, but it was only a marketing thing.
Primitive graphical and sound editors developed along separate paths until the early 1980s. The EditDroid and SoundDroid systems developed at Lucasfilm in California showed how random-access media, interactive graphics, and automated mixing can be combined into a picture-and-sound editing system.
Audiovisual editing can be quite complicated, since the two domains can be interlinked in complex ways. For example, when a video editor changes a scene, many channels of sound must be redone to fit the new sequence of images. Dialogue, effects, and musical soundtracks are all affected by changes in the visual domain. The visual information is often recorded on many tracks, each one corresponding to a particular camera angle or special effects overlay. To manage the complexity of this information, the usual approach taken is to prepare the raw material as a database of audio and visual data (Hawthorne 1985). The first step is to transfer the material to be edited to a random-access medium such as magnetic or optical disk. The author or director annotates the material with pertinent facts about a particular segment, for example, the scene, the take, and other data.
This kind of audiovisual editing is nondestructive; nothing is physically cut or deleted. Edits can be rehearsed and adjusted ad infinitum. Each edit causes the system to write a description of the edit in an 'electronic logbook'. Since the audio and visual editors share a common logbook, any decisions made in picture editing are reflected in the corresponding soundtracks. At the end of an editing session, the result is a list of entries in the logbook that can be used to reconstruct the desired sound and image combination. If the ultimate medium is, for example, a 70-mm film with 6-channel soundtrack, the logbook shows which segments of film and sound must be spliced and mixed to create the finished product.

[The Computer Music Tutorial, Curtis Roads]
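The shared "electronic logbook" Roads describes is essentially what later systems call an edit decision list: nothing is cut, and the finished program is reconstructed from the log. A minimal sketch in Python, with invented field names (nothing here reflects the actual SoundDroid or EditDroid data format):

```python
from dataclasses import dataclass

@dataclass
class Edit:
    """One nondestructive edit: a span of a source reel placed on a track."""
    source: str   # reel/clip identifier on the random-access medium
    src_in: int   # first frame used from the source
    src_out: int  # last frame used (exclusive)
    track: str    # destination track, e.g. "V1" (picture) or "A3" (sound)
    rec_in: int   # where the span lands on the program timeline

def conform(logbook):
    """Reconstruct, per track, the ordered list of source spans to
    splice and mix -- the 'finished product' described by the log."""
    timeline = {}
    for e in sorted(logbook, key=lambda e: e.rec_in):
        timeline.setdefault(e.track, []).append((e.source, e.src_in, e.src_out))
    return timeline

log = [
    Edit("reel_042", 100, 148, "V1", 0),
    Edit("fx_birds", 0, 48, "A3", 0),
    Edit("reel_017", 20, 68, "V1", 48),
]
print(conform(log)["V1"])  # → [('reel_042', 100, 148), ('reel_017', 20, 68)]
```

Because picture and sound edits share one log, re-cutting a scene only rewrites entries; the sound tracks can then be conformed from the same list.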

MM: You were one of the first employees at Sonic Solutions. What was your duty?
MR: When my boss Robert J. Doris and Mary C. Sauer started a company called Sonic Solutions, they brought me over. Counting the founders Bob, Mary and Jeffrey (Borish), I was employee number four. James Andy Moorer, the inventor of the SoundDroid, was employee five, although he had been consulting with Bob for a while by the time he joined. It seems a little odd, but when he left Lucasfilm, he came to Sonic after me.
[...] Andy Moorer joined the Sonic team and the company debuted NoNoise®, a Macintosh-based system that applies proprietary DSP algorithms that eliminate broadband background noise, as well as AC hum, HVAC buzz, camera whine and other ambient noises. NoNoise could also reduce overload distortion, acoustical clicks/pops, transients caused by bad splices and channel breakup from wireless mics—without affecting the original source material. - Mix Magazine
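The idea behind that kind of broadband de-noising can be illustrated with a toy spectral-subtraction sketch: estimate a noise spectrum from a noise-only stretch of tape, then subtract it from each frame's magnitude spectrum. This is the textbook technique only, a rough analogue of (not a reconstruction of) NoNoise's proprietary DSP:

```python
import numpy as np

def spectral_subtract(signal, noise_sample, frame=256):
    """Toy broadband de-noiser: subtract an averaged noise magnitude
    spectrum from each frame, keeping the noisy phase."""
    # estimate the noise floor from a noise-only recording
    noise_frames = [noise_sample[i:i + frame]
                    for i in range(0, len(noise_sample) - frame + 1, frame)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(signal))
    for i in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[i:i + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
        phase = np.angle(spec)
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * phase), n=frame)
    return out
```

Note how the method depends on having a clean noise sample to profile, which is exactly why the tapes Rubin mentions below, cut right at the start of the song, were so hard to work with.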
The first restoration project they did was the live album "Live At The Hollywood Bowl" by the American rock band The Doors. It was recorded on July 5, 1968 but not released until 1987. Starting in 1968, every couple of years, whenever someone had new technology, Bruce Botnick, co-engineer of the Doors' recording, would bring the tapes in to try to fix Jim Morrison's track, which was partly unusable due to a cable fault. It was an amazing thing when Sonic finally solved the seemingly impossible problem.
[...] While still in prototype form, NoNoise was used to resurrect a long-lost recording and film of the Doors playing at the Hollywood Bowl in July 1968. Because of a faulty microphone cord, much of Jim Morrison's lead vocal was nearly obliterated by loud clicks and crunching sounds. ''I sent them a digital tape and in three weeks they sent us back a digital tape and it was glorious,'' said Bruce Botnick. ''We wound up saving 12 minutes of the show that would have been unusable.'' - The New York Times
I did sound restoration for many classic records, including a couple of Grateful Dead albums (Live/Dead and Europe '72). I went through the albums in their entirety, listening on headphones, looking at the waveform. If I heard a click, I had to manually set the gate, interpolate, and listen for any anomalies I might have created. That was my job in 1987. It took a long time, and I usually had the night shift. When de-noising, I had to find some samples of pure noise, but often the tapes were cut right at the beginning of the song, so you didn't have any ambience. It was sometimes very hard to find a piece of noise signal you could use to filter the track.
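Rubin's manual workflow (gate, interpolate, listen back) can be sketched in a few lines. This is a generic illustration of click repair by sample interpolation, not Sonic Solutions' actual algorithm; the threshold-based detector stands in for the gate he set by eye:

```python
def repair_clicks(samples, threshold):
    """Replace suspected click samples with linear interpolation.

    A sample is flagged when it jumps more than `threshold` from its
    predecessor -- a crude, automated stand-in for a manually set gate.
    """
    out = list(samples)
    i = 1
    while i < len(out) - 1:
        if abs(out[i] - out[i - 1]) > threshold:
            # find the end of the damaged run
            j = i
            while j < len(out) - 1 and abs(out[j] - out[i - 1]) > threshold:
                j += 1
            # linearly interpolate across the gap [i, j)
            a, b = out[i - 1], out[j]
            for k in range(i, j):
                out[k] = a + (b - a) * (k - i + 1) / (j - i + 1)
            i = j
        i += 1
    return out

# one impulsive click in an otherwise smooth ramp
print(repair_clicks([0, 10, 900, 30, 40], threshold=100))
```

Real restoration tools use far more careful detection and model-based interpolation, but the listen-and-patch loop Rubin describes is the same in spirit.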
Then I went to CMX to help them design and release their own nonlinear editing system; basically it was very much like a next-generation EditDroid. A fully digital system was coming, but it was cost-prohibitive. At that time it wasn't ready for feature films; it couldn't cut back to the negative. There were rough years from 1989 to 1994-95 when people were really taking risks using something like an Avid.

MM: Where was the SoundDroid originally located?
MR: The SoundDroid sat in a totally acoustically neutral room called the Sound Pit, on the bottom floor of the C building, a beautiful kind of redwood building in the middle of the Lucasfilm campus, a bunch of unlabeled buildings on Kerner Blvd, San Rafael, CA. If you walked up to the building, you didn't know it was Lucasfilm. It just said "Kerner Optical Company" on the door; there were no guards, it was an unsecured campus, because no one knew it was there.
The SoundDroid was based on a Sun workstation. This was about 1985. At the time the Macintosh had just come out, and the Mac pioneered the startup sound with its little chime. They were making a joke and decided to make a startup sound for the SoundDroid. Partly for fun, partly because they needed to test everything out: they had to run through a test of all the loudspeakers and all the circuits on the SoundDroid.
Andy Moorer came up with a sound to test the speakers out. Today it is known as the THX logo (the Deep Note). At the time no one had ever heard this before, so in this little tiny room, perfectly silent, with speakers all around, it started swirling and collapsed and resolved into this chord, and the hair would stand up on your arms and your neck, and a little tear would form in your eyes. That's how it started every day. Whenever we went into the Sound Pit to do a demonstration, we often just turned it on while people were sitting there and watched them freak out.
[Prev | 1 | 2 | 3 | Next]

An interview with Michael Rubin, pt.1

by Matteo Milani, October 2008

Michael Rubin is an educator, author, filmmaker and entrepreneur. After graduating from Brown University with a degree in neuroscience, he began his career at Lucasfilm's Droid Works, where he was instrumental in developing training for the EditDroid and in introducing the Hollywood market to nonlinear editing for film. From 1985 to 1994 he designed editing equipment and edited feature films and television shows in Hollywood. In 1991, he was also editor of one of the first television programs to be edited on the Avid Media Composer. Since then, Rubin has lectured internationally, from Montreux to Beijing, and has published a number of texts on editing for professionals as well as consumers. His pioneering book for professionals, Nonlinear: A Field Guide to Digital Film and Video Editing (1990), is now in its fourth edition and is widely used in film schools. Rubin has been a teacher to hundreds of professional film editors. Today he continues to teach, write, shoot video, and consult. I reached Mr. Rubin during his stay in Italy for VIEW Conference 2008. We talked about his successful book Droidmaker: George Lucas and the Digital Revolution (2006) and its expanded universe.

MM: Would you like to describe how the project about "Droidmaker" started?
MR: The first thing that got me to write the book was that I was just so surprised no one else had already done it. The descendants of Lucasfilm's efforts were everywhere I looked. Secondly, I saw Episode II and realized that I only had a few more years before the entire saga was complete; the time to produce a book was upon me. And thirdly, all those guys were alive; everyone in the book is still alive, and young enough to remember. You know, if you wait too long, suddenly someone dies and you lose the chance to talk to them. The stories from everybody change in a certain way as people get older.
I didn't just interview them. I debated with them. Sometimes I would get them together to discuss what happened, because if you just ask someone: "What happened here, why did you do that?" and so forth, he tells you what he remembers, which may or may not be factual. So I would talk to someone else, asking for the same story, and he would tell me a different story, and then I would ask a third person and he would tell me a third story. Then I would go back to the first person saying: "These guys said this. Is it possible? Because I think from my research that the other guy has actually got closer, because you were out of town..." We worked through these stories, which was very unjournalistic. I wasn't just taking their accounts; I was trying to figure out how to put the puzzle together.
In many cases - not so much with the SoundDroid (also known as the ASP, Audio Signal Processor), but very much so with Pixar - there are a lot of people involved in the story. I would come up with a scenario of what happened in the seventies and eighties and eventually all these people would agree, but it's not always the way any one person remembered it. George Lucas asked me not to trust written reports of what happened from newspaper articles, but only direct interviews with primary sources. If you have an article, you have to go and figure out whether it really happened or not. A journalist is different from a historian. For a while I felt like a journalist, but by the end of the book I felt like a historian. That was an interesting process for me.

MM: Did you join "The Droid Works" to introduce the EditDroid and SoundDroid post production tools to Hollywood?
MR: I worked between the engineers and the editors. It helped the salesmen convince people to use a “droid” if someone else was already using one. The engineers knew about editing, but they didn't always know what was going on in Hollywood. I sat in the middle of that. On the SoundDroid I sat with Stevie Wonder, the Grateful Dead, Barbra Streisand. These people came to see it and I demonstrated it to them. I had that kind of role. I was a teacher of the editors, a support for them. I learned a lot about what a computer could do. I was constantly reporting back to engineering about the user interface and what it needed to do, what was lacking, and I helped prioritize what we would fix, what was most important. Sales needed one thing to sell to a client; editors needed another, or they wouldn't be able to finish the project. If you don't finish the project you'll never sell any more systems, but meanwhile, in the middle of the project, sales are trying to close a sale today, and we need the money from that sale to help pay for the engineering. So how do you prioritize? It's tricky. I wasn't the final decider, but I would help in those debates. I continued in that role for many years, alternating between training, demonstrating, product definition, and editing myself. Sometimes people would have a smaller project, and they needed someone who could cut their commercial, low-budget feature film, or music video. Sometimes they brought an editor and I trained him. They were usually happy when I cut it, but at that time, at 24 years old, I was new and young, with little experience. I was an apprentice to Gabriella Cristiani and other fine editors, and I learned about editing very quickly from watching them. I also learned a lot about user interface design, which helped me later in other types of work. It was a great job and a magical time, the '80s in this all-new field.

MM: How did movie people perceive the experience of navigating in a nonlinear medium rather than an analog linear one in the early days of digital revolution? Were they scared about those new technologies?
MR: They were unhappy. There was usually a silence for a while. On the EditDroid I was usually editing the scene and they were seated watching me edit. Finding a shot, for me, meant pressing a button, and it was on the screen. It was easy to pick up shots and move them around. They couldn't believe it. The SoundDroid was more far out. Never before could they see the sound! Seeing a waveform of sound had never been done before. We could zoom into a waveform, go down to the sample, look at it, zoom out. You literally looked and just said: "That's where the amplitude kicks up, and that's where you cut it." In both cases people couldn't hide their excitement and their fear. It was a combination of feelings. In a demonstration the products looked good; the problems were often more in the production flow. I was not really showing the drawbacks of those expensive computers; they couldn't be aware of what it was not good at, what was hard to do. There's a process to get the images or sound into the computer, with practical limitations. But just in its functioning there were amazing, wonderful things to show them. I was really so excited that they got excited, just all the fun things we could do. I truly challenged them; they hadn't seen anything work as fast. That's how people felt about it. I felt good with people who really cared about great tools. 
Most people don't like changing the way they work. They've been doing something one way for their entire career. The cost is really high to suddenly work a different way. This makes them nervous. It would take almost ten more years before this type of equipment took over Hollywood. Some of them used that time to experiment with it, some of them decided to not try at that stage of their careers. I was waiting for the senior people to retire and junior people to come up.
It's hard to remember how scary new technologies can feel for someone. We are good at computers now, but at that time no one had computers; no one had experience with computers. They routinely crashed, they were complicated, there was no help menu, no manual, no internet. People called me at night all the time, because a workstation cutting motion pictures had to work all the time. I can't tell you how often it crashed halfway through a project. Twenty years ago there was often no backup; you lost everything.
[...] more frequently than anyone would have liked, the machines would crash. Not just a system crash, which required rebooting, but a corrupted hard disk. No matter how often you saved your work, there was nothing that could save you from the death of the disk. - excerpt from Droidmaker
Sometimes this computer was very sensitive. If you did something in the wrong order, it crashed. There was no logic to it. I was very good at using the system; I could keep it from crashing; I worked with it all the time. I knew how it was built and how to do things the right way.
When you teach someone, sometimes they will do things the other way. When I worked with Gabriella Cristiani on Bernardo Bertolucci's The Sheltering Sky, I sat next to her through many days of work, watching her hands. You're watching her cut; she's not asking, but you're going to see what she's doing. Some days I sat watching her fingers, and if she was about to press the wrong button, I stopped her gently. I didn't want to bother her, but at the same time I didn't want to crash the system. In those early days you just had to be like a sidekick, trying to keep them from making the computer screw up. I was good at that (laugh).

MM: Did you coin the terms "non-linear editing", "random-access editing", "virtual editing"?
MR: I didn't invent the terms. A lot of companies were making this kind of equipment at that time, and the sales people would eventually use these words. Sometimes they used all the words: "This is a random-access virtual non-linear editing system, with film style." I tried to figure out what each expression meant. Random access is really about finding stuff; it means that you access stuff in a non-linear way. I want that shot, and I don't have to go looking through a bunch of stuff. So I looked at the terms I'd been hearing from people and reading in magazines, and I decided that the right term for this kind of editing was non-linear. Everything else was useful in a certain way, but I decided to pick one term, use it, and try to educate everybody.
By 1990 I had written my first book for filmmakers, called "Nonlinear"; it was the right moment in time. It was still five years before nonlinear editing was widely adopted, but the book helped bring together videotape editors, who knew a lot about technology and nothing about cutting movies; film editors, who knew everything about cutting movies but nothing about video or computers; and computer users, who were great with the equipment but didn't know much about editing. It created a language for everybody to use and to help each other going into and through the nineties. So I didn't invent the term, but I probably popularized it as the defining name for these editing tools.

MM: What's your thought on the physical, chemical-mechanical approach to video/sound editing (film, tape), compared to numbers as a symbolic representation of images and sound?
MR: Film editors are like tailors: they make beautiful clothes and have a feeling for the material. They can feel the scene and they know that it's good. I did a lot of darkroom work (photography) and I loved the smell and feel of film. Something is lost when you lose the tactile nature of film and sound. You can do more with the computer, there's no question. If you're a craftsman, an artist like a sculptor, and you like working clay, you can make beautiful things out of that. No one can tell you that doing a 3D model on a computer, which is way more flexible, is better than clay. You are good at clay. Something is lost, there's no doubt about it, when you leave clay behind. But computers are great and have other attributes. I was sad to facilitate the loss of the art form of cutting film. At the same time, you can't stop the advance of technology or pretend it wasn't going to happen. I thought the best we could do, while promoting this new technology, was to embrace what was going to be lost and to work on transferring people's knowledge of how to use it, teaching young people how to do it. I believe in apprenticeship; that's important.
[1 | 2 | 3 | Next]

Sunday, December 14, 2008

David Lynch on Sound

Via FilmInFocus, here's an excerpt from the interview with David Lynch included in the book Soundscape: The School of Sound Lectures 1998-2001, edited by Larry Sider. Since 1998, the School of Sound has presented a stimulating and provocative series of masterclasses by practitioners, artists and academics on the creative use of sound with image. At the School of Sound you will not learn about hardware or software. Directors, sound designers, composers, editors and theorists reveal the methods, theories and creative thinking that lie behind the most effective uses of sound and music.

SOS: A final word on the relationship between sound and image in cinema…?

DL: Sound is fifty percent of a film, at least. In some scenes it’s almost a hundred percent. It’s the thing that can add so much emotion to a film. It’s a thing that can add all the mood and create a larger world. It sets the tone and it moves things. Sound is a great “pull” into a different world. And it has to work with the picture – but without it you’ve lost half the film.

It’s so beautiful. It has to do with all the parts coming together in a correct way. And certain stories allow more to happen in terms of cinema than other stories. But with sequences paced correctly, and the sound and the picture working together, it becomes like music. It’s like a symphony where you are conducting with great musicians and everybody is working together. And the groundwork has to be set-up in a certain way because it slides into this thing where all you’ve done before is now the payoff. And, because of what’s gone before and the way it’s gone before, this payoff can be unbelievable. Everything is working together and it can transport you. It can give you a feeling that you can’t have in any other way. And it can introduce ideas that are so abstract that you’ve never thought of them or experienced them.

But it has to do with the way cinema can work, it’s really a rare event, because there’s not that many people experimenting with cinema – it’s gone down to telling a surface story. But there’s this form – people, the audience, now know the form. They know that there’s a certain amount of time spent introducing this, then there’s this, and then there’s the next part – so they feel the end coming. And so there’s not a lot of room for surprises. There are a lot of ways to make the cinema. There are stories that will change the form and those are the kinds of stories I really love. And these stories make you work with the sound. When it works, it’s a thrill, it’s a magical thing and it takes you to, you know, a higher place.



Friday, December 12, 2008

Chris Watson: Whispering in the leaves

Acclaimed British sound artist Chris Watson creates a sensory journey fusing wildlife and location recordings from the Americas. You can experience the soundtrack of the South American rainforest, diffused through the tropical foliage of a botanical environment, at the Perth Planetarium Pyramid (13 February–8 March).


Saturday, December 06, 2008

Waveform Series by Sakurako Shimizu

Waveform Series is a set of laser-cut waveform shapes, as displayed in sound editing applications, by Japanese jewelry designer Sakurako Shimizu. She used human sound sources such as a yawn, a sneeze ("atchoum"), a giggle and a "wow", as well as the sound of a church bell. And what about the "I do" Wedding Band?

Bell (Cuff)

Giggle - detail view (Necklace)