
Saturday, October 31, 2009

University of Illinois Experimental Music Studios

The University of Illinois Experimental Music Studios were founded in 1958 by Lejaren Hiller and were among the first of their kind in the Western Hemisphere. Faculty members and students working in these studios have been responsible for many developments in electroacoustic music over the years, including the first developments in computer sound synthesis by Lejaren Hiller, the Harmonic Tone Generator by James Beauchamp, expanded gestural computer synthesis by Herbert Brün, the creation of the Sal-Mar Construction by Salvatore Martirano, and the acousmatic sound diffusion and multi-channel immersive techniques researched and applied by Scott Wyatt in electroacoustic music and performance. Today the facility continues as an active and productive center for electroacoustic and computer music composition, education and research.


EMS - Experimental Music Studios | 50th Anniversary CD Set
Carla Scaletti: excerpt from Cyclonic (binaural mix of a multichannel version, 2008) [.mp3 - 8:22]

Taking its name from the rotational motion associated with powerful meteorological events, Cyclonic was inspired by the awesome power of the weather in east central Illinois and plays at the edges between events as recorded, events as experienced, events as remembered, and events as imagined.
Pitches were derived from the frequencies in the National Weather Service alert signal, and the concept of a Cycle is abstracted in various ways ranging from an endlessly accelerating pan to endless (cyclic) increases in the pitches of synthetically generated sirens and filterbanks processing synthetic wind.
Apart from rain, thunder, and wind sounds recorded in downtown Champaign, the entire piece was synthesized in Kyma.

[via ems.music.uiuc.edu]
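
One way to read the "endless (cyclic) increases in pitch" described above is the classic Shepard/Risset glissando: several octave-spaced partials rise together while a spectral envelope fades them in at the bottom and out at the top, so the sweep can repeat forever. The short NumPy sketch below only illustrates that general idea; it is not Scaletti's Kyma patch, and every constant in it (sample rate, partial count, cycle length) is an arbitrary assumption.

# Illustrative Shepard/Risset "endless glissando" (NumPy only).
# This sketches the cyclic-pitch idea in general terms; it is not
# the Kyma implementation used in Cyclonic, and all constants are arbitrary.
import numpy as np
import wave

SR = 44100            # sample rate (Hz)
DUR = 20.0            # total duration (s)
N_PARTIALS = 6        # octave-spaced partials
F_LOW = 55.0          # bottom of the pitch cycle (Hz)
CYCLE = 10.0          # seconds for one full upward sweep

t = np.arange(int(SR * DUR)) / SR
out = np.zeros_like(t)

for k in range(N_PARTIALS):
    # Position of this partial in the cycle, wrapping from 0 to 1 forever.
    pos = ((t / CYCLE) + k / N_PARTIALS) % 1.0
    # Log-frequency rises linearly through N_PARTIALS octaves, then wraps.
    freq = F_LOW * 2.0 ** (pos * N_PARTIALS)
    # Raised-cosine loudness envelope: silent at the wrap points, loudest
    # in the middle of the spectrum, so the wrap-around is inaudible.
    amp = 0.5 * (1.0 - np.cos(2.0 * np.pi * pos))
    # Integrate frequency to obtain a continuous oscillator phase.
    phase = 2.0 * np.pi * np.cumsum(freq) / SR
    out += amp * np.sin(phase)

out /= np.max(np.abs(out))  # normalize to [-1, 1]

with wave.open("risset_glissando.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(SR)
    f.writeframes((out * 32767).astype(np.int16).tobytes())

Listening to the result makes clear why the wrap points have to be silent: it is the amplitude envelope, not the frequency law, that sustains the illusion of a pitch that rises without end.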


Memories about the Experimental Music Studios by alumna Carla Scaletti, president of Symbolic Sound Corporation and designer of the Kyma language:

I came to Illinois because of a book I found in the Texas Tech University library: Music by Computers, edited by Heinz Von Foerster and James Beauchamp. I had known for several years that I wanted to make music with computers, so when I found this book and noticed that most of the authors were at the University of Illinois, I immediately applied and was accepted into the doctoral program. I arrived a week before classes to take the entrance exams, asked Scott Wyatt if I could help him get the studios ready for the fall semester, and was delighted when he put me to work soldering patch cords.

Illinois was an environment where virtually everyone--whether faculty, student, or staff--was actively experimenting and creating software, hardware, and music. Faculty members did not act as mentors but as colleagues who, by actively engaging in their own creative work, served as examples of artists questioning the status quo and postulating alternative solutions. Sal Martirano was just learning to program in C in preparation for his YahaSalMaMac and would hold after-hours seminars on combinatorial pitch theory in his home studio, where we read articles by Donald Martino over glasses of wine and freshly sliced watermelon, seated right next to the SalMar Construction: one of the earliest examples of a complex system for music composition and digital sound synthesis. Herbert Brün had written the beautifully algebraic SAWDUST language and was using it to compose I toLD YOU so. Jim Beauchamp had just finished the PLACOMP hybrid synthesizer and was doing research in time-varying spectral analysis of musical tones (and, contemporaneously with Robert Moog, had built one of the first voltage-controlled analog synthesizers: the Harmonic Tone Generator). Sever Tipei was writing his own stochastic composition software, and John Melby was using FORTRAN to manipulate and generate scores for Music 360 (the predecessor to Csound). And in an abandoned World War II radar research loft perched atop the Computer-based Education Research Laboratory (home of PLATO), Lippold Haken and Kurt Hebel were designing their own digital synthesizer (the IMS) that eventually evolved into a microcodable DSP (prior to the advent of the first Motorola 56000 DSP chip). The CERL Sound Group's LIME software was among the first music notation and printing programs; I saw it demonstrated at the annual Engineering Open House and asked if I could use it to print the score for my dissertation piece Lysogeny.

I practically begged Scott Wyatt to let me work as his graduate assistant in the Experimental Music Studios and, thanks to Scott, Nelson Mandrell and I had an opportunity to help build a studio: Studio D (at that time the Synclavier Studio, now the studio where Kyma is installed), as well as to experience the Buchla voltage-controlled synthesizer and the joy of cutting & splicing tape. All of these experiences, plus my explorations of Music 360, PLACOMP, and the CERL Sound Group's IMS and Platypus microcode, fed into the creation of Kyma. When my adviser John Melby won a Guggenheim award and took a year's leave of absence, I had the opportunity, as a visiting assistant professor, to teach his computer music course and to establish a Friday seminar series on computer music research.

Because the School of Music is part of a world-class university, Illinois afforded me opportunities for study and research that I would not have found elsewhere. It meant that I could play harp in the pit orchestra for an Opera Theatre production of Madame Butterfly and, the next day, run an experiment in Tino Trahiotis' psychoacoustics lab course in my minor, Speech and Hearing Science. It meant that I could go on tour with the New Music Ensemble led by Paul Zonn or David Liptak, that I could study mathematics, that I could do spectral analyses of the harp, that I could also get a degree in computer science and learn Smalltalk from Ralph Johnson after finishing my doctorate in music, and it meant that I could do some of the early work in data sonification with Alan Craig at the National Center for Supercomputing Applications. These experiences, along with the computer science courses in abstract data structures, computer languages, automata, and discrete mathematics, also fed into Kyma.

For me, Illinois was the perfect environment for exploration, and my work with Kyma is a direct outgrowth of those experiences as well as a continuation of several threads of interest that can be traced back to my graduate work at the University of Illinois.

While I was a graduate assistant, my office mates were Chuck Mason and Paul Koonce; the day that Chuck defended his dissertation and accepted a position in Birmingham, he taped a piece of notebook paper to the wall of our office with the heading "Famous Inhabitants of this Office" followed by all three of our names. I remember this act of optimism with great fondness, and I've heard that the list is still in the office (and that it has grown a lot longer by now).

Carla Scaletti, DMA in composition, University of Illinois
President, Symbolic Sound Corporation

Related Post: Herbert Brün

In the Master's Shadow: Hitchcock's Legacy

Alfred Hitchcock is one of the few filmmakers in history who may accurately be referred to as a cinema virtuoso. In short, he revolutionized the making of suspense films, playing the audience like a violin through the use of a well-timed sound effect… an almost unnoticed visual detail… the masterly construction of a suspense sequence. This documentary features filmmakers such as Martin Scorsese, William Friedkin, Guillermo del Toro, John Carpenter and Eli Roth on Hitchcock’s influence, why his movies continue to thrill audiences, and the ways in which his cinematic techniques have been embraced by directors to this day. Lavishly illustrated with film clips from throughout Hitchcock’s storied career, this film celebrates the enduring legacy of the man many consider the greatest filmmaker the medium has yet produced.





"I think he's certainly one of the great directors of all time and did something really hard to do, which made for a distinctive style that is hard to copy. People try to do Hitchcockian things. It's very hard. There's something distinctive about his personality. This quirky, dark sense of humor with a sense of romance and a sense of just devious, dark view of humanity, and you put it together into something pretty unique." - Gary Rydstrom


Video: In the Master's Shadow (1/3)
Video: In the Master's Shadow (2/3)
Video: In the Master's Shadow (3/3)

Sunday, October 11, 2009

Focus on Kyma International Sound Symposium - KISS09

These have been busy days here at NIU, especially for the speakers, who demonstrated to all the symposiasts their methods of working with Kyma.
Participants came from all over the world, and it was great to be immersed in this colourful community! Carla Scaletti and Kurt Hebel (the creators of Kyma) introduced their new hardware, the Paca(rana) - you can find a lot of information here @ SSC - and the latest features implemented in Kyma X.70 (where 'X' stands for the last letter in 'six' - 6.70).



Carla and Kurt have done a wonderful job of developing a state-of-the-art "recombinant" workstation, which gives any user the freedom to explore unknown sonic territories without maxing out the DSPs (there's plenty of headroom in processing power).
During the morning of the first day, Carla showed us how to control Kyma with external devices such as the Wacom tablet and the Continuum Fingerboard, and then with the Nintendo Wiimote+Nunchuck and the SpaceNavigator (via OSCulator by Camille Troillard).
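Under the hood, tools like OSCulator essentially translate those controllers into a stream of OSC messages over the network. As a rough illustration (and not Kyma's documented protocol), here is a minimal Python sketch using the third-party python-osc package; the IP address, port and OSC address are placeholders that would need to match whatever the receiving setup actually expects.

# Minimal sketch: streaming a control value as OSC messages, roughly the
# mechanism OSCulator uses to forward Wiimote/tablet gestures.
# The host, port, and address below are placeholders, not documented
# Kyma defaults -- adapt them to your own configuration.
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 8000)    # hypothetical receiver IP/port

start = time.time()
while time.time() - start < 10.0:
    # A slow sine sweep between 0.0 and 1.0, roughly what the tilt axis
    # of a Wiimote would produce as you rock it back and forth.
    value = 0.5 * (1.0 + math.sin(time.time() - start))
    client.send_message("/vcs/Frequency", value)  # hypothetical fader address
    time.sleep(0.02)                              # ~50 messages per second
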
The AC Toolbox presentation by Hector Bravo-Benard revealed an algorithmic approach to sound composition with Kyma. By using the Lisp language, it is possible to structure an entire composition and have Kyma play it, making use of strict algorithmic procedures such as probability distributions, Markov chains, tendency masks, and many other techniques used by composers such as Paul Berg and Gottfried Michael Koenig.
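To make the Markov-chain idea concrete, here is a minimal, self-contained sketch in Python rather than Lisp; it illustrates only the general technique named above, not AC Toolbox itself, and the pitch set and transition probabilities are invented for the example.

# First-order Markov chain over MIDI pitch numbers.
# Purely illustrative: this is not AC Toolbox (which is Lisp-based)
# and it does not communicate with Kyma; the table below is made up.
import random

# Transition table: current pitch -> list of (next pitch, probability).
TRANSITIONS = {
    60: [(62, 0.5), (64, 0.3), (67, 0.2)],
    62: [(60, 0.4), (64, 0.6)],
    64: [(62, 0.3), (67, 0.7)],
    67: [(60, 0.6), (64, 0.4)],
}

def next_pitch(current):
    """Choose the next pitch according to the transition probabilities."""
    pitches, weights = zip(*TRANSITIONS[current])
    return random.choices(pitches, weights=weights, k=1)[0]

def generate(start=60, length=16):
    """Walk the chain to produce a pitch sequence."""
    sequence = [start]
    for _ in range(length - 1):
        sequence.append(next_pitch(sequence[-1]))
    return sequence

print(generate())   # e.g. [60, 64, 67, 60, 62, 64, ...]

The same kind of table-driven choice, scaled up and driven by richer data structures, is what an environment like AC Toolbox automates before handing the result to Kyma to be rendered.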
A special mention goes to Bruno Liberda's lecture, in which he explored subtle connections between his wonderful graphical scores and different kinds of ancient neumatic notation, making symbolic notation and symbolic sound computation converge.
Cristian Vogel presented his latest creation, Black Swan, made for the Swiss choreographer Gilles Jobin. The 50-minute original Kyma score was composed in Geneva and at the electroacoustic studio of Musique Inventives d'Annecy. Here's a Swiss TV documentary about the creation of Black Swan.

What is Kyma?

Kyma is a sound design language that lets you manipulate, transform and warp any kind of sound source in real time - but we already knew all of that.
During these days in Barcelona - at the first Kyma International Sound Symposium - we have learnt something more specific and, maybe, unexpected.
As Carla Scaletti said during the opening lecture of KISS09, "Recombinance Makes Us Human". So, what does this mean? In practice, each of the participants brought their own experience, point of view, modus operandi and creative talent to the symposium. This multiform epiphany allowed everyone involved to share the same time and space for two days, exchanging ideas the way one shares bits of code, in a recombinant manner.
The symposiasts all came from different fields in which Kyma plays a fundamental role in their activities (sound designers, composers, DJs, etc.). Each of them knows and likes different specific aspects of Kyma.

The lectures given over these two days taught us something: regardless of personal skill and degree of Kyma knowledge, observing things from other users' points of view reveals unexpected surprises and creates new connections, on a mental and operational level, that had never been considered before.
Even sounds commonly used in our everyday activity, when put in a different context (artistic or technological), appear to us as new and unexpected, recalling John Cage's famous remark: "(You have) to look at a Coca-Cola bottle without the feeling that you've ever seen one before, as though you were looking at it for the very first time. That's what I'd like to find with sounds -- to play them and hear them as if you've never heard them before."

The musical performances themselves contributed to recombining new pieces of code, bearing different morphologies, syntaxes and structures.
With her live set "SlipStick for Continuum and Kyma", Carla Scaletti explored the sound phase-space, tracing new timbral paths that move from material aggregation to resonant chimes and back, without forgoing the "virtuoso gestuality" afforded by the nature of the instrument itself.
We experienced the possibility of creating vibrant soundtracks for silent movies in real time, thanks to Franz Danksagmüller's work, with the help of Berit Barfred Jensen's voice.
Concrete sounds produced by the friction of a cello bow on variously shaped polystyrene objects saturated the listening space thanks to the work of Hector Bravo-Benard, reminding us of some of the textures of Xenakis' "La Légende d'Eer".
Marin Vrbica's "Tiger in the Jungle" immersed us in an exotic and rich spatial dimension made of revolving sound-surfaces and multiple sliding, glissando-like trajectories.

In the end, we are surprised that the paradigm presented at the beginning by Carla Scaletti (Recombinance Makes Us Human) actually acted as an invisible force guiding, transforming and rearranging our thoughts, knowledge and purposes.
So, after the symposium we observe ourselves in a different way. We have changed into new intellectual human beings, and this has given every one of us a fresh start from which to discover new possible configurations and interactions between ourselves and others, between what we are and what we will be by #KISS10.

We really enjoyed our first attempt at following a live event via Twitter, and we hope you appreciated our efforts to make you feel as if you were in Barcelona with us, as part of the symposium.
You can track all of our tweets via search.twitter.com.

Here's a selection of related tweets of interest:
  • Now live #KISS09: Carla Scaletti - "Recombinance Makes Us Human" - Philosophy of Kyma #
  • a Question: Infinity makes us HUMANS? #
  • Infinity+ self-awareness of finitude #
  • Open-ended possibilities= open-ended amount of time, but the learning is fun! #
  • Kyma sounds are recombinant...like us..! #
  • Pen Morph Gandalf to Saruman "You are tracking the footsteps of two young Hobbits" #
  • Writing in time and space..... #
  • I became operational ath the C.E.R.L. lab in Urbana, Illinois on the 12th of November, 1986. -- Kyma #
  • Kyma was inspired by three ideas: cutting & splicing audio tape, voltage control, symbolic logic & computer programming #
  • An interview with Carla Scaletti @symbolicsound usoproject.com/C... #
  • and now...linear interpolation of presets - smooth transition in the phase space... #
  • music is something like a side effect of explorating timbre and sound configurations... #
  • Presets as aural Keyframes #
  • ...the problem of finding graphic symbols for the transposition of the composer's thought into sound." Edgard Varèse #
  • ...without friction [...] there would be no music!"- Carla Scaletti #
For anyone interested in the Kyma sound design language, the book "Kyma X Revealed" is available here @ SSC. This document gives you an overview of the environment, gives you a strategy for approaching this deep system one layer at a time and then walks you step-by-step through each level of Kyma like your own personal tour guide. At the back of this guide, you’ll find several summaries and quick references that you can keep handy for refreshing your memory while working. This book covers the mechanics of how to use Kyma — not the art and science of sound design. However, if you love sound and are hungry for more knowledge on the subject, achieving fluency with Kyma is an excellent first step!


Many thanks to Cristian Vogel (Station 55 Productions) and Symbolic Sound Corporation for their hospitality and for organizing the event.

UPDATE: Slides from Recombinance Makes Us Human (.pdf), the welcoming address for the First International Kyma Symposium in Barcelona, October 2009.

Wednesday, October 07, 2009

An interview with Christian Zanési, pt. 2

by Matteo Milani and Federico Placidi, U.S.O. Project
English translation: Valeria Grillo
(Continued from Page 1)


USO: Did mass culture and consumerism create a lack of “listening attention”?

CZ: I think the ear is intact; it is the demand that has to be discovered and awakened. Even if there is a mass culture that establishes a dominant way of hearing, the border between that dominant thought and the adventure is very narrow, and we can cross it very quickly; we only have to open ourselves and build bridges. Here too there are very serious issues: for example, the dominant idea of rock - let's take this example - has built a peculiar "ear", and we can win that ear back through experimental music. So the world is not made of cases separated by inviolable borders; on the contrary, there is no break in continuity between sounds. I am very attentive to popular music, because I know that there are cases of recycling, of experimentation, and also of a genre that can itself be experimented with. We have to be attentive; above all we have to avoid being dogmatic, fundamentalist or categorical. On the contrary, we have to observe what happens, try to detect in every practice, in every thing, something different from the variable design of music, the variable design of humans, and see how this can be transcended.



USO: What has been the role of physical space in the sound projection practice in GRM history?

CZ: In the 1950s, when Schaeffer invented this music, musique concrète, with his collaborators, they quickly asked themselves how to let everybody listen to it in a public space, in a concert hall. Schaeffer said: if we have to go to Carnegie Hall, what do we do? We cannot use a single loudspeaker on the stage, that would be a little ridiculous. And so the dimension of the concert, the question of how to present this music to the public, was created little by little. The cinema industry also plays an extremely important role: in the 1960s the Dolby firm had already imagined surround sound, and with the invention of CinemaScope they were forced to position many loudspeakers across the screen, and many more surround loudspeakers all around. So we have at the same time the desire of people to experience sound in a bigger dimension, and the technology which built that ear. Today we follow this process, and people are happy to come to a concert, because we think that a concert is going to be a particular and privileged moment, maybe a show, that there is a difference between experiencing music in a concert hall and at home, and that there is something really spectacular in that moment which justifies the concert. So there are many ways to approach the problem; we chose one, but there are others.



USO: Can you describe your compositional process? How do you go from the idea to the sound, to the elaboration, to the structure, to the shape?

CZ: This is a personal question; if you ask the same question to another composer, the answer will be different. Me, I don't go from idea to sound, but the opposite: I go from sound to idea. That is, I need to find the sound that touches me for different reasons - if I had the time I would explain these reasons and give examples, but I don't - and inside this sound there is a potential, and the idea is born from the sound; it is not a sound subjected to an idea, it is the opposite. We can consider the sound as some kind of fugue, in the sense that there is potential for development. To work, I need to find a sound which will be at the base, because there is an emotive shock, there is really something imposing, which will be the source of my work; and at that moment we can listen to the sound in depth, not for what it is, but for what it can become: we hear the sound as a promising element of a musical structure. But this is a personal answer.


USO: Do you work with stereophonic material and perform live spatialization, or do you prefer to design spatial trajectories/positions in the studio?

CZ: I mainly work in stereo, because my studio is not multiphonic, but I have a lot of experience with sound projection in auditoriums, and with a stereo source you can give the illusion of a multiphonic work; I do both live concerts and acousmatic works. In the end you never know what system you will have, where it is, the space, or the kind of concert that has been requested, but what we experience nowadays is that live practices and acousmatic practices coexist peacefully, and they enhance each other depending on the place, the time and the project. Musique concrète was born in the 1950s, and after all these years it is about to create a new branch, which will be the performance version, the live version of this music. This evolution started around the year 2000, since today's systems (computers) are much faster than 20 years ago, and the tools for sound processing allow it. But these tools developed because there was a strong desire to develop them. There are no technologies which arrive spontaneously: we look back into history, we observe, and as sometimes happens with dreams, there are dreams and conjunctions which make things possible. That is what is going to happen in about 10 years.


USO: What are your working tools for non-linear editing and sound signal processing?

CZ: At the GRM there are more or less all the programs - Pro Tools, Digital Performer - but there is a kind of consensus on the tools to use. Very quickly, we can note that today's programs are actually a direct-line consequence of the techniques of the 1950s, because with a tape or a disc we could go from one sound to another and create discontinuity; we were able to superpose sounds and create verticality, and harmony; we had mastery of time, and thus of counterpoint and polyphony; we could increase or decrease the volume. We need to check that the minimal conditions are there for a program to be exploitable, and finally, looking at today's programs, we have continuity. With any program, whatever it is, we can do everything - montage, superposition, regulating the volume, regulating the space - and in the end you do the same things as if it were a tape recorder. These are fundamental operations for the world of music.



[Prev | 1 | 2 ]

Special Thanks:
Laure de Lestrange
Christian Zanési
www.ina.fr/grm

[Listen to Magnetic Landscapes - Christian Zanesi | via deezer.com]

Related Posts: Daniel Teruggi (GRM, Paris): The novelty of concrete music.

An interview with Christian Zanési, pt. 1

by Matteo Milani and Federico Placidi, U.S.O. Project
English translation: Valeria Grillo


Christian Zanési is a French composer and the head of musical programming at the Groupe de Recherches Musicales. Zanési studied music at the Université de Pau, then in Paris at the CNSMDP under Pierre Schaeffer and Guy Reibel. Since 1977 he has been a member of GRM, in charge in particular of producing radio programmes for France Musique and France Culture. He was recently awarded the Special Qwartz at the Qwartz Electronic Music Awards (Edition 5).
We met Christian Zanési at 104 for the fifth Présences Électronique Festival, organized in co-production with Radio France, which explores the link between the concrete music of Schaeffer and new experiments in electronic music.
Since its founding in 1958, GRM has been a unique place for creation, research and conservation in the fields of electroacoustic music and recorded sound. The systematic analysis of sound material that Schaeffer proposed to his collaborators resulted in the publication of a seven-hundred-page reference book, the "Treatise on Musical Objects" (Traité des objets musicaux), published in 1966.
Michel Chion’s Guide To Sound Objects (PDF, English translation) is a very useful introduction to this voluminous work (via ears.dmu.ac.uk).
In this very complete essay, Pierre Schaeffer develops the main part of his new musical theory of the sound object. It is based on two principal ideas, making and listening, and explores the first two levels of this theory: typo-morphology and classification of sounds. He uses various disciplines such as acoustics, semiotics and cognitive science to demonstrate his explanation of sound and, in particular, the musical object within musique concrète.


U.S.O. Project: You often mention that sound is the prime matter. How can we “orient” and recognize sounds we’ve already heard, without the cause-effect relationship?

Christian Zanési: The sound has to be considered in a musical context; it is not about recognizing the sound. It is an expressive matter, and every composer, every musician has some kind of personal sound, a signature sound, so when he transcribes it into his work, into an artistic project, the idea is not to play a guessing game - this is this sound, or this is that sound - there is a higher level that belongs to it, and this level is that of musical relations, expressive relations. When you listen to a concert, for example for cello and orchestra, you are not pointing out at each instant "this is a cello sound"; you listen to music. And it is the same with sound.
So, there is a second level which is a little more complex. It is true that some sounds evoke an image, and from time to time there can be ambiguity; this is also part of a rhetoric, or of an ancient model, and it is possible that this provokes a kind of curiosity while at the same time getting mixed into the expression. Composers are very strange.


USO: In the first 50 years of electronic music, has the absence of visual elements during performances limited or enhanced the listening experience?

CZ: Well, naturally we, at our festival Présences Électronique, prefer to bring attention to the sound. Sometimes there are audiovisual experiences, but our nature, our physiology, gives the image priority over the sound. So, since we have developed a very, very sophisticated projection system, we prefer that kind of listening experience; it is more complete, we don't need anything else. It is an orientation, and it is one of the strengths of our festival: to offer artists the best possible sound projection system, the best definition, and we think that is complete. We are very music-oriented. I've often seen musicians doing an audio-video performance, and I've rarely been convinced; sometimes yes, but it becomes something else, we go down to another level, and we want to explore the purely musical universe, sound and its programming.



USO: Can you give us a more in-depth definition of the concept of “l’écoute réduite” (reduced listening) by Pierre Schaeffer?

CZ: The first time that... well, in a few words it is difficult... When we recorded on discs - there were no tape recorders in the 1950s - when we reached the end of a disc, the last groove repeated indefinitely, and it was full of modulation. On all vinyl records today there is a closed groove with silence, to prevent the stylus from running off the record. All the discs at that time - all those recorded in the 1950s, the disc being the equivalent of today's vinyl - ended on a closed groove, and Schaeffer listened to this phenomenon; everybody could hear it, it was not silence. The fact that the sound repeated itself made him understand that we were finally listening to the sound itself - we wouldn't say "this sound is close or not, it starts this way, it is more or less acute, it is more or less brilliant, it is more or less thick" - and in that moment he imagined that we could listen inside the recorded sound, which means we could verify, validate, objectify the phenomenon by hearing the peculiar characteristics of the sound: how it starts and how it ends, in which tessitura it is, whether it is light or dark, loud, close or far. This is objectifying listening; this is "l'écoute réduite".



USO: Can you discuss your vision of the sound organization principles? Definition of the sound object, objective and subjective aspects, articulation, iteration, quality of the information, musical language.

CZ: In classical music, the constituent elements of a work function organically; that is, there are consequences. When you put one sound together with another, this creates a harmony, for example a chord, and if you add another sound the chord completely changes in timbre; this is an organic principle. We can do the same with the sounds the composer chooses, so as to imagine music sometimes as a vertical relation, sometimes as a horizontal relation. This is an organic system, and an organic system is a biological system, in some way copied from the living; and since we ourselves function according to the principles of the living, we are particularly able and specialized to recognize an organic system, and at that moment something starts in our brains and there is a kind of communion. This is what happens: how to organize the sound, why put this with that, and why, after this, add that - they are organic relations.



USO: What do you think of hidden causality (between what the audience sees and hears)?

CZ: The ability to listen is a very complex phenomenon, so when a composer works it is difficult to be conscious of everything; an artist works very much in an intuitive manner, and intuition is a super-computer which gives us the answer immediately. But if we talk about the genesis, about why we made certain choices, it takes an eternity. So, as for the relations: the sound is not received by a single "ear". There is a very ancient ear, for example: the ear we use to cross the street, the hearing of danger. Hearing is a prolongation of the sense of touch. Imagine being in the forest: when a predator hunts, if it touches you it is too late, you are dead. So hearing is a prolongation of touch; it allows you to foresee the arrival of a predator. There are various kinds of hearing - one which detects aesthetic information, one that detects information on the behavior of others - so when you work with sound you alternately activate many "ears". When you begin a piece of music by having a sound turn around the listener, you awaken the ancient, primitive "ear", the hearing of danger, since in nature something that circles around you is something that is trying to catch you, and a composer is in some way conscious of this mechanism and plays with it. It is very difficult to answer this question because the ear is very complex, and the reality around us is very complex. But we are all naturally gifted at hearing, since without it we could not cross the street, we could not hear something behind us, we could not see around us through sound.


[ 1 | 2 | Next ]

Tuesday, October 06, 2009

#KISS09: Kyma International Sound Symposium

Here's the hashtag to easily find and aggregate tweets related to the upcoming Kyma International Sound Symposium: #KISS09.
A hashtag is just a short character string preceded by a hash sign (#). You can find hashtags of interest via Twitter's own search function: see search.twitter.com.
You can subscribe to the RSS feed for your favourite tagged Twitter updates, such as those tagged with #KISS09; any new #KISS09-tagged updates will then be delivered straight to your favourite news reader (e.g. Google Reader).
This also means that you don't have to be a Twitter user to follow the conversation: it's visible to anyone.
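As a concrete example, the third-party Python library feedparser can poll such a feed from a script; the URL below mimics the Twitter search Atom feeds of the time and is purely illustrative, so substitute whatever feed address your reader actually uses.

# Minimal sketch: polling an Atom/RSS feed for new #KISS09 items.
# The feed URL is illustrative only (Twitter-search style of the era);
# replace it with whatever feed your news reader is pointed at.
import feedparser

FEED_URL = "http://search.twitter.com/search.atom?q=%23KISS09"  # illustrative

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Both RSS and Atom entries expose at least a title and a link.
    print(entry.get("published", "n/a"), "-", entry.title)
    print("  ", entry.link)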

[RSS feed of usoproject's tweets]

Photos of the event will be available via Flickr (tagged with KISS09).

[RSS feed of the Flickr photoset]

[Facebook group for Kyma X users - by Cristian Vogel]


Little by little, one travels far.
-- J. R. R. Tolkien