Live compositions on oscilloscope: nuuun, ATOM TM

The Well-Tempered vector rescanner? A new audiovisual release finds poetry in vintage video synthesis and scan processors – and launches a new AV platform for ATOM TM.

nuuun, a collaboration between Atom™ (raster, formerly Raster-Noton) and Americans Jahnavi Stenflo and Nathan Jantz, have produced a “current suite.” These are all recorded live – sound and visuals alike – in Uwe Schmidt’s Chilean studio.

Minimalistic, exposed presentation of electronic elements is nothing new to the Raster crowd, who are known for bringing this raw aesthetic to their work. You could read that as part punk aesthetic, part fascination with visual imagery, rooted in the collective’s history in East Germany’s underground. But as these elements cycle back, now there’s a fresh interest in working with vectors as medium (see link below, in fact). As we move from novelty to more refined technique, more artists are finding ways of turning these technologies into instruments.

And it’s really the fact that these are instruments – a chamber trio, in title and construct – that’s essential to the work here. It’s not just about the impression of the tech, in other words, but the fact that working on technique brings the different media closer together. As nuuun describe the release:

Informed and inspired by scan processors of the early 1970s such as the Rutt/Etra video synthesizer, “Current Suite No.1” uses the oscillographic medium as an opportunity to bring the observer closer to the signal. Through a technique known as “vector-rescanning”, one can program and produce complex encoded waveforms that can only be observed through and captured from analog vector displays. These signals modulate the electron beam of a cathode-ray tube, where the resulting phosphorescent traces reveal a world of hidden forms. Both the music and imagery in each of these videos were recorded as live compositions, as if they were intertwined two-way conversations between sound and visual form, producing a unique synesthetic experience.


Even with lots of prominent festivals, audiovisual work – and putting visuals on equal footing with music – still faces an uphill battle. Online music distribution isn’t really geared for AV work; it’s not even obvious how audiovisual work is meant to be uploaded and disseminated apart from channels like YouTube or Vimeo. So it’s also worth noting that Atom™ is promising that NN will be a platform for more audiovisual work. We’ll see what that brings.

Of course, NOTON and Carsten Nicolai (aka Alva Noto) already have a rich fine art / high-end media art career going, and raster-media, launched by Olaf Bender in 2017, describes itself as a “platform – a network covering the overlapping border areas of pop, art, and science.” We’ve at least seen raster continue to present installations and other works, extending their footprint beyond the usual routine of record releases.

There’s perhaps not a lot that can be done about the fleeting value of music in distribution, but then music has always been ephemeral. Let’s look at it this way – for those of us who see sound as interconnected with image and science, any conduit to that work is welcome. So watch this space.

For now, we’ve got this first release:

http://atom-tm.com/NN/1/Current-Suite-No-IVideo/

Previously:

Vectors are getting their own festival: lasers and oscilloscopes, go!

In Dreamy, Electrified Landscapes, Nalepa ‘Daytime’ Music Video Meets Rutt-Etra

The post Live compositions on oscilloscope: nuuun, ATOM TM appeared first on CDM Create Digital Music.

Teenage Engineering OP-Z has DMX track for lighting, Unity 3D integration

The OP-Z may be the hot digital synth of the moment, but it’s also the first consumer music instrument to have dedicated features for live visuals. And that starts with lighting (DMX) and 3D visuals (Unity 3D).

One of the various surprises about the OP-Z launch is this: there’s a dedicated track for controlling DMX. That’s the MIDI-like protocol that’s an industry standard for stage lighting, supported by lighting instruments and light boards.
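
For the curious, DMX itself is simple at the wire level: a controller repeatedly sends a frame of up to 512 8-bit channel levels, prefixed by a start code. Here’s a minimal sketch of that frame layout in Python – an illustration of the protocol, not anything from the OP-Z’s firmware:

```python
# Build a DMX512 data packet: a start code byte followed by 512
# channel levels (0-255). Illustrative sketch of the wire format only,
# not code from the OP-Z or any particular DMX interface.

def dmx_packet(levels):
    """Return the DMX512 slot bytes: start code 0x00, then 512 channel levels."""
    if len(levels) > 512:
        raise ValueError("DMX512 carries at most 512 channels")
    if any(not 0 <= v <= 255 for v in levels):
        raise ValueError("channel levels are 8-bit (0-255)")
    # Unused channels are padded to zero so the frame is always full length.
    padded = list(levels) + [0] * (512 - len(levels))
    return bytes([0x00] + padded)

# Example: set channel 1 (a dimmer, say) to full and channel 3 to half.
packet = dmx_packet([255, 0, 128])
print(len(packet))           # 513 bytes: start code + 512 channels
print(packet[1], packet[3])  # 255 128
```

A lighting controller just re-sends this frame continuously, so “sequencing DMX” amounts to changing a few of those channel bytes per step.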

Not a whole lot revealed here, but you get the sense that Teenage Engineering are committed to live visual applications:

There’s also integration with Unity 3D, for 2D and 3D animations you can sequence. This integration relies on MIDI, but they’ve gone as far as developing a framework for MIDI-controlled animations. Since Unity runs happily on both mobile devices and beefy desktop rigs, it’s a good match both for doing fun things with your iOS display (which the OP-Z uses anyway) and for desktop machines with serious GPUs for more advanced AV shows.

Check out the framework so far on their GitHub:

https://github.com/teenageengineering/videolab
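
Since videolab’s animation control rides on ordinary MIDI, the messages involved are the same three-byte note events any sequencer emits. A quick sketch of that byte layout (illustrative only – none of this is Teenage Engineering’s code):

```python
# Raw MIDI note messages of the kind a MIDI-to-animation bridge
# consumes. Hypothetical illustration of the MIDI 1.0 byte layout.

def note_on(channel, note, velocity):
    """3-byte MIDI note-on: status 0x90 | channel, then note and velocity."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Note-off (status 0x80); release velocity 0 by convention."""
    assert 0 <= channel < 16 and 0 <= note < 128
    return bytes([0x80 | channel, note, 0])

msg = note_on(0, 60, 100)   # middle C, channel 1
print(msg.hex())            # 903c64
```

Any framework mapping MIDI to visuals is ultimately reacting to triples like these, which is why a hardware sequencer like the OP-Z can drive animations with no custom protocol at all.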

We’ll talk to Teenage Engineering to find out more about what they’re planning here, because #createdigitalmotion.

https://teenageengineering.com/products/op-z


Vectors are getting their own festival: lasers and oscilloscopes, go!

It’s definitely an underground subculture of audiovisual media, but lovers of graphics made with vintage displays, analog oscilloscopes, and lasers are getting their own fall festival to share performances and techniques.

Vector Hack claims to be “the first ever international festival of experimental vector graphics” – a claim that is, uh, probably fair. And it’ll span two cities, starting in Zagreb, Croatia, but wrapping up in the Slovenian capital of Ljubljana.

Why vectors? Well, I’m sure the festival organizers could come up with various answers to that, but let’s go with because they look damned cool. And the organizers behind this particular effort have been spitting out eyeball-dazzling artwork that’s precise, expressive, and unique to this visceral electric medium.

Unconvinced? Fine. Strap in for the best. Festival. Trailer. Ever.

Here’s how they describe the project:

Vector Hack is the first ever international festival of experimental vector graphics. The festival brings together artists, academics, hackers and performers for a week-long program beginning in Zagreb on 01/10/18 and ending in Ljubljana on 07/10/18.

Vector Hack will allow artists creating experimental audio-visual work for oscilloscopes and lasers to share ideas and develop their work together alongside a program of open workshops, talks and performances aimed at allowing young people and a wider audience to learn more about creating their own vector based audio-visual works.

We have gathered a group of fifteen participants, all working in the field, from a diverse range of locations including the EU, USA and Canada. Each participant brings a unique approach to this exciting field, and it will be a rare chance to see all their works together in a single program.

Vector Hack festival is an artist-led initiative organised with support from Radiona.org/Zagreb Makerspace as a collaborative international project alongside Ljubljana’s Ljudmila Art and Science Laboratory and Projekt Atol Institute. It was conceived and initiated by Ivan Marušić Klif and Derek Holzer with assistance from Chris King.

Robert Henke is featured, naturally – the Berlin-based artist and co-founder of Ableton and Monolake has spent the last years refining his skills in spinning his own code to control ultra-fine-tuned laser displays. But maybe what’s most exciting about this scene is discovering a whole network of people hacking into supposedly outmoded display technologies to find new expressive possibilities.

One person who has helped lead that direction is festival initiator Derek Holzer. He’s finishing a thesis on the topic, so we’ll get some more detail soon, but anyone interested in this practice may want to check out his open source Pure Data library. The Vector Synthesis library “allows the creation and manipulation of vector shapes using audio signals sent directly to oscilloscopes, hacked CRT monitors, Vectrex game consoles, ILDA laser displays, and oscilloscope emulation software using the Pure Data programming environment.”

https://github.com/macumbista/vectorsynthesis
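
The principle behind the library is easy to demonstrate: in XY mode, an oscilloscope’s two inputs become drawing coordinates, so a stereo audio file can trace shapes. Here’s a self-contained Python sketch that writes a stereo WAV tracing a circle – a bare-bones illustration of the idea, not code from the Vector Synthesis library itself:

```python
# Two audio channels drive the X and Y deflection of a scope in XY
# mode; a 90-degree phase offset between two sines traces a circle.
# Writes that stereo signal to a WAV file. Illustrative sketch only.

import math
import struct
import wave

RATE = 48000
FREQ = 100.0     # the shape is retraced 100 times per second
SECONDS = 1

frames = bytearray()
for n in range(RATE * SECONDS):
    t = n / RATE
    x = math.sin(2 * math.pi * FREQ * t)   # left channel -> X deflection
    y = math.cos(2 * math.pi * FREQ * t)   # right channel -> Y, 90 degrees apart
    frames += struct.pack('<hh', int(x * 32767), int(y * 32767))

with wave.open('circle.wav', 'wb') as w:
    w.setnchannels(2)
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(bytes(frames))
```

Swap the two sines for more complex audio-rate signals and the scope draws Lissajous figures and beyond – which is exactly the territory these artists are mining.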

The results are entrancing – organic and synthetic all at once, with sound and sight intertwined (both in terms of control signal and resulting sensory impression). That is itself perhaps significant, as neurological research reveals that these media are experienced simultaneously in our perception. Here are just two recent sketches for a taste:

They’re produced by hacking into a Vectrex console – an early-80s consumer game console that used vector signals to manipulate a cathode-ray screen. From Wikipedia, here’s how it works:

The vector generator is an all-analog design using two integrators: X and Y. The computer sets the integration rates using a digital-to-analog converter. The computer controls the integration time by momentarily closing electronic analog switches within the operational-amplifier based integrator circuits. Voltage ramps are produced that the monitor uses to steer the electron beam over the face of the phosphor screen of the cathode ray tube. Another signal is generated that controls the brightness of the line.
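
If the analog description is hard to picture, the integrator’s behavior is easy to simulate: while the switch is closed, the output voltage ramps linearly at the DAC-set rate, and that ramp steers the beam. A numerical sketch (all rates and step sizes here are arbitrary, not Vectrex specifications):

```python
# Numerical sketch of the vector generator described above: the
# computer sets an integration rate (via DAC), then closes an analog
# switch for a short time; the integrator output ramps linearly,
# steering the beam along one axis. Values are illustrative only.

def integrate(rate_v_per_s, switch_closed_s, dt=1e-6):
    """Ideal integrator: output rises at rate * t while the switch is closed."""
    v, t = 0.0, 0.0
    while t < switch_closed_s:
        v += rate_v_per_s * dt   # accumulate, like the op-amp integrator
        t += dt
    return v

# Close the X switch for 1 ms with the DAC asking for 1000 V/s:
# the X deflection voltage ramps to approximately 1 V.
print(integrate(1000.0, 0.001))
```

Run two of these ramps at once – one for X, one for Y – and varying the two rates draws lines at any angle, which is all a vector display needs.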

Ted Davis is working to make these technologies accessible to artists, too, by developing a library for coding-for-artists tool Processing.

http://teddavis.org/xyscope/

Oscilloscopes, ready for interaction with a library by Ted Davis.


Here’s a glimpse of some of the other artists in the festival, too. It’s wonderful to watch new developments in the post-digital age, as artists produce work that innovates through deeper excavation of technologies of the past.

Akiras Rebirth.

Alberto Novell.

Vanda Kreutz.

Stefanie Bräuer.

Jerobeam Fenderson.

Hrvoslava Brkušić.

Andrew Duff.

More on the festival:
https://radiona.org/
https://wiki.ljudmila.org/Main_Page

http://vectorhackfestival.com/


Moving AV architectures of sine waves: Zeno van den Broek

Dutch-born, Danish-based audiovisual artist Zeno van den Broek continues to enchant with his immersive, minimalistic constructions. We talk to him about how his work clicks.

Zeno had a richly entrancing audiovisual release with our Establishment label in late 2016, Shift Symm. But he’s been prolific in his work for AV sound, with structures made of vector lines in sight and raw, chest-rattling sine waves. It’s abstract and intellectual in the sense that there’s always a clear sense of form and intent – but it’s also visceral, both for the eyes and ears, as these mechanisms are set into motion, overlapping and interacting. They tug you into another world.

Zeno is joining a lineup of artists around our Establishment label tonight in Berlin – come round if you see this in time and happen to be in town with us.

But wherever you are, we want to share his work and the way he thinks about it.

CDM: So you’ve relocated from the Netherlands to Copenhagen – what’s that location like for you now, as an artist or individually?

Zeno: Yes, I’ve been living there for a little over two years now; it’s been a very interesting shift, both personally and work-wise. Copenhagen is a very pleasant city to live in – it’s so spacious, green, and calm. For my work, it took some more time to feel at home, since it’s structured quite differently from Holland, and interdisciplinary work isn’t as common as in Amsterdam or Berlin. I’ve recently joined a composers’ society, which is a totally new thing to me, so I’m very curious to see where this will lead in the future. Living in such a tranquil environment has enabled me to focus and to dive deeper into the concepts behind my work. It feels like a good and healthy base to explore the world from – like being in Berlin these days!

Working with these raw elements, I wonder how you go about conceiving the composition. Is there some experimentation process, adjustment? Do you stand back from it and work on it at all?

Well, it all starts from the concepts. I’ve been adopting the ‘conceptual art’ practice more and more, using the ideas as the ‘engine’ that creates the work.

For Paranon, this concept came to life out of the desire to deepen my knowledge of sine waves and interference, which always play a role in my art but often in a more instinctive way. Before I created a single tone of Paranon, I did more research on this subject and discovered the need for a structural element in time: the canon, which turned out to be a very interesting method for structuring sine wave developments and creating patterns of interference that emerge from the shifting repetitions.

Based on this research, I composed canon structures for various parameters of my sine wave generators, such as frequency deviation and phase shifting, and movements of visual elements, such as lines and grids. After reworking the composition into Ableton, I pressed play and experienced the outcome. It doesn’t make sense to me to do adjustments or experiment with the outcome of the piece because all decisions have a reason, related to the concept. To me, those reasons are more important than if something sounds pleasant.

If I want to make changes, I have to go back to the concept, and see where my translation from concept to sound or image can be interpreted differently.

There’s such a strong synesthetic element to how you merge audio and visual in all your works. Do you imagine visuals as you’re working with the sound? What do they look like?

I try to avoid creating an image based on the sound. To me, both senses and media are equally important, so I treat them equally in my methods, going from concept to creation. Because I work with fundamental elements in both the visuals and the sound — such as sine waves, lines, grids, and pulses — they create strong relationships and new, often unexpected, results appear from the merging of the elements.

Can you tell us a bit about your process – and I think this has changed – in terms of how you’re constructing your sonic and visual materials?

Yes, that’s true; I’ve been changing my tools to better match my methods. Because of my background in architecture, drawing was always the foundation of my work — to form structures and concepts, but also to create the visual elements. My audiovisual work Shift Symm was still mainly built up out of animated vector drawings in combination with generative elements.

But I’ve been working on moving to more algorithmic methods, because the connection to the concepts feels more natural and it gives more freedom, not being limited by my drawing ability and going almost directly from concept to algorithm to result. So I’ve been incorporating more and more Max in my Ableton sets, and I started using [Derivative] TouchDesigner for the visuals. So Paranon was completely generated in TouchDesigner.

You’ve also been playing out live a lot more. What’s evolving as you perform these works?

Live performances are really important to me, because I love the feeling of having to perform a piece at exactly that time and place, with all the tension of being able to f*** it up — the uncompromising and unforgiving nature of a performance. This tension, in combination with being able to shape the work to the acoustics of the venue, makes a performance into something much bigger than I can rationally explain. It means that in order to achieve this, I have to really perform it live: I always give myself the freedom to shape the path a performance takes, to time various phrases and transitions, and to adjust many parameters of the piece. This does create a certain friction with the more rational algorithmic foundation of the work, but I believe this friction is exactly what makes a live performance worthwhile.

So on our release of yours Shift Symm, we got to play a little bit with distribution methods – which, while I don’t know if that was a huge business breakthrough, was interesting at least in changing the relationship to the listener. Where are you currently deploying your artwork; what’s the significance of these different gallery / performance / club contexts for you?

Yes, our Shift Symm release was my first ‘digital only’ audiovisual release; this new form has given me many opportunities in the realm of film festivals, where it has been screened and performed worldwide. I enjoy showing my work at these film festivals because of the more equal approach to sound and image and the more focused attention of the audience. But I also enjoy performing in a club context a lot, because of the energy and the possibilities to work outside the ‘black box’, to explore and incorporate the architecture of the venues in my work.

It strikes me that minimalism in art or sound isn’t what it once was. Obviously, minimal art has its own history. And I got to talk to Carsten Nicolai and Olaf Bender at SONAR a couple years back about the genesis of their work in the DDR – why it was a way of escaping images containing propaganda. What does it mean to you to focus on raw and abstract materials now, as an artist working in this moment? Is there something different about that sensibility – aesthetically, historically, technologically – because of what you’ve been through?

I think my love for minimal aesthetics comes from when I worked as an architect in programs like AutoCAD — the beautiful minimalistic world of the black screen, with thin monochromatic lines representing spaces and physical structures. And, of course, there is a strong historic relation between conceptual art and minimalism, with artists like Sol LeWitt.

But to me, it most strongly relates to what I want to evoke in the person experiencing my work: I’m not looking to offer a way to escape reality or to give an immersive blanket of atmosphere with a certain ambiance. I’m aiming to ‘activate’ by creating a very abstract but coherent world. It’s one in which expectations are created, but also distorted the next moment — perspectives shift, and the audience only has these fundamental elements to relate to, which don’t have a predefined connotation but evoke questions, moments of surprise, and some insights into the conceptual foundation of the work. The reviews and responses I’m getting on a quite ‘rational’ and ‘objective’ piece like Paranon are surprisingly emotional and subjective; the abstract and minimalistic world of sound and images seemingly opens up and activates, while keeping enough space for personal interpretation.

What will be your technical setup in Berlin tonight; how will you work?

For my Paranon performance in Berlin, I’ll work with custom-programmed sine wave generators in [Cycling ’74] Max, for which the canon structures are composed in Ableton Live. These structures send messages via OSC, and an audio signal is sent to TouchDesigner for the visuals. On stage, I’m working with various parameters of the sound and image that control fundamental elements, where the slightest alteration has a big impact on the whole process.
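
For anyone wondering what “messages via OSC” look like on the wire: an OSC message is just a null-padded address pattern, a type-tag string, and big-endian arguments. Here’s a minimal encoder sketch – the address `/canon/freq` is a made-up example for illustration, not an actual parameter from Zeno’s patch:

```python
# Encode a single-float OSC 1.0 message: address pattern, type-tag
# string, and a big-endian float32, each padded to 4-byte boundaries.
# The address here is hypothetical, purely for demonstration.

import struct

def osc_pad(b):
    """Null-terminate and pad a string to the next 4-byte boundary (OSC rule)."""
    return b + b'\x00' * (4 - len(b) % 4)

def osc_message(address, value):
    """Encode an OSC message carrying one float argument."""
    return (osc_pad(address.encode('ascii'))   # address pattern
            + osc_pad(b',f')                   # type tags: one float
            + struct.pack('>f', value))        # big-endian float32

msg = osc_message('/canon/freq', 440.0)
print(len(msg))  # 12 (address) + 4 (",f") + 4 (float) = 20 bytes
```

Environments like Max, Ableton (via Max for Live), and TouchDesigner all speak this format natively over UDP, which is why it has become the default glue between sound and visual software.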

Any works upcoming next?

Besides performing and screening my audiovisual pieces such as Paranon and Hysteresis, I’m working on two big projects.

One is an ongoing concert series in the Old Church of Amsterdam, where the installation Anastasis by Giorgio Andreotta Calò filters all the natural light in the church into a deep red. In June, I performed a first piece there, composing a short work for organ and church bells and re-amplifying it in the church with the process made famous by Alvin Lucier’s “I Am Sitting in a Room” — slowly forming the organ and bells to the resonant frequencies of the church. In August, this will get a continuation in a collaboration with B.J. Nilsen, expanding on the resonant frequencies and getting deeper into the surface of the bells.

The other project is a collaboration with Robin Koek named Raumklang: with this project, we aim to create immaterial sound sculptures based on the acoustic characteristics of the location they are presented in. Currently, we are developing the technical system to realize this, based on spatial tracking and choreographies of recording. Over the last few months, we’ve done residencies at V2 in Rotterdam and STEIM in Amsterdam, and we’re aiming to present a first prototype in September.

Thanks, Zeno! Really looking forward to tonight!

If you missed Shift Symm on Establishment, here’s your chance:

And tonight in Berlin, at ACUD:

Debashis Sinha / Jemma Woolmore / Zeno van den Broek / Marsch

http://zenovandenbroek.com


Speaking in signal, across the divide between video and sound: SIGINT

Performing voltages. The notion is now familiar in synthesis – improvising with signals – but what about the dance between noise and image? Artist Oliver Dodd has been exploring the audiovisual modular.

Integrated sound-image systems have been a fascination of the avant-garde through the history of electronic art. But if there’s a return to the raw signal, maybe that’s born of a desire to regain a sense of fusion of media that can be lost in overcomplicated newer work.

Underground label Detroit Underground has had one foot in technology, one in audiovisual output. DU have their own line of Eurorack modules and a deep interest in electronics and invention, matching a line of audiovisual works. And the label is even putting out AV releases on VHS tape. (Well, visuals need some answer to the vinyl phonograph. You were expecting maybe laserdiscs?)

And SIGINT, Oliver Dodd’s project, is one of the more compelling releases in that series. It debuted over the winter, but now feels a perfect time to delve into what it’s about – and some of Oliver’s other, evocative work.

First, the full description, which draws on images of scanning transmissions from space, but takes place in a very localized, Earthbound rig:

The concept of SIGINT is based on the idea of scanning, searching, and recording satellite transmissions in the pursuit of capturing what appear to be anomalies as intelligent signals hidden within the transmission spectrum.

SIGINT represents these raw recordings, captured in their live, original form. These audio-video recordings were performed and rendered to VHS in real-time in an attempt to experience, explore, decipher, study, and decode this deeply evocative, secret, and embedded form of communication whose origins appear both alien and unknown, like paranormal imprints or reflections of inter-dimensional beings reflected within the transmission stream.

The amazing thing about this project is the synchronicities formed between the audio and the video in real time. By connecting with the aural and the visual in this way, one generates and discovers strange, new, and interesting communications and compositions between these two spaces. The modular audio/video system allows a direct connection between the video and the audio, and vice versa. A single patch cable can span between the two worlds and create new possibilities for each. The modular system used for SIGINT was one 6U case of only Industrial Music Electronics (Harvestman) modules for audio and one 3U case of LZX Industries modules for video.

Videos:

Album:

CDM: I’m going through all these lovely experiments on your YouTube channel. How do these experiments come about?

Oliver: My Instagram and YouTube content is mostly just a snapshot of a larger picture of what I am currently working on, either that day, or of a larger project or work generally, which could be either a live performance, for example, or a release, or a video project.

That’s one hell of an AV modular system. Can you walk us through the modules in there? What’s your workflow like working in an audiovisual system like this, as opposed to systems (software or hardware) that tend to focus on one medium or another?

It’s a two-part system: one part is audio (Industrial Music Electronics, or “Harvestman”), and one part is video (LZX Industries). They communicate with each other via control voltages and audio-rate signals, and they can influence each other independently in both directions. For example, the audio can control the video, and the control voltages generated in the video system can also control sources in the audio system.

Many of the triggers and control voltages are shared between the two systems, which creates a cohesive audio/video experience. However, not every audio signal that sounds good — or produces a nice sound — looks good visually, and therefore, further tweaking and conditioning of the voltages are required to develop a more cohesive and harmonious relationship between them.
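
That “conditioning” can be as simple as scaling, offsetting, and clipping a voltage into a range the other system responds to well. A hypothetical sketch – the ±5 V audio CV and 0–1 V video range here are illustrative assumptions (LZX gear does commonly work around 0–1 V), not measurements from Oliver’s rig:

```python
# Condition a control voltage from the audio system so it sits in a
# range the video system likes: normalize, clip, then rescale.
# The voltage ranges are illustrative assumptions, not measured values.

def condition(cv, in_lo=-5.0, in_hi=5.0, out_lo=0.0, out_hi=1.0):
    """Scale and offset a CV into the target range, clipping at the rails."""
    span = (cv - in_lo) / (in_hi - in_lo)    # normalize to 0..1
    clipped = min(1.0, max(0.0, span))       # limiter: spikes can't blow past the range
    return out_lo + clipped * (out_hi - out_lo)

print(condition(0.0))   # mid-scale audio CV -> 0.5 in the video range
print(condition(7.5))   # over-range spike clips to 1.0
```

In hardware, this job falls to attenuverters, offset modules, and clipping stages – the same three operations, done with op-amps instead of arithmetic.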

The two systems: the 6U (larger) case holds the Harvestman audio modules, while the 3U (smaller) case holds video-processing modules from LZX Industries. Cases designed by Elite Modular.

I’m curious about your notion of finding patterns or paranormal in the content. Why is that significant to you? Carl Sagan gets at this idea of listening to noise in his original novel Contact (using the main character listening to a washing machine at one point, if I recall). What drew you to this sort of idea – and does it only say something about the listener, or the data, too?

Data transmission surrounds us at all times. There are always invisible frequencies flowing through the air, as unobstructed as the air itself, that lie outside our ability to perceive them. We can only perceive a small fraction of these phenomena: there are limitations on human perception, and there are more frequencies than we can experience. Perhaps those we cannot perceive can move or pass through the range of perception, leaving a trail, a trace, or impressions on the frequencies that we can perceive as they pass through — and which we can then decode.

What about the fact that this is an audiovisual creation? What does it mean to fuse those media for a project?

The amazing thing about this project is the synchronicities formed between the audio and the video in real time. By connecting with the aural and the visual in this way, one generates and discovers strange, new, and interesting communications and compositions between these two spaces. The modular audio/video system allows a direct connection between the video and the audio, and vice versa. A single patch cable can span between the two worlds and create new possibilities for each.

And now, some loops…

Oliver’s “experiments” series is transcendent and mesmerizing:

If this were a less cruel world, the YouTube algorithm would only feed you this. But in the meantime, you can subscribe to his channel. And ignore the view counts, actually. One person watching this one video is already sublime.

Plus, from Oliver’s gorgeous Instagram account, some ambient AV sketches to round things out.

More at: https://www.instagram.com/_oliverdodd/

https://detund.bandcamp.com/

https://detund.bandcamp.com/album/sigint


Inside a new immersive AV system, as Brian Eno premieres it in Berlin

“Hexadome,” a new platform for audiovisual performance and installation, began a world-hopping tour with its debut today – with Brian Eno and Peter Chilvers as the opening act.

I got the chance to go behind the scenes in discussion with the organizing team, as well as some of the artists, to try to understand both how the system works technically and what the artistic intention behind launching a new delivery platform is.

Brian Eno and Peter Chilvers present the debut work on the system – from earlier today. Photo courtesy ISM.

It’s not that immersive projection and sound are anything new in themselves. Even limiting ourselves to the mechanical/electronic age, there’s of course been a long succession of ideas in panoramic projection, spatialized audio, and temporary and permanent architectural constructions. You’ve got your local cineplex, too. But as enhanced 3D sound and image become more accessible through virtual and augmented reality on personal devices, the same enhanced computational horsepower is also scaling to larger contexts. And that means if you fancy a nice date night instead of strapping some ugly helmet on your head, there’s hope.

But while 3D media tech is as ubiquitous as your phone, cultural venues haven’t kept up. Here in Germany, there are a number of big multichannel arrays. But they’ve tended to be limited to institutions – planetariums, academies, and a couple of media centers. So art has remained somewhat frozen in time, stuck with single cinematic projections and stereo sound. The projection can get brighter, the sound can get louder, but very often those parameters stay the same. And that keeps artists from using space in their compositions.

A handful of spaces around the world are beginning to change that. An exhaustive survey I’ll leave for another time, but here in Germany we’ve already got the Zentrum für Kunst und Medien (ZKM) in Karlsruhe and the 4DSOUND installation Monom in Berlin, each running public programs. (In 2014, I got to organize an open lab on the 4DSOUND system while it was in Amsterdam for ADE, while also making a live performance on the system.)

The Hexadome is the new entry, launching this week. What makes it unique is that it couples visuals and sound in a single installation that will tour. Then it will make a round trip back to Berlin, where the long-term plan is to establish a permanent home for this kind of work. It’s the first project of an organization dubbing itself the Institute for Sound and Music, Berlin – with the hope that the name will someday grace a permanent museum dedicated to “sound, immersive arts, and electronic music culture.”

For now, ISM just has the Hexadome, so it’s parked in the large atrium of the Martin Gropius Bau, a respected museum in the heart of town.

And it’s launching with a packed program – a selection of installation-style pieces, plus a series of live audiovisual performances. On the live program:

Michael Tan’s design for CAO.

Holly Herndon & Mathew Dryhurst
Tarik Barri
Lara Sarkissian & Jemma Woolmore
Frank Bretschneider & Pierce Warnecke
Ben Frost & MFO
Peter van Hoesen & Heleen Blanken
CAO & Michael Tan
René Löwe & Pfadfinderei

Brian Eno’s installation launches a series of works that simply play back on the system, though the experience is still similar – you wander in and soak in projected images and spatial sound. The other artists all contributed installation versions of their work, and there’s also a collaboration between Tarik Barri and Thom Yorke.

But before we get to the content, let’s consider the system and how it works.

Hexadome technology

The two halves of the Hexadome describe what this is – it’s a hexagonal projection arrangement, plus a dome-shaped sound array.

I spoke to Holger Stenschke, Lead Support Technician from ZKM Karlsruhe, as well as Toby Götz, director of the Pfadfinderei collective. (Toby doubles up here, as the designer of the visual installation and as one of the visual artists.) They filled me in on both the technical details and the intention of the whole thing.

Projection. The visuals are the simpler part to describe. You get six square projection screens, arranged in a hexagon with large gaps in between. These are driven by two new iMacs Pro – Apple’s current top-of-the-range machine as the Hexadome launches – supplemented by external GPUs connected via Thunderbolt. MadMapper runs on the iMacs, and the artists are free to fill all those pixels as needed. (Each screen is a little less than 4K resolution – so multiply that by six. Some shows will actually require both iMacs Pro.)

Jemma Woolmore shares this in-progress image of her visuals, as mapped to those six screens.

Sound. In the hemispherical sound array, there are 52 Meyer Sound speakers, arranged on a frame that looks a bit like a playground jungle gym. Why 52? Well, they’re arranged into a triangular tessellation around the dome. That’s not just to make this look impressive – it means the sound dispersal from the speakers lines up in such a way that you cover the space with sound.

The speakers also vary in size. There are three subwoofers, spaced around the hexagonal periphery, bigger speakers with more bass toward the bottom, and smaller, lighter speakers overhead. In Karlsruhe, where ZKM has a permanent installation, more of the individual speakers are larger. But the Hexadome is meant to be portable, so weight counts. I can also attest from long hours experimenting on 4DSOUND that, for whatever reason, lower-frequency sounds seem to make more sense to the ears closer to the ground, and higher-frequency sounds overhead. There’s actually no obvious reason for this – researchers I’ve heard who’ve investigated how we localize sound find no significant difference in how well we localize across the frequency range. (Ever heard people claim it doesn’t matter where you put a subwoofer? They’re flat out wrong.) So it’s more an expectation of experience than anything else, presumably. (Any psychoacoustics researchers wanting to chime in in the comments, feel free.)

Audio interfaces. MOTU are all over this rig, because of AVB – the IEEE standard for pro audio networking, which lets you run sound over Ethernet connections. MOTU AVB audio interfaces connect to the AVB network that drives all those individual speakers.

Sound spatialization software. Visualists here are pretty much on their own – your job is to fill up those screens. But on the auditory side, there’s actually some powerful and reasonably easy to understand software to guide the process of positioning sound in space.

It’s actually significant that the Hexadome isn’t proprietary. Whereas the 4DSOUND system uses its own bespoke software and various Max patches, the Hexadome is built on some standard tools.

Artists have a choice between IRCAM’s Panoramix and Spat, and ZKM’s Zirkonium.

IRCAM Spat.

ZKM Zirkonium – here, a screenshot of the work of Lara Sarkissian (in collaboration with Jemma Woolmore). Thanks, Lara, for the picture in progress! (The artists have been in residence at ZKM, working away on this.)

On the IRCAM side, there’s not so much one toolchain as a bunch of smaller tools that work in concert. Panoramix is the standalone mixing tool an artist is likely to use, and it works with, for example, JACK (so you can pipe in sound from your software of choice). Spat, meanwhile, comprises a Max/MSP implementation of IRCAM’s spatialization, perception, and reverb tools. Panoramix is deep software – you can choose among various spatialization techniques per sound source, and the reverb and other effects are capable of some terrific sonic results.

Zirkonium is what the artists on the Hexadome seemed to gravitate toward. (Residencies at ZKM offered mentorship on both tools.) It’s got a friendly, single UI, and it’s free and open source. (Its sound engine is actually built in Pure Data.)

Then it’s a matter of whether the works are made for an installation, in which case they’re typically rendered (“freezing” the spatialization information) and played back in Reaper, or if they’re played live. For live performance, artists might choose to control the spatialization engine by sending OSC data, and using some kind of tool as a remote control (an iPad, for example).
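For artists taking the live route, the OSC wire format itself is simple enough to assemble by hand. Here’s a minimal Python sketch that builds a raw OSC packet from scratch – note that the `/source/1/xyz` address pattern is purely hypothetical; consult the Zirkonium or Spat documentation for each engine’s actual OSC namespace:

```python
import struct

def _pad_string(s):
    """OSC strings are ASCII, null-terminated, padded to a 4-byte boundary."""
    b = s.encode('ascii') + b'\x00'
    return b + b'\x00' * ((-len(b)) % 4)

def osc_message(address, *floats):
    """Encode a single OSC message with float32 arguments (big-endian)."""
    typetags = ',' + 'f' * len(floats)
    payload = b''.join(struct.pack('>f', f) for f in floats)
    return _pad_string(address) + _pad_string(typetags) + payload

# Hypothetical address pattern -- check your spatialization engine's docs.
packet = osc_message('/source/1/xyz', 1.5, -0.5, 2.0)
# To send: socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, (host, port))
```

In practice, most artists would reach for a ready-made OSC library or a touch-controller app rather than hand-rolling packets, but seeing the format demystifies what the iPad is actually sending.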

I’ve so far only heard Brian Eno’s piece (both the sound check the other day and the installation), but the spatialization is already convincing. Spatialization always works best when there are limited reflections from the physical space. The more reflected sound reaches your ear, the harder it is to localize the sound source. (The inverse is true as well: the reason adding reverberation to a part of a mix seems to make it more distant in the stereo field is that you already recognize that you hear more direct sound from nearby sources and more reflected sound from far-away ones.)

Holger tells CDM that the team worked to mitigate this effect by precisely positioning speakers in such a way that, once you’re inside the “dome” area, you hear mainly direct sound. In addition, a multichannel reverb like the IRCAM plug-in can be used to tune virtualized early reflections, making reverberation seem to emanate from beyond the dome.

In Eno’s work, at least, you have a sense of being enveloped in gong-like tones that emerge from all directions, across distances. You hear the big reverb tail of the building mixed in with that sound, but there’s a blend of virtual and real space – and there’s still a sense of precise distance between sounds in that hemispherical field.

That’s hard to describe in words, but think about the leap from mono to stereo. While mono music can be satisfying to hear, stereo creates a sense of space and makes individual elements more distinct. There’s a similar leap when you go to these larger immersive systems, and more so than the cartoonish effects you tend to get from cinematic multichannel – or even the wrap-around effects of four-channel.

What does it all mean?

Okay, so that’s all well and good. But everything I’ve described – multi-4K projection, spatial audio across lots of speakers – is readily available, with or without the Hexadome per se. You can actually go download Zirkonium and Panoramix right now. (You’ll need a few hundred euros if you want plug-in versions of all the fancy IRCAM stuff, but the rest is a free download, and ZKM’s software is even open source.) You don’t even necessarily need 52 speakers to try it out – Panoramix, for instance, lets you choose a binaural simulation for trying stuff out in headphones, even if it’ll sound a bit primitive by comparison.

The Hexadome for now has two advantages: one, this program, and two, the fact that it’s going mobile. Plus, it packages all of this in one particular, proven configuration.

The six square screens may at first seem unimpressive, at least in theory. You don’t get the full visual effect that you do from even conventional 180-degree panoramic projection, let alone the ability to fill your visual field as full domes or a big IMAX screen can do. Speaking to the team, though, I understood that part of the vision of the Hexadome was to project what Toby calls “windows.” And because of the brightness and contrast of each screen, they’re still stunning when you’re there with them in person.

This fits Eno’s work nicely, in that his collaboration with Peter Chilvers projects gallery-style images into the space, like slowly transforming paintings in light.

The gaps between the screens and above mean that you’re also aware of the space you’re in. So this is immersive in a different sense, which fits ISM’s goal of inserting these works in museum environments.

How that’s used in the other works, we’ll get to see. Projection, it seems, is a game of tradeoffs – domes give you greater coverage and real immersion, but distort images and create reflections in both sound and light. (On the other hand, domes have been in architectural use for centuries, as have rectangles, so don’t expect either to go away!)

The question “why” is actually harder to answer. There wasn’t a clear statement of mission from ISM and the Hexadome and its backers – this thing is what it is because they wanted it to be what it is, essentially. There’s no particular curatorial theme to the works. They touted some diversity of established and emerging artists. Though just about anyone may seem like they’re emerging next to Eno, that includes both local Berlin and international artists from Peru and the USA, among others, and a mix of ages and backgrounds.

The overall statement launching the Hexadome was therefore of a blank canvas, which will be approached by a range of people. And Eno/Chilvers made it literally seem a canvas, with brilliantly colored, color field-style images, filling the rectangles with oversized circles and strips of shifting color. Chilvers uses a slightly esoteric C++ raytracing engine, generating those images in realtime, in a digital, generative, modern take on the kind of effects found in the paintings of Georges Seurat. Eno’s sounds were precisely what you’d expect – neutral chimes and occasional glitchy tune fragments, floating on their own or atop gentle waves of noise that arrive and recede like a tide. Organic if abstract tones resonated across the harmonic series, in groupings of fifths. Both image and sound are, in keeping with Eno’s adherence to stochastic ideas, produced in real-time according to a set of parameters, so that nothing ever exactly repeats. These are not ideas originated by Eno – stochastic processes and chance operations obviously have a long history – but as always, his rendition is tranquil and distinctively his own.

In a press conference this morning, Eno said he’d adjusted that piece until the one we heard was the fifth iteration – presumably an advantage of a system that’s controlled by a set of parameters. (Eno works with a set of software that allows this. You can try similar software, and read up on the history of the developers’ collaboration with the artist, at the Intermorphic site.)
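To make that “set of parameters” idea concrete: a generative piece in this mold is essentially a parameter set plus a random process, so “iterating” means tweaking parameters and reseeding rather than re-recording. This is purely an illustrative Python sketch – not Eno’s or Intermorphic’s actual system – using the harmonic-series-and-fifths flavor described above:

```python
import random

# Hypothetical parameter set -- an illustration, not the artist's real software.
PARAMS = {
    "fundamental_hz": 55.0,                # A1
    "harmonics": [2, 3, 4, 6, 8, 9, 12],   # octaves and fifths above the fundamental
    "min_gap_s": 1.0,                      # shortest silence between events
    "max_gap_s": 7.0,                      # longest silence between events
}

def events(params, duration_s=60.0, seed=None):
    """Yield (onset_seconds, frequency_hz) note events.
    A new seed gives a new rendition of the same piece; the parameters are the score."""
    rng = random.Random(seed)
    t = 0.0
    while t < duration_s:
        yield t, params["fundamental_hz"] * rng.choice(params["harmonics"])
        t += rng.uniform(params["min_gap_s"], params["max_gap_s"])

piece = list(events(PARAMS, duration_s=60.0, seed=5))
```

Adjusting the piece, in this framing, means editing `PARAMS` – which is why a “fifth iteration” is cheap in a way a fixed recording never is.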

What strikes me about beginning with Eno is that it sets a controlled tone for the installation. Eno/Chilvers’ aesthetic was at home on this system; the arrangement of screens fit the set of pictures, and Eno’s music is organic enough that when it’s projected into space, it seems almost naturally occurring.

And Eno in a press conference found a nice metaphor for justifying the connection of the Hexadome’s “windows” to the gallery experience. He noted that Chilvers’ subtly-shifting, ephemeral color compositions exploded the notion of painting or still image as something that could be consumed as a snapshot. Effectively, their work suggests the raison d’etre that the ISM curators seemed unable to articulate. The Hexadome is a canvas that incorporates time.

But that also raises a question. If spatial audio and immersive visuals have often been confined to institutions, this doesn’t so much liberate them as make a statement as to how museums can capitalize on deploying them. An inward-looking set of square images also seems firmly rooted in the museum frame (literally).

And the very fact that Eno’s work is so comfortable sets the stage for some interesting weeks.

Now we’ll see whether the coming lineup can find any subversive threads with the same setup, and in the longer run, what comes of this latest batch of installations. Will these sorts of setups incubate new ideas – especially as there’s a mix of artists and engineers engaged in the underlying tech? Or will spend-y installations like the Hexadome simply be a showy way to exhibit the tastes of big institutional partners? With some new names on that lineup for the coming weeks, I think we’ll at least get some different answers to where this could go. And looking beyond the Hexadome, the power of today’s devices to drive spatialization and visuals more easily means there’s more to come. Stay tuned.

Institute for Sound and Music, Berlin

Martin Gropius Bau: Program – ISM Hexadome

In coming installments, I’ll look deeper at some of these tools, and talk to some of the up-and-coming artists doing new things with the Hexadome.

The post Inside a new immersive AV system, as Brian Eno premieres it in Berlin appeared first on CDM Create Digital Music.

Let’s talk craft and vision in live audiovisual performance, media art

We’re gathering with top digital media artists this week – and you can tune in. Here’s a preview of their work, on the eve of Lunchmeat Festival, Prague.

Transmedia work and live visual performance exist at sometimes awkward intersections, caught between economies of the art world and music industry, between academia and festivals. They mix techniques and histories that aren’t always entirely compatible – or at least that can be demanding in combination. But the fields of media art and live visuals also represent areas of tremendous potential for innovation – where artists can explore immersive media, saturate senses, and apply buzzword-friendly technologies from AI to VR in experimental, surprising ways.

Our goal: bring together some artists for some deep discussion. And we have a great venue in which to do it. Prague’s Lunchmeat Festival has exploded on the international scene. Even sandwiched against Unsound Festival in Krakow and ADE in Amsterdam, it’s started to earn attention and big lineups, thanks to the intrepid work of an underground Czech collective. (The rest of the year, the Lunchmeat crew can usually be found doing installations and live visual club work of their own.)

Heck, even the fact that I’m stumbling over how to word this says something about the hybrid forms we’re describing, from live cinema to machine learning-infused art.

Since most of you won’t be in Prague this week, we’ll livestream and archive those conversations for the whole world.

Follow the event on Facebook for the schedule and add CDM to your Facebook likes to get a notification when our video starts, and stay tuned to CDM for the latest updates.

To whet your appetite (hopefully), here’s a look at the cast of characters involved:

Katerina Blahutova [DVDJ NNS]

Let’s start, for a change, with the home Prague team. Katerina is a great example of a new generation of artists coming from outside conventional disciplinary pathways. She graduated in architecture and urbanism, then shifted that interest (consciously or otherwise) to transforming whole club and performance environments. She’s been a VJ and curator with Lunchmeat, designed releases and videos for Genot Centre (as well as graphic design for bands), then went on to co-found the LOLLAB collective and tour with MIDI LIDI.

Don’t miss her poppy, saturated, post-Internet surrealism – hyperreality with concoctions of slime and object, opaque luminosities and lushly-colored, fragmented textures. (I can rip off this bit of the program; I wrote it originally!)

Oh yeah, and she made this nice teaser loop for this week’s festivities:

teaser loop from upcoming vj set for @malumzkole at @lunchmeat_cz #dvdjnns #wip

A post shared by Katla / DVDJ NNS (@katlanns)

Ignazio Mortellaro [Stroboscopic Artefacts, Roots in Heaven]

Turn that saturation knob all the way down again, and step into the world of Stroboscopic Artefacts. Ignazio is the visual imagination behind all of that label’s distinctive look, from album design (as beautifully exhibited) to videos. He’ll be talking to us about that ongoing collaboration.

In addition, Ignazio is doing live visuals for a fresh project. Allow me to quote myself:

Roots in Heaven, a label owner and accomplished solo artist hidden behind a mesh mask and feathers, joins visualist Ignazio Mortellaro to present a new live audiovisual work. This comes on the heels of this year’s Roots in Heaven debut record “Petites Madeleines” (a Proust reference), out on K7! offshoot Zehnin. The result is a journey into “concentrated sensory impression” in sound, light, and sensation.

Gregory Eden [Clark]

One of the goals Lunchmeat’s curators and I discussed was elevating the visibility of people working on visual materials. But unlike the ‘front man’/’front woman’ role of a lot of the music artists, the position some of these people fill goes beyond just sole artist to broader management and production. Maybe that’s even more reason to pay attention to who they are and how they work.

Greg Eden, who’s at Lunchmeat with Clark, is a great example. With a university physics degree, he went on to Warp, where he developed Clark and Boards of Canada. He’s now full-time managing Clark, and in addition to that … uh, full time job … manages Nathan Fake (with visuals by Flat-e) and Gajek and Finn McNicholas.

Visuals are often synonymous with just “something on a projector,” live cinema-style. But Clark’s show is a full-on stage show. For the stage adaptation of Death Peak, the artist works with choreographer Melanie Lane, dancers Kiani Del Valle and Sophia Ndaba, and lights from London’s Flat-E. Think of it as rave theater. That makes Greg’s role doubly interesting, as someone has to pull all of this together.

Novi_sad [with Ryoichi Kurokawa, SIRENS]

The collaboration between Novi_sad and Ryoichi Kurokawa is one of the more important ones of the moment, its nervous, quivering economic data visualization a fitting expression of our anxious zeitgeist. Here’s a glimpse of that work:

Ryoichi Kurokawa and Novi_sad have worked together to produce an audiovisual show in five etudes that produces a dramaturgy of data, weaving the numbers of the economic downturn into poignant, emotional narrative. Data and sound quiver and dematerialize in eerie, mournful tableaus, re-imagining the sound works of Richard Chartier, CM von Hausswolff, Jacob Kirkegaard, Helge Sten, and Rebecca Foon. Novi_sad is self-taught composer Thanasis Kaproulias, himself coming not only from the nation that has borne the brunt of Europe’s crisis, but holding a degree in economics. As a perfect foil to his sonic landscapes, Japan’s Ryoichi Kurokawa has made a name in expressive, exposed digital minimalism.

Marcel Weber (MFO) [Ben Frost] / Theresa Baumgartner [Jlin]

Ben Frost is already interesting from a collaborative standpoint, having worked with media like dance (Chunky Move, Wayne McGregor). The collaboration with MFO brings him together with one of Europe’s leading visual practitioners; Marcel will join us to talk about that but hopefully about his work for the likes of Berlin Atonal Festival, as well.

MFO has also designed the visuals for the sensational Jlin, but Theresa Baumgartner is touring with it – as well as working on production for Boiler Room. So, we have Theresa joining us from something of the in-the-trenches production perspective, as well.

Gene Kogan

VJing and live cinema are rooted in conventional compositing and processing. Even when they’re digital, we’re talking techniques mostly developed decades ago.

For something further afield, Gene Kogan will take us on a journey into deep generative work, machine learning, and the new aesthetics that become possible with it. As AI begins to infuse itself into digital media, artists are indeed grappling with its potential. Gene is offering talks and workshops both here at Lunchmeat and at Ableton Loop next month, so now is a great time to check in with him. A bit about him:

Gene Kogan is an artist and a programmer who is interested in generative systems, artificial intelligence, and software for creativity and self-expression. He is a collaborator within numerous open-source software projects, and leads workshops and demonstrations on topics at the intersection of code and art. Gene initiated and contributes to ml4a, a free book about machine learning for artists, activists, and citizen scientists. He regularly publishes video lectures, writings, and tutorials to facilitate a greater public understanding of the topic.

I’ll be reviewing the resources he has for artists soon, too, so do stay tuned.

Gabriela Prochazka

Also coming from Prague, Gabriela has been guiding the INPUT program for Lunchmeat this fall, as well as being one of my collaborators (our installation is part of the exhibition this week). Its contents are mysterious so far, but a live AV work with Gabriela and Dné is also on tap.

See you in Prague or on the Internet, everyone!


http://lunchmeatfestival.cz/2017/

The post Let’s talk craft and vision in live audiovisual performance, media art appeared first on CDM Create Digital Music.

Learn how to make trippy oscilloscope music with this video series

Call it stimulated synesthesia: there’s something really satisfying when your brain sees and hears a connection between image and sound. And there’s some extra magic when the image is on an oscilloscope. A new video series on YouTube shows you how to make this effect yourself.

Jerobeam Fenderson has begun a series on so-called “oscilloscope music.” The oscilloscope isn’t making the sounds – that’s the opposite of how an oscilloscope works; it’s a signal visualization device. But by designing some nice reactive eye candy for the oscilloscope, then connecting an appropriate, edgy minimal music signal, you get, well – this:

Oooh, my, that’s tasty. Like biting into a big, juicy ripe [vegetarian version] tomato [meat-lovers version] raw steak.

So in the tutorial series, Fenderson clues us in to how he makes all this happen. And this could be an economical thing to play around with, as you’ll often find vintage oscilloscopes around a studio or on sale used.
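The underlying trick is that the scope runs in XY mode: the left audio channel drives horizontal deflection and the right drives vertical, so a carefully designed stereo signal literally draws a picture. A minimal Python sketch of the idea – a plain circle rendered as 16-bit stereo PCM (the sample rate and amplitude here are arbitrary choices, not Fenderson’s):

```python
import math
import struct

SAMPLE_RATE = 48000  # arbitrary; any rate your audio interface supports works

def circle_frames(freq=100.0, seconds=1.0, amplitude=0.8):
    """Yield (x, y) pairs tracing a circle: left channel = X, right channel = Y."""
    n = int(SAMPLE_RATE * seconds)
    for i in range(n):
        t = 2 * math.pi * freq * i / SAMPLE_RATE
        yield amplitude * math.cos(t), amplitude * math.sin(t)

def to_pcm_bytes(frames):
    """Pack (x, y) float pairs into 16-bit little-endian stereo PCM.
    (Raw samples only -- wrap them in a WAV header, e.g. via the wave module, to play.)"""
    out = bytearray()
    for x, y in frames:
        out += struct.pack('<hh', int(x * 32767), int(y * 32767))
    return bytes(out)

pcm = to_pcm_bytes(circle_frames(freq=100.0, seconds=0.1))
```

Play that file into a scope in XY mode and you get a circle spinning 100 times a second; more elaborate shapes are just more elaborate parametric curves, which is exactly the craft the tutorial series digs into.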

Don’t miss the description on YouTube – there are tons of resources in there; it’s practically a complete bibliography on the topic in itself.

Part 2:

Plus, accompanying this series is an additional video and Max for Live patch demonstrating aliasing and sample rate, covered today on the CDM Newswire / Gear (our new home for breaking short-form news):
This Max for Live patch demonstrates critical digital audio concepts

More:
http://oscilloscopemusic.com/

The post Learn how to make trippy oscilloscope music with this video series appeared first on CDM Create Digital Music.

Bastl’s wild Peter Edwards softPop synth now in preorders

Okay, so we know you can keep remaking classic instruments and give people a good time. But what if you want something new and crazy? Can you bottle sonic weirdness and make it work for other people?

The first time I saw Peter Edwards play live was at an event we hosted in New York. He had a small box with a large spherical light on the top – and then proceeded to deafen and blind the audience in a maelstrom of noise and colored flashes.

The impressive thing about the softPop when you first play it is that it takes all that madness and makes it portable and eminently playable. You can crank it and make powerful noise. You can dial it into a sweet spot and get some grooving club-friendly acid basslines. You can dial it somewhere else, and get delicate watery bloops or alien speak.

And, while I may offend people here, I love the fact that you don’t necessarily need to know which fader you’re moving or what does what. So, sure, newcomers will be able to fiddle with the six faders and discover new sounds intuitively. But – let’s get real – that’s just as fun for experts, to have that feeling of unexpected sonic magic, that extrasensory experience of playing the instrument. And in even a short session at SuperBooth, that was unquestionably the impression I had of this instrument.

softPop represents years of Peter’s labor, culminating in a collaboration with Bastl Instruments and even a move to the Czech Republic. And while it was already an impressive evolution in Berlin this spring, it seems these crazy kids have continued the hard work of refining the box.


What you get is a demonstration of how known ingredients can be combined in very new ways. It’s a bit like putting one really terrific analog patch in a lunchbox. So the two triangle-core oscillators are wrapped in heavy feedback – the source of all the gorgeous sonic uncertainty – plus a filter and sample & hold. That’s probably already worth the price of admission, but there’s external signal processing, too, with an envelope follower and sync. Plus you get a pattern generator so you can start crafting basslines and dances of noises right away, and a mini patch bay for semi-modular operation or patching to other gear.
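As a rough digital caricature of that patch – and only a caricature, since the real softPop is fully analog – here’s what two cross-modulating triangle oscillators plus a sample & hold look like in code (all rates and modulation depths are made-up values, not the hardware’s):

```python
import math

SR = 48000  # sample rate for this sketch

def triangle(phase):
    """Triangle wave in [-1, 1] for a phase in [0, 1)."""
    return 4 * abs(phase - 0.5) - 1

def softpop_sketch(seconds=0.1, f0=110.0, f1=220.0, xmod=0.3, sh_rate=50.0):
    """Two triangle oscillators bending each other's pitch, with osc 0's output
    stepped through a sample & hold before it modulates osc 1."""
    p0 = p1 = 0.0
    held = 0.0
    sh_period = int(SR / sh_rate)
    out = []
    for i in range(int(SR * seconds)):
        if i % sh_period == 0:
            held = triangle(p0)  # sample osc 0, hold until the next clock tick
        p0 = (p0 + f0 * (1 + xmod * triangle(p1)) / SR) % 1.0
        p1 = (p1 + f1 * (1 + xmod * held) / SR) % 1.0
        out.append(triangle(p1))
    return out

samples = softpop_sketch()
```

Even this toy version shows why the instrument is hard to predict: each oscillator’s pitch depends on the other’s instantaneous output, so small fader moves push the pair between stable tones and chaotic warble.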

And it’s eminently portable – batteries, built in speaker, and an optional wooden backplate that doubles as a carrying handle.

309 EUR (pre-tax). Preorder now to get the first batch at the end of August.

Oh yeah and — did we mention it’s also a light synth? There’s an RGB LED there for a miniature version of Peter’s light show. And don’t forget the “secret hack chamber.”

For anyone with the feeling the synth world has nothing new to offer – fear not, strange survives.

Specs:
fully analog core and signal path 
6 faders for controlling two VCOs and VCF and their cross modulations 
two wide range triangle-core VCOs 0 & 1 
quantizer for VCO 0 (auto-tuner) 
VCO 1 has variable waveshape via the modulation setting 
∞ resonant state variable VCF (bandpass, lowpass, highpass) 
external input with gain and envelope follower for intuitive sync of VCO 1 
track & hold circuit for stepped modulations 
looping pattern generator with two patterns P1 and P2 
RGB LED for psychedelic experience 
25-point patchbay  
secret hack chamber at the back for adventurers 
aluminum body enclosure 
built-in speaker 
wooden handle backplate as accessory (sold separately) 


http://www.bastl-instruments.com/instruments/softpop/

We’re watching for the powerful THYME processor, too; Bastl notes that production delays will push its sales to September.

The post Bastl’s wild Peter Edwards softPop synth now in preorders appeared first on CDM Create Digital Music.

Inside the transformational AV duo of Paula Temple and Jem the Misfit

Paula Temple and Jem the Misfit are working on the latest iteration of a project about transformation. It melts and fragments, crystallizes and forms, from its rich palette of hybridized techno and ambient textures, sonic and visual alike.

And now, it’s set to be involved in some way in transformation beyond just the confines of a single performance – as a statement about what society might do differently and how artists can contribute. With NODE Forum in Frankfurt am Main, Germany coming this weekend, the duo will premiere Nonagon II, a sequel to their stunning 2014 AV show in Amsterdam’s retina-popping EYE cinema (as one of the real highlights of that year’s Amsterdam Dance Event). They’re looking to extend a profound but sadly, rarely-seen collaboration into updated structures while engaging NODE’s activist theme, “Designing Hope.”

That makes for a perfect time for CDM to join the two together – Paula Temple, the techno legend (R&S Records) known for her brutal productions, and Jem the Misfit, one of the top practitioners of live visual performance.

For reference, here’s a look at the previous iteration, though we’re keen to see the new evolution:

Jem the Misfit (aka Jemma Woolmore), left, with Paula Temple, right.


CDM: First, I think from an AV standpoint, it’s really significant that you’re together on stage. Obviously that sends a message to the audience, but what does it mean for playing together? Are you communicating there – even if just by your presence?

Jem: Paula and I work closely together before and during the show. Being on stage physically is really important for timing and connection in the performance; we give each other verbal cues, but also react with our body language. We also work closely together before the show, practicing and discussing the ideas and flow of the performance. It is also important that we are both onstage to highlight that this is a collaboration between two artists working together to build the show.

Jemma, it feels like what you’re doing is really cinematic, but it also breaks up that rectangle (with geometries, etc.). What’s your approach to the screen here? Of course, in the first version, you were in an actual cinema – where might this go in future?

Jem: Breaking the regular rectangle of the screen is something I try to achieve in all my performances. With the Nonagon show, I have a clear geometric language built around the nine-sided nonagon form and I construct abstract forms using MadMapper to translate the visuals through these geometries. As you say, the Nonagon show is highly cinematic and was originally designed for a cinema context for our show at The Eye in Amsterdam. For Nonagon II at NODE, I am using a little less of the Nonagon geometries and instead moving from these fixed, tight geometries, eventually breaking their borders and allowing the visuals to flow across the screen as the show develops. I am also interested in putting emphasis on light intensity and color to influence mood in this version of the show. In future iterations I could envisage this leading to more development in using lighting as well as video and bringing the geometries off the main screen and out into 3D space.

paula

Paula, this is a different sound world than a lot of people know from you. Is there a connection to the techno productions they may know better? Does that impact the approach to timbre, to rhythm?

Paula: I think it is the same sound world, just not as strictly dance floor-aimed. But I know what you mean; it even surprises me how easily people who follow my music recognize my style in my more experimental live sets. It is one reason why I prefer to perform the experimental sets at festivals such as UNSOUND or INTONAL or the NONAGON II AV at NODE; the crowd knows my music more like an emotional expression and can therefore connect to the music beyond a released piece of music. There are still recognizable elements, like from my track called Deathvox. When I’m producing I never consciously think about timbre or rhythm — that way of thinking is too detached. I’m feeling emotionally, I’m opening my sensory gating channels, connecting feelings into electronic sound without thinking too technically, and therefore being deeply immersed in that state to give a translation of those emotions through sound. People who really like my music seem to be tuned into that state too.

https://soundcloud.com/paulatemple/deathvox-deathvox-ep [embedding not allowed here]

Can you tell us a bit about the sound world here? What are its sources; how was it produced?

Paula: The sources to me are the thoughts and feelings that develop into these pieces. Lately, they have come from reflecting on social injustices happening and dystopian dreams, or even falling asleep to movies and waking up at a scary moment!

For example, one track has the working title “Earth,” from a recurring dream where everything green — plants, trees, vegetables — turns black and dies within seconds, and Earth is so hurt, so angry at what we humans have done, that Earth asks the Sun for help and asks the Sun to eat Earth. I remember at the time of making “Earth,” I was trying to watch the movie Melancholia and, as always, I fall asleep and then I’m waking up as the movie ends, still half asleep, wondering what’s happening!

When producing, I am working in Ableton Live, with customized drum devices I’ve developed in the last 3 years and jamming on my [Dave Smith Instruments] Oberheim OB-6 or a virtual instrument like Tension [in Ableton Suite].

You’ve changed the music here for this edition, I know. What’s new in this version?

Paula: We’ve decided to keep the remix I made for Fink in the show, as the lyrics literally relate to hope, to not giving up. Plus there are new pieces relating to what Jem has also been inspired by lately, such as corporate-made environmental or socioeconomic regressions and aggression, entanglement, or the Angela Davis book Freedom Is a Constant Struggle.

Jemma, how did you work on the visual material; how was it influenced by that music? I know there was some shooting of stuff melting, but … how did that come about; where was the design intention on your side and how did you collaborate together on that?

Jem: For the original Nonagon show, Paula and I developed the music and visuals in tandem, based around a common structure that included working in 9 parts and using 9 specific actions (such as distort, reverse, stretch, etc.) to apply visually or musically. This led me to find ways of manipulating form both in virtual space and using real forms, as you say, building and melting geometric objects and capturing this in time-lapse. So visually, Nonagon was about applying these specific actions to geometries and moving through an exploration of form, in connection with Paula also manipulating her sound in similar ways.

In Nonagon II, the focus has shifted from purely formal aims to more specific thematic ideas. When NODE approached me about performing at the festival, their theme ‘Designing Hope’ really caught me as a challenge, and I knew Paula would also be interested in tackling this theme. When I contacted Paula about NODE, we both agreed that we should shift the focus in Nonagon to try and address this idea of designing or generating hope through our performance – hence creating Nonagon II.

Our approach to the theme is that there can be no hope without action. So as well as Paula’s action to donate her fee to the charity Women in Exile, the new trajectory for Nonagon II is to move from a place of fear through to an empowering place of action. Through the show we transition from simplification to complexity, individuality to multiplicity, fear to action.


Visually, I am signifying this (again) through geometries that develop from simple shapes into complex systems, falling, melting and merging along the way, using color and light intensity to transform the emotional impact throughout the show.

Interestingly, in the time since we last worked together – which is over a year – Paula and I have found that our ideas and the development of our work have followed similar processes and align in many areas. We have both independently decided to use the term ‘entanglement,’ this idea that everything is linked and that over-simplification of systems, ignoring their relationships to one another, is incredibly dangerous – for instance, the supposed self-maintaining economic system championed by neo-liberalism, which ignores its entangled relationship with climate and natural resource systems. We have also both read Angela Davis’s book ‘Freedom Is a Constant Struggle,’ which likewise talks about building connections across political movements and the importance of moving outside narrowly-defined communities and working together.

Also, the idea of acknowledging fragility in the balance of all our systems, and having some humility in regard to our place in this universe, has been important for both our practices.

Can you each describe a bit your live rig onstage? Now, presumably we’re meant to be watching the screen, not you two, but is it important for you to be able to make this a live improvisation?

Jem: For the visual setup, I am running Resolume [VJ/visual performance tool/media server] and MadMapper software, and using the Xone:K2 MIDI controller from Allen & Heath. There is no pre-programmed timeline in any of this setup, so it is all improvised. Paula and I like to practice the performance several times so that we have worked through the flow and impact of specific points in the show, but we are able to improvise fully, making each performance unique.

Paula: My setup is simple — Ableton Live, Push 2 controller and Allen & Heath K2 controller. I care more about the music working succinctly with Jem’s visuals — to encourage the audience to feel, to reflect within, or get a sense of taking some kind of positive action — than about making it a live improvisation.


“Designing Hope” is the theme of this year’s NODE. Paula, I understand you donated your fee – what’s your intention as far as doing something socially active, with this project, or with other projects?

Paula: Considering the theme ‘Designing Hope,’ a simple question came up to reflect on: who needs hope the most right now? Then I looked at who locally is giving hope, and I learned about Women in Exile, a non-profit organization founded in 2002 by refugee women, which works closely with refugee women in and around Brandenburg and Berlin.

In their activities, Women in Exile visit the refugee camps in Brandenburg to offer proactive support to refugee women from the perspective of those affected, to exchange information on what is going on, and to gather information on the needs of women living in the camps. They organize seminars and workshops for refugee women on different topics: how to improve their difficult living situation, develop perspectives to fight for their rights in the asylum procedure, and defend themselves against sexualized/physical violence, discrimination and exclusion. They present the current issues, such as the hopelessness of deportation, to different organizations nationwide in order to raise awareness of refugee women’s issues in society. They give an incredible amount of energy and support to women whose worlds have been turned upside down. Donating a fee is the least we could do. Our hope, with the best intentions, is to invite others at the event to think about who we are designing hope for.

[Ed.: I’m familiar with this organization, too – you can find more or contact them directly:]

https://www.women-in-exile.net/
info[at]women-in-exile.net

What does it mean to be involved with NODE here, and with this community? (Realizing neither of us is a VVVV user, Jemma, but of course there’s more than that! Curious if that’s meaningful to you to be able to soak up some of that side of this, too.)

Jem: I think we are both excited about being involved at NODE this year and interacting with a community that is working at the intersection of technology and art, as well as pushing ideas around how the art/tech crossover can be used to inspire communities outside of art and tech. This is where I see our performance fitting, even if we are not specifically using VVVV. Personally, I am looking forward to a few extra days at the festival and exploring the possibilities of VVVV, as well as meeting the VVVV community and exploring possible crossovers in our work.

https://nodeforum.org/

http://jemthemisfit.com/

http://paulatemple.com/

The post Inside the transformational AV duo of Paula Temple and Jem the Misfit appeared first on CDM Create Digital Music.