Immerse yourself in the full live AV concert by raster’s Belief Defect

Computer and modular machine sounds collide with explosions of projected particles and glitching colored textures. Now the full concert footage of the duo Belief Defect (on Raster) is out.

It’s tough to get quality full-length live performance video – when I previously wrote about this performance, I had to refer to a short excerpt; a lot of the time you can only say “you had to be there” and point to distorted cell phone snippets. So it’s nice to be able to watch a performance end-to-end from the comfort of your chair.

Transport yourself to the dirigible-scaled, hollowed-out main hall of the Kraftwerk power plant (even the mighty Tresor club is just its basement), during Atonal Festival. It’s a set that’s full of angry, anxious, crunchy-distorted goodness:

(Actually, even having listened to the album a lot, it’s nice to sit and retrace the full live set and see how they composed/improvised it. I would say record your live sets, fellow artists, except I know how the usual Recording Curse works – the day when the Zoom’s batteries are charged up, the sound isn’t distorted, and you remember to hit record is so often … the day you play your worst. They escaped this somehow.)

And Belief Defect represent some of the frontier of what’s possible in epic, festival mainstage-sized experimentalism, both analog and digital, sonic and visual. I got to write extensively about their process, with some support from Native Instruments, and more in-depth here:

BELIEF DEFECT ON THEIR MASCHINE AND REAKTOR MODULAR RIG [Native Instruments blog]

— with more details on how you might apply this to your own work:

What you can learn from Belief Defect’s modular-PC live rig

While we’re talking Raster label – the label formerly Raster-Noton, before it again divided so Olaf Bender’s Raster and Carsten Nicolai’s Noton could each focus on their own direction – here’s some more. Dasha Rush joined Electronic Beats for a rare portrait of her process and approach, including the live audiovisual-dance collaboration with dancer/choreographer Valentin Tszin and, on visuals, Stanislav Glazov. (Glazov is a talented musician as well, producing and playing as Procedural aka Prcdrl, and a total TouchDesigner whiz.)

And Dasha’s work, elegantly balanced between club and experimental contexts with everything in between, is always inspired.

Here’s that profile, though I hope to check in more shortly with how Stas and Valentin work with Kinect and dance, as well as how Stas integrates visuals with his modular sound:


Take a 3D trip into experimental turntablism with V-A-C Moscow, Shiva Feshareki

Complex music conjures up radical, fluid architectures, vivid angles – why not experience those spatial and rhythmic structures together? Here’s insight into a music video this week in which experimental turntablism and 3D graphics collide.

And collide is the right word. Sound and image are all hard edges, primitive cuts, stimulating corners.

Shiva Feshareki is a London-born composer and turntablist; she’s also got a radio show on NTS. With a research specialization in Daphne Oram (there’s a whole story there, even), she’s made a name for herself as one of the world’s leading composers working with turntables as medium, playing the likes of the Royal Albert Hall with the London Contemporary Orchestra. Her sounds are themselves often spatial and architectural, too – not just taking over art spaces, but working with spatial organization in her compositions.

That makes a perfect fit with the equally frenetic jump cuts and spinning 3D architectures of visualist Daniel James Oliver Haddock. (He’s a man with so many dimensions they named him four times over.)

NEW FORMS, her album on Belfast’s Resist label, explores the fragmented world of “different social forms,” a cut-up analog to today’s sliced-up, broken society. The abstract formal architecture, then, has a mission. As she writes in the liner notes: “if I can demonstrate sonically how one form can be vastly transformed using nothing other than its own material, then I can demonstrate this complexity and vastness of perspective.”

You can watch her playing with turntables and things around and atop turntables on Against the Clock for FACT:

And grab the album from Bandcamp:

Shiva herself works with graphical scores, which are interpreted in the album art by artist Helena Hamilton. Have a gander at that edition:

But since FACT covered the sound side of this, I decided to snag Daniel James Oliver Haddock. Daniel also wins the award this week for “quickest to answer interview questions,” so hey kids, experimental turntablism will give you energy!

Here’s Daniel:

The conception formed out of conversations with Shiva about the nature of her work and the ways in which she approaches sound. She views sound as these unique 3D structures which can change and be manipulated, so I wanted to emulate that in the video. I was also interested in the drawings and diagrams that she makes to plan out different aspects of her performances, mapping out speakers and soundscapes. I thought they were really beautiful in a very clinical way, so again I wanted to use them as a staging point for the 3D environments.

I made about six environments in Cinema 4D which were all inspired by these drawings. Then I animated these quite rudimentary irregular polyhedrons in the middle to kind of represent various sounds.

Her work usually has a lot of sound manipulation, so I wanted the shapes to change and have variables. I ended up rendering short scenes in different camera perspectives and movements and also changing the textures from monotone to colour.

After all the Cinema 4D stuff, it was just a case of editing it all together! Which was fairly labour-intensive: the track is not only very long, but all the sounds have a very unusual tempo to them, some growing over time and then shortening; sounds change and get re-manipulated, so it was challenging to get everything cut well. I basically just went through second by second with the waveforms and matched sounds by eye. Once I got the technique down it moved quite quickly. I then got the idea to involve some found footage to kind of break apart the aesthetic a bit.

Of course, there’s a clear link here to Autechre’s Gantz Graf music video, the ur-video of all 3D music videos since. But then, there’s something really delightful about seeing those rhythms visualized when they’re produced live on turntables. The VJ in me just really wants to see the visuals as live performance. (Well, and to me, that’s easier to produce than the Cinema 4D edits!)

But it’s all a real good time at the audio/visual synesthesia experimental disco.

More:

Watch experimental turntablist Shiva Feshareki’s ‘V-A-C Moscow’ video [FACT]

https://www.shivafeshareki.co.uk/

https://resistbelfast.bandcamp.com/album/new-forms

Resist label


Live compositions on oscilloscope: nuuun, ATOM TM

The Well-Tempered vector rescanner? A new audiovisual release finds poetry in vintage video synthesis and scan processors – and launches a new AV platform for ATOM TM.

nuuun, a collaboration between Atom™ (raster, formerly Raster-Noton) and Americans Jahnavi Stenflo and Nathan Jantz, have produced a “current suite.” These are all recorded live – sound and visuals alike – in Uwe Schmidt’s Chilean studio.

Minimalistic, exposed presentation of electronic elements is nothing new to the Raster crowd, who are known for bringing this raw aesthetic to their work. You could read that as part punk aesthetic, part fascination with visual imagery, rooted in the collective’s history in East Germany’s underground. But as these elements cycle back, now there’s a fresh interest in working with vectors as medium (see link below, in fact). As we move from novelty to more refined technique, more artists are finding ways of turning these technologies into instruments.

And it’s really the fact that these are instruments – a chamber trio, in title and construct – that’s essential to the work here. It’s not just about the impression of the tech, in other words, but the fact that working on technique brings the different media closer together. As nuuun describe the release:

Informed and inspired by Scan Processors of the early 1970’s such as the Rutt/Etra video synthesizer, “Current Suite No.1” uses the oscillographic medium as an opportunity to bring the observer closer to the signal. Through a technique known as “vector-rescanning”, one can program and produce complex encoded wave forms that can only be observed through and captured from analog vector displays. These signals modulate an electron-beam of a cathode-ray tube where the resulting phosphorescent traces reveal a world of hidden forms. Both the music and imagery in each of these videos were recorded as live compositions, as if they were intertwined two-way conversations between sound and visual form to produce a unique synesthetic experience.

“These signals modulate an electron-beam of a cathode-ray tube where the resulting phosphorescent traces reveal a world of hidden forms.”

Even with lots of prominent festivals, audiovisual work – and putting visuals on equal footing with music – still faces an uphill battle. Online music distribution isn’t really geared for AV work; it’s not even obvious how audiovisual work is meant to be uploaded and disseminated apart from channels like YouTube or Vimeo. So it’s also worth noting that Atom™ is promising that NN will be a platform for more audiovisual work. We’ll see what that brings.

Of course, Carsten Nicolai (aka Alva Noto) already has a rich fine art / high-end media art career going with NOTON, and the “raster-media” launched by Olaf Bender in 2017 describes itself as a “platform – a network covering the overlapping border areas of pop, art, and science.” We’ve at least seen raster continue to present installations and other works, extending their footprint beyond just the usual routine of record releases.

There’s perhaps not a lot that can be done about the fleeting value of music in distribution, but then music has always been ephemeral. Let’s look at it this way – for those of us who see sound as interconnected with image and science, any conduit to that work is welcome. So watch this space.

For now, we’ve got this first release:

http://atom-tm.com/NN/1/Current-Suite-No-IVideo/

Previously:

Vectors are getting their own festival: lasers and oscilloscopes, go!

In Dreamy, Electrified Landscapes, Nalepa ‘Daytime’ Music Video Meets Rutt-Etra


Teenage Engineering OP-Z has DMX track for lighting, Unity 3D integration

The OP-Z may be the hot digital synth of the moment, but it’s also the first consumer music instrument to have dedicated features for live visuals. And that starts with lighting (DMX) and 3D visuals (Unity 3D).

One of various surprises about the OP-Z launch is this: there’s a dedicated track for controlling DMX. That’s the MIDI-like protocol that’s an industry standard for stage lighting, supported by lighting instruments and light boards.

Not a whole lot revealed here, but you get the sense that Teenage Engineering are committed to live visual applications:

There’s also integration with Unity 3D, for 2D and 3D animations you can sequence. This integration relies on MIDI, but they’ve gone as far as developing a framework for MIDI-controlled animations. Since Unity runs happily both on mobile devices and on beefy desktop rigs, it’s a good match both for doing fun things with your iOS display (which the OP-Z uses anyway) and for driving desktop machines with serious GPUs for more advanced AV shows.

Check out the framework so far on their GitHub:

https://github.com/teenageengineering/videolab
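Since the framework is driven by ordinary MIDI, you can poke at the idea even without an OP-Z in hand. Here’s a minimal Python sketch of my own (not part of videolab) that sends a note-plus-CC pattern out a virtual MIDI port using the mido library; the port name is a made-up example – point it at whatever port your Unity scene is listening on.

```python
# Minimal sketch: drive MIDI-mapped animations from code instead of an OP-Z sequencer.
# Assumes the mido + python-rtmidi packages; the port name below is hypothetical.
import time
import mido

PORT_NAME = "Unity In"  # hypothetical virtual port your Unity/videolab scene listens on

with mido.open_output(PORT_NAME, virtual=True) as port:
    for step in range(64):
        # A note-on might trigger an animation clip...
        port.send(mido.Message("note_on", note=36 + (step % 4), velocity=100, channel=0))
        # ...while a CC sweep might modulate a visual parameter (scale, color, etc.).
        port.send(mido.Message("control_change", control=1, value=(step * 4) % 128, channel=0))
        time.sleep(0.125)  # roughly 16th notes at 120 BPM
        port.send(mido.Message("note_off", note=36 + (step % 4), channel=0))
```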

We’ll talk to Teenage Engineering to find out more about what they’re planning here, because #createdigitalmotion.

https://teenageengineering.com/products/op-z


Vectors are getting their own festival: lasers and oscilloscopes, go!

It’s definitely an underground subculture of audiovisual media, but lovers of graphics made with vintage displays, analog oscilloscopes, and lasers are getting their own fall festival to share performances and techniques.

Vector Hack claims to be “the first ever international festival of experimental vector graphics” – a claim that is, uh, probably fair. And it’ll span two cities, starting in Zagreb, Croatia, but wrapping up in the Slovenian capital of Ljubljana.

Why vectors? Well, I’m sure the festival organizers could come up with various answers to that, but let’s go with because they look damned cool. And the organizers behind this particular effort have been spitting out eyeball-dazzling artwork that’s precise, expressive, and unique to this visceral electric medium.

Unconvinced? Fine. Strap in for the best. Festival. Trailer. Ever.

Here’s how they describe the project:

Vector Hack is the first ever international festival of experimental vector graphics. The festival brings together artists, academics, hackers and performers for a week-long program beginning in Zagreb on 01/10/18 and ending in Ljubljana on 07/10/18.

Vector Hack will allow artists creating experimental audio-visual work for oscilloscopes and lasers to share ideas and develop their work together alongside a program of open workshops, talks and performances aimed at allowing young people and a wider audience to learn more about creating their own vector based audio-visual works.

We have gathered a group of fifteen participants all working in the field from a diverse range of locations including the EU, USA and Canada. Each participant brings a unique approach to this exciting field and it will be a rare chance to see all their works together in a single program.

Vector Hack festival is an artist-led initiative organised with support from Radiona.org/Zagreb Makerspace as a collaborative international project alongside Ljubljana’s Ljudmila Art and Science Laboratory and Projekt Atol Institute. It was conceived and initiated by Ivan Marušić Klif and Derek Holzer with assistance from Chris King.

Robert Henke is featured, naturally – the Berlin-based artist and co-founder of Ableton and Monolake has spent recent years refining his skills at spinning his own code to control ultra-fine-tuned laser displays. But maybe what’s most exciting about this scene is discovering a whole network of people hacking into supposedly outmoded display technologies to find new expressive possibilities.

One person who has helped lead that direction is festival initiator Derek Holzer. He’s finishing a thesis on the topic, so we’ll get some more detail soon, but anyone interested in this practice may want to check out his open source Pure Data library. The Vector Synthesis library “allows the creation and manipulation of vector shapes using audio signals sent directly to oscilloscopes, hacked CRT monitors, Vectrex game consoles, ILDA laser displays, and oscilloscope emulation software using the Pure Data programming environment.”

https://github.com/macumbista/vectorsynthesis
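If you just want a feel for the principle the library builds on – the left and right audio channels become X and Y deflection once a scope is switched to X/Y mode – here’s a rough Python sketch of my own (not part of Vector Synthesis, which lives in Pure Data) that simulates what the screen would show for a simple two-sine signal:

```python
# Simulate what an X/Y oscilloscope would display for a pair of audio-rate signals:
# left channel drives X deflection, right channel drives Y.
# Assumes numpy and matplotlib; an illustration of the principle, not the Pd library itself.
import numpy as np
import matplotlib.pyplot as plt

SR = 48000
t = np.arange(SR // 10) / SR                     # 100 ms of signal is plenty for a still frame

x = np.sin(2 * np.pi * 200 * t)                  # "left channel": 200 Hz
y = np.sin(2 * np.pi * 300 * t + np.pi / 4)      # "right channel": 300 Hz, phase-offset

plt.figure(figsize=(5, 5))
plt.plot(x, y, linewidth=0.5, color="limegreen")  # the beam path, phosphor-style
plt.gca().set_facecolor("black")
plt.gca().set_aspect("equal")
plt.title("2:3 Lissajous figure, as an X/Y scope would draw it")
plt.show()
```

Holzer’s library does this sort of thing at audio rate in Pure Data, generating and manipulating whole families of shapes and sending the signals straight out to real displays.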

The results are entrancing – organic and synthetic all at once, with sound and sight intertwined (both in terms of control signal and resulting sensory impression). That is itself perhaps significant, as neurological research reveals that these media are experienced simultaneously in our perception. Here are just two recent sketches for a taste:

They’re produced by hacking into a Vectrex console – an early 80s consumer game console that used vector signals to manipulate a cathode ray screen. From Wikipedia, here’s how it works:

The vector generator is an all-analog design using two integrators: X and Y. The computer sets the integration rates using a digital-to-analog converter. The computer controls the integration time by momentarily closing electronic analog switches within the operational-amplifier based integrator circuits. Voltage ramps are produced that the monitor uses to steer the electron beam over the face of the phosphor screen of the cathode ray tube. Another signal is generated that controls the brightness of the line.
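To make that concrete, here’s a toy numerical model of the same idea (my simplification, nothing like a cycle-accurate Vectrex): the computer picks a rate per axis, closes the integrator “switch” for a set time, and the resulting voltage ramps steer the beam along straight line segments.

```python
# Toy model of an analog vector generator: per segment, set X/Y ramp rates via a "DAC",
# close the integrator switch for a duration, and the beam position ramps linearly.
beam = [0.0, 0.0]            # current beam position (arbitrary units)
trace = [tuple(beam)]

# (x_rate, y_rate, switch_closed_time) for each line segment of a simple box
segments = [(+1.0, 0.0, 1.0), (0.0, +1.0, 1.0), (-1.0, 0.0, 1.0), (0.0, -1.0, 1.0)]

DT = 0.01                    # integration time step
for x_rate, y_rate, duration in segments:
    for _ in range(int(duration / DT)):
        beam[0] += x_rate * DT   # integrator output = rate accumulated over time
        beam[1] += y_rate * DT
        trace.append(tuple(beam))

print(f"traced {len(trace)} beam positions, ending back near {trace[-1]}")
```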

Ted Davis is working to make these technologies accessible to artists, too, by developing a library for coding-for-artists tool Processing.

http://teddavis.org/xyscope/

Oscilloscopes, ready for interaction with a library by Ted Davis.

Ted Davis.

Here’s a glimpse of some of the other artists in the festival, too. It’s wonderful to watch new developments in the post-digital age, as artists produce work that innovates through deeper excavation of technologies of the past.

Akiras Rebirth.

Alberto Novell.

Vanda Kreutz.

Stefanie Bräuer.

Jerobeam Fenderson.

Hrvoslava Brkušić.

Andrew Duff.

More on the festival:
https://radiona.org/
https://wiki.ljudmila.org/Main_Page

http://vectorhackfestival.com/


Moving AV architectures of sine waves: Zeno van den Broek

Dutch-born, Danish-based audiovisual artist Zeno van den Broek continues to enchant with his immersive, minimalistic constructions. We talk to him about how his work clicks.

Zeno had a richly entrancing audiovisual release with our Establishment label in late 2016, Shift Symm. But he’s been prolific in his audiovisual work, with structures made of vector lines in sight and raw, chest-rattling sine waves in sound. It’s abstract and intellectual in the sense that there’s always a clear sense of form and intent – but it’s also visceral, both for the eyes and ears, as these mechanisms are set into motion, overlapping and interacting. They tug you into another world.

Zeno is joining a lineup of artists around our Establishment label tonight in Berlin – come round if you see this in time and happen to be in town with us.

But wherever you are, we want to share his work and the way he thinks about it.

CDM: So you’ve relocated from the Netherlands to Copenhagen – what’s that location like for you now, as an artist or individually?

Zeno: Yes, I’ve been living there for a little over two years now; it’s been a very interesting shift both personally and work-wise. Copenhagen is a very pleasant city to live in – it’s so spacious, green and calm. For my work, it took some more time to feel at home, since it’s structured quite differently from Holland, and interdisciplinary work isn’t as common as in Amsterdam or Berlin. I’ve recently joined a composers’ society, which is a totally new thing to me, so I’m very curious to see where this will lead in the future. Living in such a tranquil environment has enabled me to focus my work and to dive deeper into the concepts behind it. It feels like a good and healthy base to explore the world from – like being in Berlin these days!

Working with these raw elements, I wonder how you go about conceiving the composition. Is there some experimentation process, adjustment? Do you stand back from it and work on it at all?

Well, it all starts from the concepts. I’ve been adopting the ‘conceptual art’ practice more and more, by using the ideas as the ‘engine’ that creates the work.

For Paranon, this concept came to life out of the desire to deepen my knowledge of sine waves and interference, which always play a role in my art but often in a more instinctive way. Before I created a single tone of Paranon, I did more research on this subject and discovered the need for a structural element in time: the canon, which turned out to be a very interesting method for structuring sine wave developments and for creating patterns of interference that emerge from the shifting repetitions.

Based on this research, I composed canon structures for various parameters of my sine wave generators, such as frequency deviation and phase shifting, and movements of visual elements, such as lines and grids. After reworking the composition into Ableton, I pressed play and experienced the outcome. It doesn’t make sense to me to do adjustments or experiment with the outcome of the piece because all decisions have a reason, related to the concept. To me, those reasons are more important than if something sounds pleasant.

If I want to make changes, I have to go back to the concept, and see where my translation from concept to sound or image can be interpreted differently.
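To get a rough feel for that canon-of-sine-waves idea – this is my own toy sketch in Python, not Zeno’s Max patch – here’s a piece of code in which one voice’s slow frequency glide is restated by a second voice a few seconds later, so the shifted repetition beats and interferes with the original:

```python
# Toy "canon" of sine voices: voice 2 repeats voice 1's frequency glide a few seconds later,
# so the shifted copies interfere. Assumes numpy; writes a mono WAV via the wave module.
import wave
import numpy as np

SR = 44100
DUR = 20.0
t = np.arange(int(SR * DUR)) / SR

def glide_voice(delay, f_start=220.0, f_end=226.0, glide_time=12.0):
    """A sine voice that glides from f_start to f_end, entering after `delay` seconds."""
    local_t = np.clip(t - delay, 0.0, None)
    freq = f_start + (f_end - f_start) * np.clip(local_t / glide_time, 0.0, 1.0)
    phase = 2 * np.pi * np.cumsum(freq) / SR          # integrate frequency to get phase
    return np.where(t >= delay, np.sin(phase), 0.0)

mix = 0.4 * (glide_voice(delay=0.0) + glide_voice(delay=4.0))   # dux + comes, 4 s apart

with wave.open("sine_canon.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(SR)
    f.writeframes((mix * 32767).astype(np.int16).tobytes())
```

The beating you hear is entirely a consequence of the canonic delay plus the glide – the kind of cause-and-effect relationship between structure and result that Zeno describes composing with.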

There’s such a strong synesthetic element to how you merge audio and visual in all your works. Do you imagine visuals as you’re working with the sound? What do they look like?

I try to avoid creating an image based on the sound. To me, both senses and media are equally important, so I treat them equally in my methods, going from concept to creation. Because I work with fundamental elements in both the visuals and the sound — such as sine waves, lines, grids, and pulses — they create strong relationships and new, often unexpected, results appear from the merging of the elements.

Can you tell us a bit about your process – and I think this has changed – in terms of how you’re constructing your sonic and visual materials?

Yes, that’s true; I’ve been changing my tools to better match my methods. Because of my background in architecture, drawing was always the foundation of my work — to form structures and concepts, but also to create the visual elements. My audiovisual work Shift Symm was still mainly built up out of animated vector drawings in combination with generative elements.

But I’ve been working on moving to more algorithmic methods, because the connection to the concepts feels more natural and it gives more freedom, not being limited by my drawing ability and going almost directly from concept to algorithm to result. So I’ve been incorporating more and more Max in my Ableton sets, and I started using [Derivative] TouchDesigner for the visuals. So Paranon was completely generated in TouchDesigner.

You’ve also been playing out live a lot more. What’s evolving as you perform these works?

Live performances are really important to me, because I love the feeling of having to perform a piece at exactly that time and place, with all the tension of being able to f*** it up — the uncompromising and unforgiving nature of a performance. This tension, in combination with being able to shape the work to the acoustics of the venue, makes a performance into something much bigger than I can rationally explain. It means that in order to achieve this I have to really perform it live: I always give myself the freedom to shape the path a performance takes, to time various phrases and transitions and to be able to adjust many parameters of the piece. This does give a certain friction with the more rational algorithmic foundation of the work, but I believe this friction is exactly what makes a live performance worthwhile.

So on our release of yours Shift Symm, we got to play a little bit with distribution methods – which, while I don’t know if that was a huge business breakthrough, was interesting at least in changing the relationship to the listener. Where are you currently deploying your artwork; what’s the significance of these different gallery / performance / club contexts for you?

Yes our Shift Symm release was my first ‘digital only’ audiovisual release; this new form has given me many opportunities in the realm of film festivals, where it has been screened and performed worldwide. I enjoy showing my work at these film festivals because of the more equal approach to the sound and image and the more focused attention of the audience. But I also enjoy performing in a club context a lot, because of the energy and the possibilities to work outside the ‘black box’, to explore and incorporate the architecture of the venues in my work.

It strikes me that minimalism in art or sound isn’t what it once was. Obviously, minimal art has its own history. And I got to talk to Carsten Nicolai and Olaf Bender at SONAR a couple years back about the genesis of their work in the DDR – why it was a way of escaping images containing propaganda. What does it mean to you to focus on raw and abstract materials now, as an artist working in this moment? Is there something different about that sensibility – aesthetically, historically, technologically – because of what you’ve been through?

I think my love for minimal aesthetics comes from when I worked as an architect in programs like AutoCAD — the beautiful minimalistic world of the black screen, with the thin monochromatic lines representing spaces and physical structures. And, of course, there is a strong historic relation between conceptual art and minimalism with artists like Sol LeWitt.

But to me, it most strongly relates to what I want to evoke in the person experiencing my work: I’m not looking to offer a way to escape reality or to give an immersive blanket of atmosphere with a certain ambiance. I’m aiming to ‘activate’ by creating a very abstract but coherent world. It’s one in which expectations are being created, but also distorted the next moment — perspectives shift, and the audience only has these fundamental elements to relate to, which don’t have a predefined connotation but evoke questions, moments of surprise, and some insights into the conceptual foundation of the work. The reviews and responses I’m getting on a quite ‘rational’ and ‘objective’ piece like Paranon are surprisingly emotional and subjective; the abstract and minimalistic world of sound and images seemingly opens up and activates, while keeping enough space for personal interpretation.

What will be your technical setup in Berlin tonight; how will you work?

For my Paranon performance in Berlin, I’ll work with custom-programmed sine wave generators in [Cycling ’74] Max, for which the canon structures are composed in Ableton Live. These structures are sent as OSC messages, and the audio signal is routed to TouchDesigner for the visuals. On stage, I’m working with various parameters of the sound and image that control fundamental elements, where the slightest alteration has a big impact on the whole process.
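TouchDesigner will happily receive OSC out of the box (via its OSC In operators), so the plumbing Zeno describes is easy to prototype yourself. Here’s a minimal sketch using the python-osc package that streams a couple of slowly moving parameters to a listening patch; the port number and address names are made up for illustration, not taken from Zeno’s setup:

```python
# Stream control parameters to TouchDesigner (or anything else) over OSC.
# Assumes the python-osc package and an OSC In CHOP/DAT listening on port 7000.
# The address patterns below are hypothetical.
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)

start = time.time()
while time.time() - start < 30.0:        # stream for 30 seconds
    t = time.time() - start
    client.send_message("/canon/phase", math.sin(0.1 * math.tau * t))        # slow phase offset
    client.send_message("/grid/spacing", 0.5 + 0.5 * math.sin(0.03 * math.tau * t))
    time.sleep(1 / 60)                   # roughly frame-rate updates
```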

Any works upcoming next?

Besides performing and screening my audiovisual pieces such as Paranon and Hysteresis, I’m working on two big projects.

One is an ongoing concert series in the Old Church of Amsterdam, where the installation Anastasis by Giorgio Andreotta Calò filters all the natural light in the church into a deep red. In June, I performed a first piece in the church, for which I composed a short work for organ and church bells and re-amplified it in the space using the process made famous by Alvin Lucier’s “I Am Sitting in a Room” — slowly forming the organ and bells to the resonant frequencies of the church. In August, this will get a continuation in a collaboration with B.J. Nilsen, expanding on the resonant frequencies and getting deeper into the surface of the bells.

The other project is a collaboration with Robin Koek named Raumklang: with this project, we aim to create immaterial sound sculptures that are based on the acoustic characteristics of the location they will be presented in. Currently, we are developing the technical system to realize this, based on spatial tracking and choreographies of recording. In the last months, we’ve done residencies at V2 in Rotterdam and STEIM in Amsterdam and we’re aiming to present a first prototype in September.

Thanks, Zeno! Really looking forward to tonight!

If you missed Shift Symm on Establishment, here’s your chance:

And tonight in Berlin, at ACUD:

Debashis Sinha / Jemma Woolmore / Zeno van den Broek / Marsch

http://zenovandenbroek.com


Speaking in signal, across the divide between video and sound: SIGINT

Performing voltages. The notion is now familiar in synthesis – improvising with signals – but what about the dance between noise and image? Artist Oliver Dodd has been exploring the audiovisual modular.

Integrated sound-image systems have been a fascination of the avant-garde through the history of electronic art. But if there’s a return to the raw signal, maybe that’s born of a desire to regain a sense of fusion of media that can be lost in overcomplicated newer work.

Underground label Detroit Underground has had one foot in technology, one in audiovisual output. DU have their own line of Eurorack modules and a deep interest in electronics and invention, matching a line of audiovisual works. And the label is even putting out AV releases on VHS tape. (Well, visuals need some answer to the vinyl phonograph. You were expecting maybe laserdiscs?)

And SIGINT, Oliver Dodd’s project, is one of the more compelling releases in that series. It debuted over the winter, but now feels a perfect time to delve into what it’s about – and some of Oliver’s other, evocative work.

First, the full description, which draws on images of scanning transmissions from space, but takes place in a very localized, Earthbound rig:

The concept of SIGINT is based on the idea of scanning, searching, and recording satellite transmissions in the pursuit of capturing what appear to be anomalies as intelligent signals hidden within the transmission spectrum.

SIGINT represents these raw recordings, captured in their live, original form. These audio-video recordings were performed and rendered to VHS in real-time in an attempt to experience, explore, decipher, study, and decode this deeply evocative, secret, and embedded form of communication whose origins appear both alien and unknown, like paranormal imprints or reflections of inter-dimensional beings reflected within the transmission stream.

The amazing thing about this project is the synchronicities formed between the audio and the video in real time. By connecting with the aural and the visual in this way, one generates and discovers strange, new, and interesting communications and compositions between these two spaces. The Modular Audio/Video system allows a direct connection between the video and the audio, and vice versa. A single patch cable can span between the two worlds and create new possibilities for each. The modular system used for SIGINT was one 6U case of only Industrial Music Electronics (Harvestman) modules for audio and one 3U case of LZX Industries modules for video.

Videos:

Album:

CDM: I’m going through all these lovely experiments on your YouTube channel. How do these experiments come about?

Oliver: My Instagram and YouTube content is mostly just a snapshot of a larger picture of what I am currently working on, either that day, or of a larger project or work generally, which could be either a live performance, for example, or a release, or a video project.

That’s one hell of an AV modular system. Can you walk us through the modules in there? What’s your workflow like working in an audiovisual system like this, as opposed to systems (software or hardware) that tend to focus on one medium or another?

It’s a two-part system. There is one part that is audio (Industrial Music Electronics, or “Harvestman”), and one part that is video (LZX Industries). They communicate with each other via control voltages and audio-rate signals, and they can influence each other independently in both directions. For example, the audio can control the video, and the control voltages generated in the video system can also control sources in the audio system.

Many of the triggers and control voltages are shared between the two systems, which creates a cohesive audio/video experience. However, not every audio signal that sounds good — or produces a nice sound — looks good visually, and therefore, further tweaking and conditioning of the voltages are required to develop a more cohesive and harmonious relationship between them.

The two systems: the 3U (smaller) case on the left holds the Harvestman audio modules, and the 6U (taller) case on the right includes video-processing modules from LZX Industries. Cases designed by Elite Modular.

I’m curious about your notion of finding patterns or the paranormal in the content. Why is that significant to you? Carl Sagan gets at this idea of listening to noise in his original novel Contact (the main character listens to a washing machine at one point, if I recall). What drew you to this sort of idea – and does it only say something about the listener, or about the data, too?

Data transmission surrounds us at all times. There are always invisible frequencies flowing through the air, outside our ability to perceive them, as unobstructed as the air itself. We can only perceive a small fraction of these phenomena. There are limitations placed on our ability to perceive as humans, and there are more frequencies than we can experience. There are some frequencies we can experience, and some that we cannot. Perhaps the latter can move or pass through the range of perception, leaving a trail or trace or impressions on the frequencies that we can perceive as they pass through, and which we can then decode.

What about the fact that this is an audiovisual creation? What does it mean to fuse those media for a project?

The amazing thing about this project is the synchronicities formed between the audio and the video in real time. By connecting with the aural and the visual in this way, one generates and discovers strange, new, and interesting communications and compositions between these two spaces. The modular audio/video system allows direct connection between the video and the audio, and vice versa. A single patch cable can span between the two worlds and create new possibilities for each.

And now, some loops…

Oliver’s “experiments” series is transcendent and mesmerizing:

If this were a less cruel world, the YouTube algorithm would only feed you this. But in the meantime, you can subscribe to his channel. And ignore the view counts, actually. One person watching this one video is already sublime.

Plus, from Oliver’s gorgeous Instagram account, some ambient AV sketches to round things out.

More at: https://www.instagram.com/_oliverdodd/

https://detund.bandcamp.com/

https://detund.bandcamp.com/album/sigint


Inside a new immersive AV system, as Brian Eno premieres it in Berlin

“Hexadome,” a new platform for audiovisual performance and installation, began a world-hopping tour with its debut today – with Brian Eno and Peter Chilvers as the opening act.

I got the chance to go behind the scenes in discussion with the team organizing it, as well as some of the artists, to try to understand both how the system works technically and what the artistic intention behind launching a new delivery platform is.

Brian Eno and Peter Chilvers present the debut work on the system – from earlier today. Photo courtesy ISM.

It’s not that immersive projection and sound is anything new in itself. Even limiting ourselves to the mechanical/electronic age, there’s of course been a long succession of ideas in panoramic projection, spatialized audio, and temporary and permanent architectural constructions. You’ve got your local cineplex, too. But as enhanced 3D sound and image becomes more accessible through virtual and augmented reality on personal devices, the same enhanced computational horsepower is also scaling to larger contexts. And that means if you fancy a nice date night instead of strapping some ugly helmet on your head, there’s hope.

But if 3D media tech is as ubiquitous as your phone, cultural venues haven’t kept up. Here in Germany, there are a number of big multichannel arrays. But they’ve tended to be limited to institutions – planetariums, academies, and a couple of media centers. So art has remained somewhat frozen in time, stuck with single cinematic projections and stereo sound. The projection can get brighter, the sound can get louder, but very often those parameters stay the same. And that keeps artists from using space in their compositions.

A handful of spaces are beginning to change that around the world. An exhaustive survey I’ll leave for another time, but here in Germany, we’ve already got the Zentrum für Kunst und Medien (ZKM) in Karlsruhe and the 4DSOUND installation Monom in Berlin, each running public programs. (In 2014, I got to organize an open lab on the 4DSOUND while it was in Amsterdam at ADE, while also making a live performance on the system.)

The Hexadome is the new entry, launching this week. What makes it unique is that it couples visuals and sound in a single installation that will tour. Then it will make a round trip back to Berlin where the long-term plan is to establish a permanent home for this kind of work. It’s the first project of an organization dubbing itself the Institute for Sound and Music, Berlin – with the hope that name will someday grace a permanent museum dedicated to “sound, immersive arts, and electronic music culture.”

For now, ISM just has the Hexadome, so it’s parked in the large atrium of the Martin Gropius Bau, a respected museum in the heart of town.

And it’s launching with a packed program – a selection of installation-style pieces, plus a series of live audiovisual performances. On the live program:

Michael Tan’s design for CAO.

Holly Herndon & Mathew Dryhurst
Tarik Barri
Lara Sarkissian & Jemma Woolmore
Frank Bretschneider & Pierce Warnecke
Ben Frost & MFO
Peter van Hoesen & Heleen Blanken
CAO & Michael Tan
René Löwe & Pfadfinderei

Brian Eno’s installation launches a series of works that simply play back on the system, though the experience is still similar – you wander in and soak in projected images and spatial sound. The other artists all contributed installation versions of their work, plus a collaboration between Tarik Barri and Thom Yorke.

But before we get to the content, let’s consider the system and how it works.

Hexadome technology

The two halves of the Hexadome describe what this is – it’s a hexagonal projection arrangement, plus a dome-shaped sound array.

I spoke to Holger Stenschke, Lead Support Technician, from ZKM Karlsruhe, as well as Toby Götz, director of the Pfadfinderei collective. (Toby doubles up here, as the designer of the visual installation, and as one of the visual artists.) So they filled me in both on technical details and the intention of the whole thing.

Projection. The visuals are the simpler part to describe. You get six square projection screens, arranged in a hexagon, with large gaps in between. These are driven by two new iMacs Pro – currently the top of Apple’s range as of this launch – supplemented by still more external GPUs connected via Thunderbolt. MadMapper runs on the iMacs, and then the artists are free to fill all those pixels as needed. (Each screen is a little less than 4K resolution – so multiply that by six. Some shows will actually require both iMacs Pro.)

Jemma Woolmore shares this in-progress image of her visuals, as mapped to those six screens.

Sound. In the hemispherical sound array, there are 52 Meyer Sound speakers, arranged on a frame that looks a bit like a playground jungle gym. Why 52? Well, they’re arranged into a triangular tessellation around the dome. That’s not just to make this look impressive – it means that the sound dispersal from the speakers lines up in such a way that you cover the space with sound.

The speakers also vary in size. There are three subwoofers, spaced around the hexagonal periphery, bigger speakers with more bass toward the bottom, and smaller, lighter speakers overhead. In Karlsruhe, where ZKM has a permanent installation, more of the individual speakers are bigger. But the Hexadome is meant to be portable, so weight counts. I can also attest from long hours experimenting on 4DSOUND that for whatever reason, lower frequency sounds seem to make more sense to the ears closer to the ground, and higher frequency sounds overhead. There’s actually no obvious reason for this – researchers I’ve heard who investigated how we localize sound find there’s no significant difference in how well we can localize across frequency range. (Ever heard people claim it doesn’t matter where you put a subwoofer? They’re flat out wrong.) So it’s more an expectation of experience than anything else, presumably. (Any psychoacoustics researchers wanting to chime in on comments, feel free.)
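That triangular tessellation is, incidentally, exactly the geometry amplitude-panning schemes like VBAP work with: find the speaker triplet surrounding the source direction and solve a small linear system for three gains. I don’t know what the Hexadome’s engines use internally, so treat this as a generic sketch of the math rather than a description of Zirkonium or Spat:

```python
# Generic 3D vector base amplitude panning (VBAP) for one speaker triplet.
# Speaker and source directions are unit vectors from the listener; not tied to any specific rig.
import numpy as np

def vbap_gains(source_dir, speaker_triplet):
    """Return per-speaker gains for a source direction inside a triplet of speakers."""
    L = np.array(speaker_triplet, dtype=float)            # rows: the three speaker unit vectors
    g = np.array(source_dir, dtype=float) @ np.linalg.inv(L)
    if np.any(g < -1e-9):
        raise ValueError("source lies outside this speaker triplet")
    return g / np.linalg.norm(g)                          # normalize so overall level stays constant

# Three speakers: two at ear level, one overhead (made-up positions).
triplet = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
source = np.array([0.5, 0.5, 0.7])
source /= np.linalg.norm(source)

print(vbap_gains(source, triplet))   # loudest from the overhead speaker, a bit from the other two
```

A full engine also has to pick the right triplet for each direction across the whole dome and add spreading and distance cues on top; this is just the kernel of the idea.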

Audio interfaces. MOTU are all over this rig, because of AVB. AVB is the standard (IEEE, no less) for pro audio networking, letting you run sound over Ethernet connections. AVB audio interfaces from MOTU are there to connect to an AVB network that drives all those individual speakers.

Sound spatialization software. Visualists here are pretty much on their own – your job is to fill up those screens. But on the auditory side, there’s actually some powerful and reasonably easy to understand software to guide the process of positioning sound in space.

It’s actually significant that the Hexadome isn’t proprietary. Whereas the 4DSOUND system uses its own bespoke software and various Max patches, the Hexadome is built on some standard tools.

Artists have a choice between IRCAM’s Panoramix and Spat, and ZKM’s Zirkonium.

IRCAM Spat.

ZKM Zirkonium – here, a screenshot of the work of Lara Sarkissian (in collaboration with Jemma Woolmore). Thanks, Lara, for the picture in progress! (The artists have been in residence at ZKM working away on this.)

On the IRCAM side, there’s not so much one toolchain as a bunch of smaller tools that work in concert. Panoramix is the standalone mixing tool an artist is likely to use, and it works with, for example, JACK (so you can pipe in sound from your software of choice). Then Spat comprises a Max/MSP implementation of IRCAM’s spatialization, perception, and reverb tools. Panoramix is deep software – you can choose, per sound source, among various spatialization techniques, and the reverb and other processors are capable of some terrific sonic effects.

Zirkonium is what the artists on the Hexadome seemed to gravitate toward. (Residencies at ZKM offered mentorship on both tools.) It’s got a friendly, single UI, and it’s free and open source. (Its sound engine is actually built in Pure Data.)

Then it’s a matter of whether the works are made for an installation, in which case they’re typically rendered (“freezing” the spatialization information) and played back in Reaper, or whether they’re played live. For live performance, artists might choose to control the spatialization engine by sending OSC data, using some kind of tool as a remote control (an iPad, for example).

I’ve so far only heard Brian Eno’s piece (both the sound check the other day and the installation), but the spatialization is already convincing. Spatialization will always work best when there are limited reflections from the physical space. The more reflected sound reaches your ear, the harder it is to localize the sound source. (The inverse is true, as well: the reason adding reverberation to part of a mix seems to make it more distant in the stereo field is that you already recognize that you hear more direct sound from sources that are close and more reflected sound from sources that are far away.)

Holger tells CDM that the team worked to mitigate this effect by precisely positioning speakers in such a way that, once you’re inside the “dome” area, you hear mainly direct sound. In addition, a multichannel reverb like the IRCAM plug-in can be used to tune virtualized early reflections, making reverberation seem to emanate from beyond the dome.

In Eno’s work, at least, you have a sense of being enveloped in gong-like tones that emerge from all directions, across distances. You hear the big reverb tail of the building mixed in with that sound, but there’s a blend of virtual and real space – and there’s still a sense of precise distance between sounds in that hemispherical field.

That’s hard to describe in words, but think about the leap from mono to stereo. While mono music can be satisfying to hear, stereo creates a sense of space and makes individual elements more distinct. There’s a similar leap when you go to these larger immersive systems, and more so than the cartoonish effects you tend to get from cinematic multichannel – or even the wrap-around effects of four-channel.

What does it all mean?

Okay, so that’s all well and good. But everything I’ve described – multi-4K projection, spatial audio across lots of speakers – is readily available, with or without the Hexadome per se. You can actually go download Zirkonium and Panoramix right now. (You’ll need a few hundred euros if you want plug-in versions of all the fancy IRCAM stuff, but the rest is a free download, and ZKM’s software is even open source.) You don’t even necessarily need 50 speakers to try it out – Panoramix, for instance, lets you choose a binaural simulation for trying stuff out in headphones, even if it’ll sound a bit primitive by comparison.

The Hexadome for now has two advantages: one, this program, and two, the fact that it’s going mobile. Plus, it is a particular configuration.

The six square screens may at first seem unimpressive, at least in theory. You don’t get the full visual effect that you do from even conventional 180-degree panoramic projection, let alone the ability to fill your visual field as full domes or a big IMAX screen can do. Speaking to the team, though, I understood that part of the vision of the Hexadome was to project what Toby calls “windows.” And because of the brightness and contrast of each, they’re still stunning when you’re there with them in person.

This fits Eno’s work nicely, in that his collaboration with Peter Chilvers projects gallery-style images into the space, like slowly transforming paintings in light.

The gaps between the screens and above mean that you’re also aware of the space you’re in. So this is immersive in a different sense, which fits ISM’s goal of inserting these works in museum environments.

How that’s used in the other works, we’ll get to see. Projection, it seems, is a game of tradeoffs – domes give you greater coverage and real immersion, but distort images and also create reflections in both sound and light. (On the other hand, domes have also been in architectural use for centuries, as have rectangles, so don’t expect either to go away!)

The question “why” is actually harder to answer. There wasn’t a clear statement of mission from ISM and the Hexadome and its backers – this thing is what it is because they wanted it to be what it is, essentially. There’s no particular curatorial theme to the works. They touted some diversity of established and emerging artists. Though just about anyone may seem like they’re emerging next to Eno, that includes both local Berlin and international artists from Peru and the USA, among others, and a mix of ages and backgrounds.

The overall statement launching the Hexadome was therefore of a blank canvas, which will be approached by a range of people. And Eno/Chilvers made it literally seem a canvas, with brilliantly colored, color field-style images, filling the rectangles with oversized circles and strips of shifting color. Chilvers uses a slightly esoteric C++ raytracing engine, generating those images in realtime, in a digital, generative, modern take on the kind of effects found in Georges Seurat. Eno’s sounds were precisely what you’d expect – neutral chimes and occasional glitchy tune fragments, floating on their own or atop gentle waves of noise that arrive and recede like a tide. Organic if abstract tones resonated across the harmonic series, in groupings of fifths. Both image and sound are, in keeping with Eno’s adherence to stochastic ideas, produced in real-time according to a set of parameters, so that nothing ever exactly repeats. These are not ideas originated by Eno – stochastic processes and chance operations obviously have a long history – but as always, his rendition is tranquil and distinctively his own.

In a press conference this morning, Eno said he’d adjusted that piece until the one we heard was the fifth iteration – presumably an advantage of a system that’s controlled by a set of parameters. (Eno works with a set of software that allows this. You can try similar software, and read up on the history of the developers’ collaboration with the artist, at the Intermorphic site.)

What strikes me about beginning with Eno is that it sets a controlled tone for the installation. Eno/Chilvers’ aesthetic was at home on this system; the arrangement of screens fit the set of pictures, and Eno’s music is organic enough that when it’s projected into space, it seems almost naturally occurring.

And Eno in a press conference found a nice metaphor for justifying the connection of the Hexadome’s “windows” to the gallery experience. He noted that Chilvers’ subtly-shifting, ephemeral color compositions exploded the notion of painting or still image as something that could be consumed as a snapshot. Effectively, their work suggests the raison d’etre that the ISM curators seemed unable to articulate. The Hexadome is a canvas that incorporates time.

But that also raises a question. If spatial audio and immersive visuals have often been confined to institutions, this doesn’t so much liberate them as make a statement as to how museums can capitalize on deploying them. An inward-looking set of square images also seems firmly rooted in the museum frame (literally).

And the very fact that Eno’s work is so comfortable sets the stage for some interesting weeks.

Now we’ll see whether the coming lineup can find any subversive threads with the same setup, and in the longer run, what comes of this latest batch of installations. Will these sorts of setups incubate new ideas – especially as there’s a mix of artists and engineers engaged in the underlying tech? Or will spend-y installations like the Hexadome simply be a showy way to exhibit the tastes of big institutional partners? With some new names on that lineup for the coming weeks, I think we’ll at least get some different answers to where this could go. And looking beyond the Hexadome, the power of today’s devices to drive spatialization and visuals more easily means there’s more to come. Stay tuned.

Institute for Sound and Music, Berlin

Martin Gropius Bau: Program – ISM Hexadome

In coming installments, I’ll look deeper at some of these tools, and talk to some of the up-and-coming artists doing new things with the Hexadome.


Let’s talk craft and vision in live audiovisual performance, media art

We’re gathering with top digital media artists this week – and you can tune in. Here’s a preview of their work, on the eve of Lunchmeat Festival, Prague.

Transmedia work and live visual performance exist at sometimes awkward intersections, caught between economies of the art world and music industry, between academia and festivals. They mix techniques and histories that aren’t always entirely compatible – or at least that can be demanding in combination. But the fields of media art and live visuals also represent areas of tremendous potential for innovation – where artists can explore immersive media, saturate senses, and apply buzzword-friendly technologies from AI to VR in experimental, surprising ways.

Our goal: bring together some artists for some deep discussion. And we have a great venue in which to do it. Prague’s Lunchmeat Festival has exploded on the international scene. Even sandwiched against Unsound Festival in Krakow and ADE in Amsterdam, it’s started to earn attention and big lineups, thanks to the intrepid work of an underground Czech collective. (The rest of the year, the Lunchmeat crew can usually be found doing installations and live visual club work of their own.)

Heck, even the fact that I’m stumbling over how to word this says something about the hybrid forms we’re describing, from live cinema to machine learning-infused art.

Since most of you won’t be in Prague this week, we’ll livestream and archive those conversations for the whole world.

Follow the event on Facebook for the schedule and add CDM to your Facebook likes to get a notification when our video starts, and stay tuned to CDM for the latest updates.

To whet your appetite (hopefully), here’s a look at the cast of characters involved:

Katerina Blahutova [DVDJ NNS]

Let’s start for a change with the home Prague team. Katerina is a great example of a new generation of artists coming from outside conventional disciplinary pathways. She graduated in architecture and urbanism, then shifted that interest (consciously or otherwise) to transforming whole club and performance environments. She’s been a VJ and curator with Lunchmeat, designed releases and videos for Genot Centre (as well as doing graphic design for bands), then went on to co-found the LOLLAB collective and tour with MIDI LIDI.

Don’t miss her poppy, saturated, post-Internet surrealism – hyperreality with concoctions of slime and object, opaque luminosities and lushly-colored, fragmented textures. (I can rip off this bit of the program; I wrote it originally!)

Oh yeah, and she made this nice teaser loop for this week’s festivities:

teaser loop from upcoming vj set for @malumzkole at @lunchmeat_cz #dvdjnns #wip


Ignazio Mortellaro [Stroboscopic Artefacts, Roots in Heaven]

Turn that saturation knob all the way down again, and step into the world of Stroboscopic Artefacts. Ignazio is the visual imagination behind all of that label’s distinctive look, from album design (as beautifully exhibited) to videos. He’ll be talking to us about that ongoing collaboration.

In addition, Ignazio is doing live visuals for a fresh project. Allow me to quote myself:

Roots in Heaven, a label owner and accomplished solo artist hidden behind a mesh mask and feathers, joins visualist Ignazio Mortellaro to present a new live audiovisual work. This comes on the heels of this year’s Roots in Heaven debut record “Petites Madeleines” (a Proust reference), out on K7! offshoot Zehnin. The result is a journey into “concentrated sensory impression” in sound, light, and sensation.

Gregory Eden [Clark]

One of the goals Lunchmeat’s curators and I discussed was elevating the visibility of people working on visual materials. But unlike the ‘front man’/’front woman’ role of a lot of the music artists, the position some of these people fill goes beyond just sole artist to broader management and production. Maybe that’s even more reason to pay attention to who they are and how they work.

Greg Eden, who’s at Lunchmeat with Clark, is a great example. With a university physics degree, he went on to Warp, where he developed Clark and Boards of Canada. He’s now full-time managing Clark, and in addition to that … uh, full-time job … manages Nathan Fake (with visuals by Flat-e) and Gajek and Finn McNicholas.

Visuals are often synonymous with just “something on a projector,” live cinema-style. But Clark’s show is a full-on stage show. For the stage adaptation of Death Peak, the artist works with choreographer Melanie Lane, dancers Kiani Del Valle and Sophia Ndaba, and lights from London’s Flat-E. Think of it as rave theater. That makes Greg’s role doubly interesting, as someone has to pull all of this together:

Novi_sad [with Ryoichi Kurokawa, SIRENS]

The collaboration between Novi_sad and Ryoichi Kurokawa is one of the more important ones of the moment, its nervous, quivering economic data visualization a fitting expression of our anxious zeitgeist. Here’s a glimpse of that work:

Ryoichi Kurokawa and Novi_sad have worked together to produce an audiovisual show in five etudes that produces a dramaturgy of data, weaving the numbers of the economic downturn into poignant, emotional narrative. Data and sound quiver and dematerialize in eerie, mournful tableaus, re-imagining the sound works of Richard Chartier, CM von Hausswolff, Jacob Kirkegaard, Helge Sten, and Rebecca Foon. Novi_sad is self-taught composer Thanasis Kaproulias, himself coming not only from the nation that has borne the brunt of Europe’s crisis, but holding a degree in economics. As a perfect foil to his sonic landscapes, Japan’s Ryoichi Kurokawa has made a name in expressive, exposed digital minimalism.

Marcel Weber (MFO) [Ben Frost] / Theresa Baumgartner [Jlin]

Ben Frost is already interesting from a collaborative standpoint, having worked with media like dance (Chunky Move, Wayne McGregor). The collaboration with MFO brings him together with one of Europe’s leading visual practitioners; Marcel will join us to talk about that – and hopefully about his work for the likes of Berlin Atonal Festival, as well.

MFO has also designed the visuals for the sensational Jlin, but Theresa Baumgartner is touring with it – as well as working on production for Boiler Room. So, we have Theresa joining us from something of the in-the-trenches production perspective, as well.

Gene Kogan

VJing and live cinema are rooted in conventional compositing and processing. Even when they’re digital, we’re talking techniques mostly developed decades ago.

For something further afield, Gene Kogan will take us on a journey into deep generative work, machine learning and the new aesthetics that become possible with it. As AI begins to infuse itself with digital media, artists are indeed grappling with its potential. Gene is offering talks and workshops both here at Lunchmeat and at Ableton Loop next month, so now is a great time to check in with him. A bit about him:

Gene Kogan is an artist and a programmer who is interested in generative systems, artificial intelligence, and software for creativity and self-expression. He is a collaborator within numerous open-source software projects, and leads workshops and demonstrations on topics at the intersection of code and art. Gene initiated and contributes to ml4a, a free book about machine learning for artists, activists, and citizen scientists. He regularly publishes video lectures, writings, and tutorials to facilitate a greater public understanding of the topic.

I’ll be reviewing the resources he has for artists soon, too, so do stay tuned.

Gabriela Prochazka

Also coming from Prague, Gabriela has been guiding the INPUT program for Lunchmeat this fall, as well as being one of my collaborators (our installation is part of the exhibition this week). Its contents are mysterious so far, but a live AV work with Gabriela and Dné is also on tap.

See you in Prague or on the Internet, everyone!

Follow the event on Facebook for the schedule and add CDM to your Facebook likes to get a notification when our video starts, and stay tuned to CDM for the latest updates.

http://lunchmeatfestival.cz/2017/


Learn how to make trippy oscilloscope music with this video series

Call it stimulated synesthesia: there’s something really satisfying when your brain sees and hears a connection between image and sound. And add some extra magic when the image is on an oscilloscope. A new video series on YouTube shows you how to make this effect yourself.

Jerobeam Fenderson has begun a series on so-called “oscilloscope music.” The oscilloscope isn’t making the sounds – that’s the opposite of how an oscilloscope works, as a signal visualization device. Instead, you design audio signals that double as eye candy when the scope plots them – and that still hold up as appropriately edgy, minimal music – and you get, well, this:

Oooh, my, that’s tasty. Like biting into a big, juicy ripe [vegetarian version] tomato [meat-lovers version] raw steak.

So in the tutorial series, Fenderson clues us in to how he makes all this happen. And this could be an economical thing to play around with, as you’ll often find vintage oscilloscopes around a studio or on sale used.
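The core trick, if you want a taste before diving into his tools: put the scope in X/Y mode and feed it a stereo signal where the left channel is X and the right channel is Y; whatever path the samples trace becomes the picture. Here’s a rough Python sketch of that idea (mine, not Fenderson’s own method) that loops a triangle outline fast enough to read as both a steady image and a 100 Hz tone:

```python
# Trace a triangle on an oscilloscope in X/Y mode by looping its outline at audio rate.
# Assumes numpy; writes a stereo WAV (left = X, right = Y) to play into the scope.
import wave
import numpy as np

SR = 48000
CYCLE_HZ = 100                       # the whole shape is redrawn 100 times per second (heard as 100 Hz)
corners = np.array([(-0.8, -0.6), (0.8, -0.6), (0.0, 0.8), (-0.8, -0.6)])

# Interpolate along the outline so one full trip around the triangle = one cycle.
samples_per_cycle = SR // CYCLE_HZ
u = np.linspace(0, len(corners) - 1, samples_per_cycle)
xy = np.stack([np.interp(u, np.arange(len(corners)), corners[:, k]) for k in (0, 1)], axis=1)

xy = np.tile(xy, (CYCLE_HZ * 5, 1))                 # five seconds of the loop
pcm = (xy * 32767 * 0.8).astype(np.int16)           # leave a little headroom

with wave.open("triangle_scope.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)        # 16-bit samples
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```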

Don’t miss the description on YouTube – there are tons of resources in there; it’s practically a complete bibliography on the topic in itself.

Part 2:

Plus, accompanying this series is an additional video and Max for Live patch demonstrating aliasing and sample rate, covered today on the CDM Newswire / Gear (our new home for breaking short-form news):
This Max for Live patch demonstrates critical digital audio concepts

More:
http://oscilloscopemusic.com/
