Inside the immersive kinetic laser sound world of Christopher Bauder, Robert Henke

Light and sound, space and music – Christopher Bauder and Robert Henke continue to explore immersive integrated AV form. Here’s a look into how they create, following a new edition of their piece Deep Web.

Here’s an extensive interview with the two artists by EventElevator, including some gorgeous footage from Deep Web.

Deep Web premiered in 2016 at CTM Festival, but it returned this summer to the space for which it was created, Berlin’s Kraftwerk (a former power plant). And because both artists are such obsessive perfectionists – in technology, in formal refinement – it’s worth this second trip.

Christopher (founder of kinetic lighting firm WHITEvoid) and Robert (also known as Monolake and co-creator of Ableton Live) have worked together for a long time. A decade ago, I got to see (and document) ATOM at MUTEK in Montreal, which in some sense would prove a kind of study for a work like Deep Web. ATOM tightly fused sound and light, as mechanically-controlled balloons formed different arrangements in space. The array of balloons became almost like a kind of visualized three-dimensional sequencer.

Deep Web is on a grander scale, but many of the basic elements remain – winches moving objects, lights illuminating objects, spatial arrangements, synchronized sound and light, a free-ranging and percussive musical score with an organic, material approach to samples reduced to their barest elements and then rearranged. The dramaturgy is entirely abstract – a kind of narrative about an array and its volumetric transformations.

In Deep Web, color and sound create the progression of moods. At the live show I saw last weekend, Robert, jazzed on performance endorphins, was glad to chat at length with some gathered fans about his process. The “Deep Web” element is there, as a kind of collage of samples of information age collapsed geography. The sounds are disguised, but there are bits of cell phones, telecommunications ephemera, airport announcements, made into a kind of encoded symphony.

Photo: Ralph Larmann.
Photo: Ralph Larmann.

Whether you buy into this seems down to whether the artists’ particular take tickles your synesthesia and strikes some emotional resonance. But there is something balletic about this precise fusion of laser lines and globes, able to move freely through the architecture. Kraftwerk will again play host later this month to Atonal Festival, and that meeting of music and architecture is by contrast essentially about the void. One somber vertical projection rises like a banner behind the stage, and the vacated power plant is mostly empty vibrating air. Deep Web, by contrast, occupies and electrifies that unused volume.

I spoke to Christopher to find out more about how the work has evolved and is executed.

Christopher and Robert at the helm of the show’s live AV controls. Photo: Christopher Bauder.
Lasers and equipment, from the side. Photo: Peter Kirn.

Robert’s music is surprisingly improvisational in the live performance versions of the piece. You could feel that the night I was there – even as Robert’s style is as always reserved, there’s a sense of flowing expression.

To create these delicate arrangements of lit globes and laser lines, Christopher and his team at WHITEvoid plan extensively in Rhino and Vectorworks – the architectural scoring that comes before the performance. The visual side is controlled with WHITEvoid’s own kinetic control software, KLC, which is built on the industry-leading visual development / dataflow environment TouchDesigner.

Robert’s rig is Ableton Live, controlled by fader and knob boxes. There is a master timeline in Live – that’s the timeline Robert refers to, and it’s different from his usual performance paradigm, as I understand it. That timeline in turn has “loads of automation parameters” that connect Live’s music arrangement to TouchDesigner’s visual control. But Robert can also change and manipulate these elements as he plays, with the visuals responding in time.
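Live and TouchDesigner don’t share a native link; automation typically crosses over as MIDI or OSC. As a rough sketch of that plumbing (the `/laser/speed` address, port, and parameter are invented for illustration, not WHITEvoid’s actual KLC mapping), here’s one automation value packed into an OSC message and sent where a TouchDesigner OSC In CHOP could pick it up:

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Pack a minimal OSC message carrying a single float argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

# send a normalized automation value (0.0-1.0) over UDP
msg = osc_message("/laser/speed", 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 7000))  # port must match the OSC In CHOP
```

Each automation lane in Live would get its own address; on the other end, TouchDesigner maps incoming channels onto whatever the current preset exposes.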

The machines that make the magic happen. Photo: Christopher Bauder.
Photo: Christopher Bauder.

Different visual scenes load as presets. Each preset then has different controllable parameters – most have around ten available for realtime operation, Christopher tells CDM.

“[Visual parameters] can be speeds, colors, selection of lasers, individual parameters like seed number, range, position, etc.,” Christopher says. “In one scene, we are linking acceleration of a continuously running directional laser pattern to a re-trigger of a beat. So options are virtually endless. It’s almost never just on/off for anything – very dynamic.”
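That beat-to-acceleration link can be modeled as a simple retrigger envelope: each beat kicks the pattern speed up to a peak, which then decays back toward a base rate. A generic sketch (the constants are arbitrary, not values from the show):

```python
def speed_curve(frames, beat_frames, base=1.0, boost=3.0, decay=0.85):
    """Per-frame pattern speed: each beat retriggers a jump to `boost`,
    which then decays exponentially back toward `base`."""
    speed, out = base, []
    for f in range(frames):
        if f in beat_frames:
            speed = boost                      # a beat retriggers the jump
        out.append(speed)
        speed = base + (speed - base) * decay  # fall back toward base rate
    return out

curve = speed_curve(16, beat_frames={0, 8})
```

The same curve could just as well drive color or laser selection; the point is that a musical event feeds a continuous visual parameter rather than a simple on/off.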

This question of light and space as instrument, I think, merits deeper and broader exploration. WHITEvoid are one of a handful of firms and artists developing that medium, both artistically and technically, in a fairly tight-knit community of people around the world. Stay tuned; I hope to pay them another visit and talk to some of the other artists working in this direction.

You can check their work (and their tech) at their site:

https://www.whitevoid.com/

Christopher also provided some unique behind-the-scenes shots for us here, along with images that reveal the attention to pattern and form.

The Red Balloon. Photo: Ralph Larmann.
Ralph Larmann.
Ralph Larmann.
Ralph Larmann.
Ralph Larmann.
Christopher Bauder.
Peter Kirn.

Previously:

The post Inside the immersive kinetic laser sound world of Christopher Bauder, Robert Henke appeared first on CDM Create Digital Music.

Explore the visual wonders of TouchDesigner: Summit, free workshop video

Few software tools have proved as expressive in generative visuals or audiovisual performance as TouchDesigner. Get introduced to its AV powers in a new, free video – or if you can make it to Montréal, get the full experience live.

TouchDesigner is a dataflow tool – a graphical, patchable development environment – uniquely suited to squeezing gorgeous eye candy out of your computer graphics card. It’s also special for being musical and modular. It’s pretty enough that I’ve seen its actual zoomable UI displayed as art in performances, but whether or not you share that with the audience, it’s a kind of digital, graphical counterpart to the renewed love of cables and patching in sound.

Russian-born, Berlin-based Stanislav Glazov has gone deep into that world both as a teacher and as an artist. (You can catch his visuals this week as part of the UY ZONE, a fashion-meets-performance immersive environment inside Berghain in Berlin, or as a solo artist or working with techno legend Dasha Rush around Europe and Russia.)

Stas is happy to help you decipher the mysterious arts of TouchDesigner work yourself in his online workshop series. But you’ll probably want to start at the beginning – or even if you have some TouchDesigner background, better understand Stas’ take on it. Over the weekend, he led a free online workshop, and now you can watch at your leisure on YouTube:

If that taste has you excited, though, you might want to think about being in Montréal in August – timed perfectly with the massive MUTEK festival.

It’s not the first time there’s been an event around this tool, but this is surely the biggest. The day program alone features:

  • 350 participants
  • 69 presenters
  • 45 workshops
  • 21 talks

And that’s all in 3 days, packed onto the Coeur des Sciences / UQAM campus. The organizers describe it as “an intensive forum and stimulating meeting ground for the TouchDesigner community to share knowledge and experiences, learn new skills, connect in person with your favorite TD mentors and peeps and make a lot of new friends and collaborators.”

The night program promises still more, with an “after dark” social program featuring 404.zero, ELEMAUN / Ali Phi, our friend Procedural, and Woulg.

TouchDesigner for its part has been expanding with lots of new features, including a specialized module for performing with lasers. That in turn is being used in the incredible collaboration of Robert Henke and Christopher Bauder – hope to cover that more soon:

Full details:

https://2019.touchdesignersummit.com/

And to check out Stas’ paid video courses:

https://lichtpfad.selz.com/

Image at top – deadmau5, prepping as his live show is built in TouchDesigner. Find lots more inspiration like this on the blog – I could page through that all day:

https://www.derivative.ca/Blog/

The post Explore the visual wonders of TouchDesigner: Summit, free workshop video appeared first on CDM Create Digital Music.

In Adversarial Feelings, Lorem explores AI’s emotional undercurrents

In glitching collisions of faces, percussive bolts of lightning, Lorem has ripped open machine learning’s generative powers in a new audiovisual work. Here’s the artist on what he’s doing, as he’s about to join a new inquisitive club series in Berlin.

Machine learning that derives gestures from System Exclusive MIDI data … surprising spectacles of unnatural adversarial neural nets … Lorem’s latest AV work has it all.

And by pairing producer Francesco D’Abbraccio with a team of creators across media, it brings together a serious think tank of artist-engineers pushing machine learning and neural nets to new places. The project, as he describes it:

Lorem is a music-driven multidisciplinary project working with neural networks and AI systems to produce sounds, visuals and texts. In the last three years I had the opportunity to collaborate with AI artists (Mario Klingemann, Yuma Kishi), AI researchers (Damien Henry, Nicola Cattabiani), video artists (Karol Sudolski, Mirek Hardiker) and music instrument designers (Luca Pagan, Paolo Ferrari) to produce original materials.

Adversarial Feelings is the first release by Lorem: a 22-minute AV piece plus nine music tracks and a book. The record will be released on April 19th on Krisis via Cargo Music.

And what about achieving intimacy with nets? He explains:

Neural networks are nowadays widely used to detect, classify and reconstruct emotions, mainly in order to map users’ behaviours and to affect them in effective ways. But what happens when we use machine learning to perform human feelings? And what if we use it to produce autonomous behaviours, rather than to affect consumers? Adversarial Feelings is an attempt to inform non-human intelligence with “emotional data sets”, in order to build an “algorithmic intimacy” through those intelligent devices. The goal is to observe the subjective/affective dimension of intimacy from the outside, to speak about human emotions as perceived by non-human eyes. Transposing them into a new shape helps Lorem to embrace a new perspective, and to recognise fractured experiences.

I spoke with Francesco as he made the plane trip toward Berlin. Friday night, he joins a new series called KEYS, which injects new inquiry into the club space – AV performance, talks, all mixed up with nightlife. It’s the sort of thing you get in festivals, but in festivals all those ideas have been packaged and finished. KEYS, at a new post-industrial space called Trauma Bar near Hauptbahnhof, is a laboratory. And, of course, I like laboratories. So I was pleased to hear what mad science was generating all of this – the team of humans and machines alike.

So I understand the ‘AI’ theme – am I correct in understanding that the focus to derive this emotional meaning was on text? Did it figure into the work in any other ways, too?

Neural networks and AI were involved in almost every step of the project. On the musical side, they were used mainly to generate MIDI patterns, to deal with SysEx from a digital sampler and to manage recursive re-sampling and intelligent timestretch. Rather than generating the final audio, the goal here was to simulate the musician’s behaviors and creative processes.

On the video side, [neural networks] (especially GANs [generative adversarial networks]) were employed both to generate images and to explore the latent spaces through custom-tailored algorithms, in order to let the system edit the video autonomously, in sync with the audio source.
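Audio-driven latent-space editing of this kind usually comes down to walking between latent vectors, with the audio controlling how fast you travel. As a generic illustration only (this is not Lorem’s custom algorithm, just the standard spherical-interpolation idea such systems build on), with a hypothetical per-frame loudness input:

```python
import math

def slerp(a, b, t):
    """Spherical interpolation between latent vectors a and b (t in 0..1)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    omega = math.acos(max(-1.0, min(1.0, dot / (na * nb))))  # angle between them
    so = math.sin(omega)
    if so < 1e-8:  # nearly parallel vectors: plain linear blend
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    return [(math.sin((1 - t) * omega) / so) * x + (math.sin(t * omega) / so) * y
            for x, y in zip(a, b)]

def latent_walk(z0, z1, rms_frames, step=0.1):
    """Walk from z0 toward z1, moving faster when the audio is louder."""
    t, path = 0.0, []
    for rms in rms_frames:
        t = min(1.0, t + step * rms)  # rms in 0..1, one value per video frame
        path.append(slerp(z0, z1, t))
    return path
```

Feed each point on the path to the generator and the video drifts through image space at the pace of the music.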

What data were you training on for the musical patterns?

MIDI – basically I trained the NN on patterns I create.

And wait, SysEx, what? What were you doing with that?

Basically I record every change of state of a sampler (i.e. the automations on a knob), and I ask the machine to “play” the same patch of the sampler according to what it learned from my behavior.
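The recording step he describes, capturing every knob movement so the model can replay the patch, amounts to turning sparse controller events into a fixed-rate parameter curve a sequence model can learn from. A minimal sketch (the event format is invented for illustration; in practice this data would arrive as MIDI CC or SysEx messages via a library like mido):

```python
def events_to_curve(events, total_ticks):
    """Sample-and-hold sparse (tick, value) knob events into one value per
    tick: the fixed-rate sequence a model can be trained to reproduce."""
    events = sorted(events)
    curve, idx, current = [], 0, 0
    for t in range(total_ticks):
        while idx < len(events) and events[idx][0] <= t:
            current = events[idx][1]  # knob last moved at or before this tick
            idx += 1
        curve.append(current)
    return curve

# knob moves captured at ticks 0, 3 and 7 (MIDI-style 0-127 values)
curve = events_to_curve([(0, 64), (3, 96), (7, 32)], total_ticks=10)
```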

What led you to getting involved in this area? And was there some education involved just given the technical complexity of machine learning, for instance?

I always tried to express my work through multidisciplinary projects. I am very fascinated by the way AI approaches data, allowing us to work across different media with the same perspective. Intelligent devices are really a great tool to melt languages. On the other hand, the emergence of AI raises political questions we have been trying to face for some years at Krisis Publishing.
I started working through the Lorem project three years ago, and I was really a newbie on the technical side. I am not a hyper-skilled programmer, and building a collaborative platform has been really important to Lorem’s development. I had the chance to collaborate with AI artists (Klingemann, Kishi), researchers (Henry, Cattabiani, Ferrari), digital artists (Sudolski, Hardiker)…

How did the collaborations work – Mario I’ve known for a while; how did you work with such a diverse team; who did what? What kind of feedback did you get from them?

To be honest, I was very surprised at how open and responsive the AI community is! Some of the people involved are really huge points of reference for me (like Mario, for instance), and I didn’t expect to really get them on Adversarial Feelings. Some of the people involved prepared original contents for the release (Mario, for instance, realised a video on “The Sky would Clear What the …”, Yuma Kishi realized the girl/flower on “Sonnet#002” and Damien Henry did the train hallucination on the “Shonx – Canton” remix). With other people involved, the collaboration was more based on producing something together, such as a video, a piece of code or a way to explore latent spaces.

What was the role of instrument builders – what are we hearing in the sound, then?

Some of the artists and researchers involved realized videos from the audio tracks (Mario Klingemann, Yuma Kishi). Damien Henry gave me the right to use a video he made with his Next Frame Prediction model. Karol Sudolski and Nicola Cattabiani worked with me in developing, respectively, “Are Eyes invisible Socket Contenders” + “Natural Readers” and “3402 Selves”. Karol Sudolski also realized the video part on “Trying to Speak”. Nicola Cattabiani developed the ELERP algorithm with me (to let the network edit videos in sync with the music) and GRUMIDI (the network working with my MIDI files). Mirek Hardiker built the data set for the third chapter of the book.

I wonder what it means for you to make this an immersive performance. What’s the experience you want for that audience; how does that fit into your theme?

I would say Adversarial Feelings is an AV show totally based on emotions. I always try to prepare the most intense, emotional and direct experience I can.

You talk about the emotional content here and its role in the machine learning. How are you relating emotionally to that content; what’s your feeling as you’re performing this? And did the algorithmic material produce a different emotional investment or connection for you?

It’s a bit like when I was a kid listening to my own recorded voice… it was always strange: I wasn’t fully able to recognize my voice as it sounded from the outside. I think neural networks can be an interesting tool to observe our own subjectivity from external, non-human eyes.

The AI hook is of course really visible at the moment. How do you relate to other artists who have done high-profile material in this area recently (Herndon/Dryhurst, Actress, etc.)? And do you feel there’s a growing scene here – is this a medium that has a chance to flourish, or will the electronic arts world just move on to the next buzzword in a year before people get the chance to flesh out more ideas?

I messaged Holly Herndon online a couple of times… I’ve been really into her work since her early releases, and when I heard she was working with AI systems I was trying to finish the Adversarial Feelings videos… so I was very curious to discover her way of dealing with intelligent systems! She’s a really talented artist, and I love the way she’s able to embed conceptual/political frameworks inside her music. Proto is a really complex, inspiring device.

More generally, I think the advent of a new technology always discloses new possibilities in artistic practices. I directly experienced the impact of the internet (and of digital culture) on art, design and music when I was a kid. I’m thrilled that at this point new configurations are not yet codified into established languages, and I feel that working on AI today gives me the chance to be part of a public debate about how to set new standards for the discipline.

What can we expect to see / hear today in Berlin? Is it meaningful to get to do this in this context in KEYS / Trauma Bar?

I am curious too, to be honest. I am very excited to take part in such a situation, alongside artists and researchers I really respect and enjoy. I think the guys at KEYS are trying to do something beautiful and challenging.

Live in Berlin, 7 June

Lorem will join Lexachast (an ongoing collaborative work by Amnesia Scanner, Bill Kouligas and Harm van den Dorpel), N1L (an A/V artist, producer/dj based between Riga, Berlin, and Cairo), and a series of other tantalizing performances and lectures at Trauma Bar.

KEYS: Artificial Intelligence | Lexachast • Lorem • N1L & more [Facebook event]

Lorem project lives here:

http://www.studio-frames.com

The post In Adversarial Feelings, Lorem explores AI’s emotional undercurrents appeared first on CDM Create Digital Music.

Immerse yourself in the full live AV concert by raster’s Belief Defect

Computer and modular machine textures collide with explosions of projected particles and glitching colored textures. Now the full concert footage of the duo Belief Defect (on Raster) is out.

It’s tough to get quality full-length live performance video – previously, writing about this performance, I had to refer to a short excerpt; a lot of the time you can only say “you had to be there” and point to distorted cell phone snippets. So it’s nice to be able to watch a performance end-to-end from the comfort of your chair.

Transport yourself to Kraftwerk’s dirigible-scale, hollowed-out main hall (even the mighty Tresor club is just its basement), from Atonal Festival. It’s a set that’s full of angry, anxious, crunchy-distorted goodness:

(Actually, even having listened to the album a lot, it’s nice to sit and retrace the full live set and see how they composed/improvised it. I would say record your live sets, fellow artists, except I know how the usual Recording Curse works – the day when the Zoom’s batteries are charged, the sound isn’t distorted, and you remembered to hit record is so often … the day you play your worst. They escaped this somehow.)

And Belief Defect represent some of the frontier of what’s possible in epic, festival mainstage-sized experimentalism, both analog and digital, sonic and visual. I got to write extensively about their process, with some support from Native Instruments, and more in-depth here:

BELIEF DEFECT ON THEIR MASCHINE AND REAKTOR MODULAR RIG [Native Instruments blog]

— with more details on how you might apply this to your own work:

What you can learn from Belief Defect’s modular-PC live rig

While we’re talking Raster label – the label formerly Raster-Noton before it again divided so Olaf Bender’s Raster and Carsten Nicolai’s Noton could focus on their own direction – here’s some more. Dasha Rush joined Electronic Beats for a rare portrait of her process and approach, including the live audiovisual-dance collaboration with dancer/choreographer Valentin Tszin and, on visuals, Stanislav Glazov. (Glazov is a talented musician, as well, producing and playing as Procedural aka Prcdrl, as well as a total TouchDesigner whiz.)

And Dasha’s work, elegantly balanced between club and experimental contexts with every mix between, is always inspired.

Here’s that profile, though I hope to check in more shortly with how Stas and Valentin work with Kinect and dance, as well as how Stas integrates visuals with his modular sound:

The post Immerse yourself in the full live AV concert by raster’s Belief Defect appeared first on CDM Create Digital Music.

Take a 3D trip into experimental turntablism with V-A-C Moscow, Shiva Feshareki

Complex music conjures up radical, fluid architectures, vivid angles – why not experience those spatial and rhythmic structures together? Here’s insight into a music video this week in which experimental turntablism and 3D graphics collide.

And collide is the right word. Sound and image are all hard edges, primitive cuts, stimulating corners.

Shiva Feshareki is a London-born composer and turntablist; she’s also got a radio show on NTS. With a research specialization in Daphne Oram (there’s a whole story there, even), she’s made a name for herself as one of the world’s leading composers working with turntables as medium, playing to the likes of the Royal Albert Hall with the London Contemporary Orchestra. Her sounds are themselves often spatial and architectural, too – not just taking over art spaces, but working with spatial organization in her compositions.

That makes a perfect fit with the equally frenetic jump cuts and spinning 3D architectures of visualist Daniel James Oliver Haddock. (He’s a man with so many dimensions they named him four times over.)

NEW FORMS, her album on Belfast’s Resist label, explores the fragmented world of “different social forms,” a cut-up analog to today’s sliced-up, broken society. The abstract formal architecture, then, has a mission. As she writes in the liner notes: “if I can demonstrate sonically how one form can be vastly transformed using nothing other than its own material, then I can demonstrate this complexity and vastness of perspective.”

You can watch her playing with turntables and things around and atop turntables on Against the Clock for FACT:

And grab the album from Bandcamp:

Shiva herself works with graphical scores, which are interpreted in the album art by artist Helena Hamilton. Have a gander at that edition:

But since FACT covered the sound side of this, I decided to snag Daniel James Oliver Haddock. Daniel also wins the award this week for “quickest to answer interview questions,” so hey kids, experimental turntablism will give you energy!

Here’s Daniel:

The conception formed out of conversations with Shiva about the nature of her work and the ways in which she approaches sound. She views sounds as unique 3D structures which can change and be manipulated, so I wanted to emulate that in the video. I was also interested in the drawings and diagrams that she makes to plan out different aspects of her performances, mapping out speakers and soundscapes; I thought they were really beautiful in a very clinical way, so again I wanted to use them as a staging point for the 3D environments.

I made about six environments in Cinema 4D which were all inspired by these drawings, then animated these quite rudimentary irregular polyhedrons in the middle to kind of represent various sounds.

Her work usually has a lot of sound manipulation, so I wanted the shapes to change and have variables. I ended up rendering short scenes in different camera perspectives and movements and also changing the textures from monotone to colour.

After all the Cinema 4D stuff, it was just a case of editing it all together! That was fairly labour-intensive: the track is not only very long, but all the sounds have a very unusual tempo to them, some growing over time and then shortening; sounds change and get re-manipulated, so getting everything cut well was challenging. I basically just went through second by second with the waveforms and matched sounds by eye. Once I got the technique down it moved quite quickly. I then got the idea to involve some found footage to kind of break apart the aesthetic a bit.

Of course, there’s a clear link here to Autechre’s Gantz Graf music video, ur-video of all 3D music videos after. But then, there’s something really delightful about seeing those rhythms visualized when they’re produced live on turntables. Just the VJ in me really wants to see the visuals as live performance. (Well, and to me, that’s easier to produce than the Cinema 4D edits!)

But it’s all a real good time at the audiovisual synesthesia experimental disco.

More:

Watch experimental turntablist Shiva Feshareki’s ‘V-A-C Moscow’ video [FACT]

https://www.shivafeshareki.co.uk/

https://resistbelfast.bandcamp.com/album/new-forms

Resist label

The post Take a 3D trip into experimental turntablism with V-A-C Moscow, Shiva Feshareki appeared first on CDM Create Digital Music.

Live compositions on oscilloscope: nuuun, ATOM TM

The Well-Tempered vector rescanner? A new audiovisual release finds poetry in vintage video synthesis and scan processors – and launches a new AV platform for ATOM TM.

nuuun, a collaboration between Atom™ (raster, formerly Raster-Noton) and Americans Jahnavi Stenflo and Nathan Jantz, have produced a “current suite” – Current Suite No. 1. It was all recorded live – sound and visuals alike – in Uwe Schmidt’s Chilean studio.

Minimalistic, exposed presentation of electronic elements is nothing new to the Raster crowd, who are known for bringing this raw aesthetic to their work. You could read that as part punk aesthetic, part fascination with visual imagery, rooted in the collective’s history in East Germany’s underground. But as these elements cycle back, now there’s a fresh interest in working with vectors as medium (see link below, in fact). As we move from novelty to more refined technique, more artists are finding ways of turning these technologies into instruments.

And it’s really the fact that these are instruments – a chamber trio, in title and construct – that’s essential to the work here. It’s not just about the impression of the tech, in other words, but the fact that working on technique brings the different media closer together. As nuuun describe the release:

Informed and inspired by scan processors of the early 1970s such as the Rutt/Etra video synthesizer, “Current Suite No.1” uses the oscillographic medium as an opportunity to bring the observer closer to the signal. Through a technique known as “vector rescanning”, one can program and produce complex encoded waveforms that can only be observed through, and captured from, analog vector displays. These signals modulate the electron beam of a cathode-ray tube, where the resulting phosphorescent traces reveal a world of hidden forms. Both the music and imagery in each of these videos were recorded as live compositions, as if they were intertwined two-way conversations between sound and visual form, producing a unique synesthetic experience.


Even with lots of prominent festivals, audiovisual work – and putting visuals on equal footing with music – still faces an uphill battle. Online music distribution isn’t really geared for AV work; it’s not even obvious how audiovisual work is meant to be uploaded and disseminated apart from channels like YouTube or Vimeo. So it’s also worth noting that Atom™ is promising that NN will be a platform for more audiovisual work. We’ll see what that brings.

Of course, NOTON and Carsten Nicolai (aka Alva Noto) already have a rich fine art / high-end media art career going, and the “raster-media” launched by Olaf Bender in 2017 describes itself as a “platform – a network covering the overlapping border areas of pop, art, and science.” We’ve at least seen raster continue to present installations and other works, extending their footprint beyond the usual routine of record releases.

There’s perhaps not a lot that can be done about the fleeting value of music in distribution, but then music has always been ephemeral. Let’s look at it this way – for those of us who see sound as interconnected with image and science, any conduit to that work is welcome. So watch this space.

For now, we’ve got this first release:

http://atom-tm.com/NN/1/Current-Suite-No-IVideo/

Previously:

Vectors are getting their own festival: lasers and oscilloscopes, go!

In Dreamy, Electrified Landscapes, Nalepa ‘Daytime’ Music Video Meets Rutt-Etra

The post Live compositions on oscilloscope: nuuun, ATOM TM appeared first on CDM Create Digital Music.

Teenage Engineering OP-Z has DMX track for lighting, Unity 3D integration

The OP-Z may be the hot digital synth of the moment, but it’s also the first consumer music instrument to have dedicated features for live visuals. And that starts with lighting (DMX) and 3D visuals (Unity 3D).

One of various surprises about the OP-Z launch is this: there’s a dedicated track for controlling DMX. That’s the MIDI-like protocol that’s an industry standard for stage lighting, supported by lighting instruments and light boards.

Not a whole lot revealed here, but you get the sense that Teenage Engineering are committed to live visual applications:

There’s also integration with Unity 3D, for 2D and 3D animations you can sequence. This integration relies on MIDI, but they’ve gone as far as developing a framework for MIDI-controlled animations. Since Unity runs happily both on mobile devices and beefy desktop rigs, it’s a good match both for doing fun things with your iOS display (which the OP-Z uses anyway), and desktop machines with serious GPUs for more advanced AV shows.

Check out the framework so far on their GitHub:

https://github.com/teenageengineering/videolab
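The videolab framework itself is C# inside Unity; conceptually, though, MIDI-driven animation reduces to normalizing note and controller data into 0..1 values that animation properties can consume. A language-neutral sketch of that mapping (the event names here are illustrative, not videolab’s API):

```python
def midi_to_anim(status: int, data1: int, data2: int) -> dict:
    """Normalize a raw MIDI message into animation-friendly values."""
    kind = status & 0xF0
    if kind == 0x90 and data2 > 0:      # note-on: trigger an animation
        return {"event": "trigger", "note": data1, "strength": data2 / 127.0}
    if kind in (0x80, 0x90):            # note-off (or note-on with velocity 0)
        return {"event": "release", "note": data1}
    if kind == 0xB0:                    # control change: a continuous knob
        return {"event": "knob", "cc": data1, "value": data2 / 127.0}
    return {"event": "ignore"}

hit = midi_to_anim(0x90, 60, 127)  # middle C, full velocity
```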

We’ll talk to Teenage Engineering to find out more about what they’re planning here, because #createdigitalmotion.

https://teenageengineering.com/products/op-z

The post Teenage Engineering OP-Z has DMX track for lighting, Unity 3D integration appeared first on CDM Create Digital Music.

Vectors are getting their own festival: lasers and oscilloscopes, go!

It’s definitely an underground subculture of audiovisual media, but lovers of graphics made with vintage displays, analog oscilloscopes, and lasers are getting their own fall festival to share performances and techniques.

Vector Hack claims to be “the first ever international festival of experimental vector graphics” – a claim that is, uh, probably fair. And it’ll span two cities, starting in Zagreb, Croatia, but wrapping up in the Slovenian capital of Ljubljana.

Why vectors? Well, I’m sure the festival organizers could come up with various answers to that, but let’s go with because they look damned cool. And the organizers behind this particular effort have been spitting out eyeball-dazzling artwork that’s precise, expressive, and unique to this visceral electric medium.

Unconvinced? Fine. Strap in for the best. Festival. Trailer. Ever.

Here’s how they describe the project:

Vector Hack is the first ever international festival of experimental vector graphics. The festival brings together artists, academics, hackers and performers for a week-long program beginning in Zagreb on 01/10/18 and ending in Ljubljana on 07/10/18.

Vector Hack will allow artists creating experimental audio-visual work for oscilloscopes and lasers to share ideas and develop their work together alongside a program of open workshops, talks and performances aimed at allowing young people and a wider audience to learn more about creating their own vector based audio-visual works.

We have gathered a group of fifteen participants all working in the field from a diverse range of locations including the EU, USA and Canada. Each participant brings a unique approach to this exciting field and it will be a rare chance to see all their works together in a single program.

Vector Hack festival is an artist-led initiative organised with support from Radiona.org/Zagreb Makerspace as a collaborative international project alongside Ljubljana’s Ljudmila Art and Science Laboratory and Projekt Atol Institute. It was conceived and initiated by Ivan Marušić Klif and Derek Holzer with assistance from Chris King.

Robert Henke is featured, naturally – the Berlin-based artist behind Monolake and co-creator of Ableton Live has spent recent years refining his skills in spinning his own code to control ultra-fine-tuned laser displays. But maybe what’s most exciting about this scene is discovering a whole network of people hacking into supposedly outmoded display technologies to find new expressive possibilities.

One person who has helped lead that direction is festival initiator Derek Holzer. He’s finishing a thesis on the topic, so we’ll get some more detail soon, but anyone interested in this practice may want to check out his open source Pure Data library. The Vector Synthesis library “allows the creation and manipulation of vector shapes using audio signals sent directly to oscilloscopes, hacked CRT monitors, Vectrex game consoles, ILDA laser displays, and oscilloscope emulation software using the Pure Data programming environment.”

https://github.com/macumbista/vectorsynthesis
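The core idea is simple enough to sketch outside Pure Data: feed one audio channel to an oscilloscope’s X input and the other to Y, and any pair of signals becomes a drawing. Here’s a minimal Python illustration (our sketch, not part of the Vector Synthesis library) that renders a Lissajous figure as a stereo WAV file:

```python
import math
import struct
import wave

SAMPLE_RATE = 48000
DURATION = 1.0          # seconds
FX, FY = 300.0, 200.0   # a 3:2 ratio traces a closed Lissajous curve

def lissajous_frames(n_samples):
    """Yield (x, y) sample pairs in the range [-1, 1]."""
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        x = math.sin(2 * math.pi * FX * t)
        y = math.sin(2 * math.pi * FY * t + math.pi / 2)
        yield x, y

def write_stereo_wav(path, frames):
    """Left channel = X deflection, right channel = Y deflection."""
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        for x, y in frames:
            w.writeframes(struct.pack("<hh",
                                      int(x * 32767),
                                      int(y * 32767)))

n = int(SAMPLE_RATE * DURATION)
write_stereo_wav("lissajous.wav", lissajous_frames(n))
```

Play that through a sound card into a scope in X/Y mode (or an oscilloscope emulator) and it traces a stable closed curve; the Pure Data library builds the manipulation of such shapes, and output to hacked CRTs and ILDA laser displays, on the same principle.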

The results are entrancing – organic and synthetic all at once, with sound and sight intertwined (both in terms of control signal and resulting sensory impression). That fusion is itself perhaps significant: sight and sound share the same signal here, so they’re bound together in perception as tightly as they are in the patch. Here are just two recent sketches for a taste:

They’re produced by hacking into a Vectrex console – an early-80s consumer game console that used vector signals to drive a cathode ray screen. From Wikipedia, here’s how it works:

The vector generator is an all-analog design using two integrators: X and Y. The computer sets the integration rates using a digital-to-analog converter. The computer controls the integration time by momentarily closing electronic analog switches within the operational-amplifier based integrator circuits. Voltage ramps are produced that the monitor uses to steer the electron beam over the face of the phosphor screen of the cathode ray tube. Another signal is generated that controls the brightness of the line.
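In other words, beam position is the running integral of a rate the CPU sets, accumulated only while the analog switch is closed. A toy numerical model (a hypothetical illustration, not Vectrex firmware) makes the mechanism concrete:

```python
def draw_segment(pos, rate_x, rate_y, dt, steps):
    """Integrate X/Y rates while the 'analog switch' is closed.

    pos      -- (x, y) current beam position
    rate_x/y -- ramp slopes set by the DAC (arbitrary units per step)
    dt       -- simulated time per step
    steps    -- how long the switch stays closed
    Returns the trail of beam positions, i.e. the drawn line.
    """
    x, y = pos
    trail = [(x, y)]
    for _ in range(steps):
        x += rate_x * dt   # X integrator ramps at its set rate
        y += rate_y * dt   # Y integrator ramps independently
        trail.append((x, y))
    return trail

# Draw a diagonal line: equal X and Y rates, switch closed for 100 steps.
line = draw_segment((0.0, 0.0), 1.0, 1.0, 0.01, 100)
```

Change the rates and you change the line’s direction; change how long the switch stays closed and you change its length – which is all the real analog circuit is doing, at electron-beam speed.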

Ted Davis is working to make these technologies accessible to artists, too, by developing a library for coding-for-artists tool Processing.

http://teddavis.org/xyscope/

Oscilloscopes, ready for interaction with a library by Ted Davis.

Ted Davis.

Here’s a glimpse of some of the other artists in the festival, too. It’s wonderful to watch new developments in the post digital age, as artists produce work that innovates through deeper excavation of technologies of the past.

Akiras Rebirth.

Alberto Novell.

Vanda Kreutz.

Stefanie Bräuer.

Jerobeam Fenderson.

Hrvoslava Brkušić.

Andrew Duff.

More on the festival:
https://radiona.org/
https://wiki.ljudmila.org/Main_Page

http://vectorhackfestival.com/

The post Vectors are getting their own festival: lasers and oscilloscopes, go! appeared first on CDM Create Digital Music.

Moving AV architectures of sine waves: Zeno van den Broek

Dutch-born, Danish-based audiovisual artist Zeno van den Broek continues to enchant with his immersive, minimalistic constructions. We talk to him about how his work clicks.

Zeno had a richly entrancing audiovisual release with our Establishment label in late 2016, Shift Symm. But he’s been prolific in his audiovisual work, with structures made of vector lines for the eye and raw, chest-rattling sine waves for the ear. It’s abstract and intellectual, in the sense that there’s always a clear sense of form and intent – but it’s also visceral, as these mechanisms are set into motion, overlapping and interacting. They tug you into another world.

Zeno is joining a lineup of artists around our Establishment label tonight in Berlin – come round if you see this in time and happen to be in town with us.

But wherever you are, we want to share his work and the way he thinks about it.

CDM: So you’ve relocated from the Netherlands to Copenhagen – what’s that location like for you now, as an artist or individually?

Zeno: Yes, I’ve been living here for a little over two years now; it’s been a very interesting shift, both personally and work-wise. Copenhagen is a very pleasant city to live in – it’s so spacious, green and calm. For my work, it took some more time to feel at home, since things are structured quite differently from Holland, and interdisciplinary work isn’t as common as in Amsterdam or Berlin. I’ve recently joined a composers’ society, which is a totally new thing for me, so I’m very curious to see where this will lead in the future. Living in such a tranquil environment has enabled me to focus and to dive deeper into the concepts behind my work; it feels like a good and healthy base to explore the world from, like being in Berlin these days!

Working with these raw elements, I wonder how you go about conceiving the composition. Is there some experimentation process, adjustment? Do you stand back from it and work on it at all?

Well, it all starts from the concepts. I’ve been adopting the ‘conceptual art’ practice more and more, using the ideas as the ‘engine’ that creates the work.

For Paranon, this concept came to life out of the desire to deepen my knowledge of sine waves and interference, which always play a role in my art but often in a more instinctive way. Before I created a single tone of Paranon, I did more research on the subject and discovered the need for a structural element in time: the canon, which turned out to be a very interesting method for structuring sine wave developments and for creating patterns of interference that emerge from the shifting repetitions.

Based on this research, I composed canon structures for various parameters of my sine wave generators, such as frequency deviation and phase shifting, and for movements of visual elements, such as lines and grids. After transferring the composition into Ableton, I pressed play and experienced the outcome. It doesn’t make sense to me to make adjustments or experiment with the outcome of the piece, because all decisions have a reason, related to the concept. To me, those reasons are more important than whether something sounds pleasant.

If I want to make changes, I have to go back to the concept, and see where my translation from concept to sound or image can be interpreted differently.
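To make the idea concrete for readers: a sine-wave canon in its simplest form is the same material entering again with an offset, and when the answering voice carries a slight frequency deviation, the sum beats at exactly that deviation. A toy Python sketch (our illustration – Zeno’s actual generators live in Max):

```python
import math

SAMPLE_RATE = 1000.0  # a low rate keeps the sketch cheap

def sine_canon(freq, deviation, n_samples):
    """Sum two canon voices: the answering voice runs at a slightly
    shifted frequency, so the combined signal beats at `deviation` Hz."""
    out = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        out.append(math.sin(2 * math.pi * freq * t)
                   + math.sin(2 * math.pi * (freq + deviation) * t))
    return out

# 50 Hz against 52 Hz: the interference pattern pulses twice a second,
# cancelling completely at t = 0.25 s and t = 0.75 s.
samples = sine_canon(freq=50.0, deviation=2.0, n_samples=1000)
```

The interference is entirely determined by the two frequencies – which is the appeal of the method: compose the canon parameters, and the pattern follows from the concept rather than from hand-tweaking.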

There’s such a strong synesthetic element to how you merge audio and visual in all your works. Do you imagine visuals as you’re working with the sound? What do they look like?

I try to avoid creating an image based on the sound. To me, both senses and media are equally important, so I treat them equally in my methods, going from concept to creation. Because I work with fundamental elements in both the visuals and the sound — such as sine waves, lines, grids, and pulses — they create strong relationships and new, often unexpected, results appear from the merging of the elements.

Can you tell us a bit about your process – and I think this has changed – in terms of how you’re constructing your sonic and visual materials?

Yes, that’s true; I’ve been changing my tools to better match my methods. Because of my background in architecture, drawing was always the foundation of my work — to form structures and concepts, but also to create the visual elements. My audiovisual work Shift Symm was still mainly built up out of animated vector drawings in combination with generative elements.

But I’ve been moving to more algorithmic methods, because the connection to the concepts feels more natural and it gives more freedom: not being limited by my drawing ability, I can go almost directly from concept to algorithm to result. So I’ve been incorporating more and more Max into my Ableton sets, and I started using [Derivative] TouchDesigner for the visuals; Paranon was completely generated in TouchDesigner.

You’ve also been playing out live a lot more. What’s evolving as you perform these works?

Live performances are really important to me, because I love the feeling of having to perform a piece at exactly that time and place, with all the tension of being able to f*** it up — the uncompromising and unforgiving nature of a performance. This tension, in combination with being able to shape the work to the acoustics of the venue, makes a performance into something much bigger than I can rationally explain. To achieve this, I have to really perform it live: I always give myself the freedom to shape the path a performance takes, to time various phrases and transitions, and to adjust many parameters of the piece. This does create a certain friction with the more rational, algorithmic foundation of the work, but I believe this friction is exactly what makes a live performance worthwhile.

So with Shift Symm, your release on our label, we got to play a little bit with distribution methods – which, while I don’t know that it was a huge business breakthrough, was interesting at least in changing the relationship to the listener. Where are you currently deploying your artwork, and what’s the significance of these different gallery / performance / club contexts for you?

Yes, our Shift Symm release was my first ‘digital only’ audiovisual release; this new form has given me many opportunities in the realm of film festivals, where it has been screened and performed worldwide. I enjoy showing my work at film festivals because of the more equal approach to sound and image and the more focused attention of the audience. But I also enjoy performing in a club context a lot, because of the energy and the possibilities to work outside the ‘black box’, exploring and incorporating the architecture of the venues in my work.

It strikes me that minimalism in art or sound isn’t what it once was. Obviously, minimal art has its own history. And I got to talk to Carsten Nicolai and Olaf Bender at SONAR a couple years back about the genesis of their work in the DDR – why it was a way of escaping images containing propaganda. What does it mean to you to focus on raw and abstract materials now, as an artist working in this moment? Is there something different about that sensibility – aesthetically, historically, technologically – because of what you’ve been through?

I think my love for minimal aesthetics comes from my time working as an architect in programs like AutoCAD — the beautiful minimalistic world of the black screen, with thin monochromatic lines representing spaces and physical structures. And, of course, there is a strong historic relation between conceptual art and minimalism, with artists like Sol LeWitt.

But to me, it most strongly relates to what I want to evoke in the person experiencing my work: I’m not looking to offer a way to escape reality, or to give an immersive blanket of atmosphere with a certain ambiance. I’m aiming to ‘activate’ by creating a very abstract but coherent world. It’s one in which expectations are created, then distorted the next moment — perspectives shift, and the audience has only these fundamental elements to relate to, which don’t carry a predefined connotation but evoke questions, moments of surprise, and some insight into the conceptual foundation of the work. The reviews and responses I’m getting on a quite ‘rational’ and ‘objective’ piece like Paranon are surprisingly emotional and subjective; the abstract and minimalistic world of sound and images seemingly opens up and activates, while keeping enough space for personal interpretation.

What will be your technical setup in Berlin tonight; how will you work?

For my Paranon performance in Berlin, I’ll work with custom-programmed sine wave generators in [Cycling ’74] Max, for which the canon structures are composed in Ableton Live. These structures send messages via OSC, and the audio signal is routed to TouchDesigner for the visuals. On stage, I work with various parameters of sound and image that control fundamental elements, where the slightest alteration has a big impact on the whole process.
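A quick aside on the plumbing for anyone curious: OSC (Open Sound Control) messages are small binary packets, with a null-padded address pattern, a type-tag string, and big-endian arguments. A minimal encoder in Python (a generic sketch of the protocol, not Zeno’s code; the address here is made up) looks like this:

```python
import struct

def osc_pad(b):
    """Pad bytes to a multiple of 4 with NULs, as OSC requires
    (strings always get at least one terminating NUL)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *floats):
    """Encode an OSC message with float32 arguments."""
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for value in floats:
        packet += struct.pack(">f", value)   # big-endian float32
    return packet

# e.g. set a (hypothetical) sine generator's frequency to 440 Hz:
msg = osc_message("/sine/freq", 440.0)
```

Libraries wrap this up in every environment Zeno mentions — Max, Live, and TouchDesigner all speak it natively — but the byte layout above is essentially all there is to the message format.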

Any works upcoming next?

Besides performing and screening my audiovisual pieces such as Paranon and Hysteresis, I’m working on two big projects.

One is an ongoing concert series in the Old Church of Amsterdam, where the installation Anastasis by Giorgio Andreotta Calò filters all the natural light in the church into a deep red. In June, I performed a first piece there: a short composition for organ and church bells, re-amplified in the church using the process made famous by Alvin Lucier’s “I Am Sitting in a Room” — slowly conforming the organ and bells to the resonant frequencies of the space. In August, this will continue in a collaboration with B.J. Nilsen, expanding on the resonant frequencies and getting deeper into the surface of the bells.

The other project is a collaboration with Robin Koek named Raumklang: with this project, we aim to create immaterial sound sculptures based on the acoustic characteristics of the location in which they are presented. We are currently developing the technical system to realize this, based on spatial tracking and choreographies of recording. Over the last few months, we’ve done residencies at V2 in Rotterdam and STEIM in Amsterdam, and we’re aiming to present a first prototype in September.

Thanks, Zeno! Really looking forward to tonight!

If you missed Shift Symm on Establishment, here’s your chance:

And tonight in Berlin, at ACUD:

Debashis Sinha / Jemma Woolmore / Zeno van den Broek / Marsch

http://zenovandenbroek.com

The post Moving AV architectures of sine waves: Zeno van den Broek appeared first on CDM Create Digital Music.

Speaking in signal, across the divide between video and sound: SIGINT

Performing voltages. The notion is now familiar in synthesis – improvising with signals – but what about the dance between noise and image? Artist Oliver Dodd has been exploring the audiovisual modular.

Integrated sound-image systems have been a fascination of the avant-garde through the history of electronic art. But if there’s a return to the raw signal, maybe that’s born of a desire to regain a sense of fusion of media that can be lost in overcomplicated newer work.

Underground label Detroit Underground has had one foot in technology, one in audiovisual output. DU have their own line of Eurorack modules and a deep interest in electronics and invention, matching a line of audiovisual works. And the label is even putting out AV releases on VHS tape. (Well, visuals need some answer to the vinyl phonograph. You were expecting maybe laserdiscs?)

And SIGINT, Oliver Dodd’s project, is one of the more compelling releases in that series. It debuted over the winter, but now feels a perfect time to delve into what it’s about – and some of Oliver’s other, evocative work.

First, the full description, which draws on images of scanning transmissions from space, but takes place in a very localized, Earthbound rig:

The concept of SIGINT is based on the idea of scanning, searching, and recording satellite transmissions in the pursuit of capturing what appear to be anomalies as intelligent signals hidden within the transmission spectrum.

SIGINT represents these raw recordings, captured in their live, original form. These audio-video recordings were performed and rendered to VHS in real-time in an attempt to experience, explore, decipher, study, and decode this deeply evocative, secret, and embedded form of communication whose origins appear both alien and unknown, like paranormal imprints or reflections of inter-dimensional beings within the transmission stream.

The amazing thing about this project is the synchronicities formed between the audio and the video in real time. By connecting with the aural and the visual in this way, one generates and discovers strange, new, and interesting communications and compositions between these two spaces. The modular audio/video system allows a direct connection between the video and the audio, and vice versa. A single patch cable can span between the two worlds and create new possibilities for each. The modular system used for SIGINT was one 6U case of only Industrial Music Electronics (Harvestman) modules for audio and one 3U case of LZX Industries modules for video.

Videos:

Album:

CDM: I’m going through all these lovely experiments on your YouTube channel. How do these experiments come about?

Oliver: My Instagram and YouTube content is mostly just a snapshot of a larger picture of what I am currently working on, either that day, or of a larger project or work generally, which could be either a live performance, for example, or a release, or a video project.

That’s one hell of an AV modular system. Can you walk us through the modules in there? What’s your workflow like working in an audiovisual system like this, as opposed to systems (software or hardware) that tend to focus on one medium or another?

It’s a two-part system. There is one part that is audio (Industrial Music Electronics, or “Harvestman”), and there is one part that is video (LZX Industries). They communicate with each other via control voltages and audio rate signals, and they can independently influence each other in both ways or directions. For example, the audio can control the video, and the control voltages generated in the video system can also control sources in the audio system.

Many of the triggers and control voltages are shared between the two systems, which creates a cohesive audio/video experience. However, not every audio signal that sounds good looks good visually, and so further tweaking and conditioning of the voltages is required to develop a more cohesive and harmonious relationship between them.
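In software terms, that kind of conditioning often amounts to attenuating, offsetting, or slew-limiting a control signal so that an abrupt audio-rate jump becomes a gentle visual ramp. Here’s a digital stand-in for an analog slew limiter (illustrative only — Oliver’s conditioning happens in hardware):

```python
def slew_limit(signal, max_step):
    """Limit how fast a control signal may change per sample,
    smoothing abrupt jumps into ramps (a software stand-in for
    an analog slew limiter / lag processor)."""
    out = []
    level = signal[0] if signal else 0.0
    for target in signal:
        delta = target - level
        # clamp the per-sample change to +/- max_step
        delta = max(-max_step, min(max_step, delta))
        level += delta
        out.append(level)
    return out

# A gate jumping 0 -> 1 becomes a ramp spread over 10 samples:
smoothed = slew_limit([0.0] * 5 + [1.0] * 20, max_step=0.1)
```

The same trigger can then drive a percussive envelope on the audio side and the slewed, slow-moving copy on the video side — one gesture, two appropriately shaped destinations.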

The two systems: the 6U (taller) case holds the Harvestman audio modules, and the 3U (smaller) case holds the video-processing modules from LZX Industries. Cases designed by Elite Modular.

I’m curious about your notion of finding patterns or paranormal in the content. Why is that significant to you? Carl Sagan gets at this idea of listening to noise in his original novel Contact (using the main character listening to a washing machine at one point, if I recall). What drew you to this sort of idea – and does it only say something about the listener, or the data, too?

Data transmission surrounds us at all times: invisible frequencies flow through the air, as unobstructed as the air itself, outside our ability to perceive them. Human perception has its limits, and there are far more frequencies than we can experience. Perhaps the ones we cannot perceive pass through the range of those we can, leaving trails or impressions on the perceptible frequencies as they move through, which we can then decode.

What about the fact that this is an audiovisual creation? What does it mean to fuse those media for a project?

The amazing thing about this project are the synchronicities formed between the audio and the video in real time. By connecting with the aural and the visual in this way, one generates and discovers strange, new, and interesting communications and compositions between these two spaces. The modular audio/video system allows direct connection between the video and the audio, and vice versa. A single patch cable can span between the two worlds and create new possibilities for each.

And now, some loops…

Oliver’s “experiments” series is transcendent and mesmerizing:

If this were a less cruel world, the YouTube algorithm would only feed you this. But in the meantime, you can subscribe to his channel. And ignore the view counts, actually. One person watching this one video is already sublime.

Plus, from Oliver’s gorgeous Instagram account, some ambient AV sketches to round things out.

More at: https://www.instagram.com/_oliverdodd/

https://detund.bandcamp.com/

https://detund.bandcamp.com/album/sigint

The post Speaking in signal, across the divide between video and sound: SIGINT appeared first on CDM Create Digital Music.