Inside the immersive kinetic laser sound world of Christopher Bauder, Robert Henke

Light and sound, space and music – Christopher Bauder and Robert Henke continue to explore immersive integrated AV form. Here’s a look into how they create, following a new edition of their piece Deep Web.

Here’s an extensive interview with the two artists by EventElevator, including some gorgeous footage from Deep Web.

Deep Web premiered in 2016 at CTM Festival, but it returned this summer to the space for which it was created, Berlin’s Kraftwerk (a former power plant). And because both artists are such obsessive perfectionists – in technology, in formal refinement – it’s worth this second trip.

Christopher (founder of kinetic lighting firm WHITEvoid) and Robert (also known as Monolake and co-creator of Ableton Live) have worked together for a long time. A decade ago, I got to see (and document) ATOM at MUTEK in Montreal, which in some sense would prove a kind of study for a work like Deep Web. ATOM tightly fused sound and light, as mechanically-controlled balloons formed different arrangements in space. The array of balloons became almost like a kind of visualized three-dimensional sequencer.

Deep Web is on a grander scale, but many of the basic elements remain – winches moving objects, lights illuminating objects, spatial arrangements, synchronized sound and light, a free-ranging and percussive musical score with an organic, material approach to samples reduced to their barest elements and then rearranged. The dramaturgy is entirely abstract – a kind of narrative about an array and its volumetric transformations.

In Deep Web, color and sound create the progression of moods. At the live show I saw last weekend, Robert, jazzed on performance endorphins, was glad to chat at length with some gathered fans about his process. The “Deep Web” element is there, as a kind of collage of samples – the collapsed geography of the information age. The sounds are disguised, but there are bits of cell phones, telecommunications ephemera, and airport announcements, made into a kind of encoded symphony.

Photo: Ralph Larmann.
Photo: Ralph Larmann.

Whether you buy into this seems to come down to whether the artists’ particular take tickles your synesthesia and strikes some emotional resonance. But there is something balletic about this precise fusion of laser lines and globes, able to move freely through the architecture. Kraftwerk will again play host later this month to Atonal Festival, and that meeting of music and architecture is essentially about the void: one somber vertical projection rises like a banner behind the stage, and the vacated power plant is mostly empty, vibrating air. Deep Web, by contrast, occupies and electrifies that unused volume.

I spoke to Christopher to find out more about how the work has evolved and is executed.

Christopher and Robert at the helm of the show’s live AV controls. Photo: Christopher Bauder.
Lasers and equipment, from the side. Photo: Peter Kirn.

Robert’s music is surprisingly improvisational in the live performance versions of the piece. You could feel that the night I was there – even as Robert’s style is as always reserved, there’s a sense of flowing expression.

To create these delicate arrangements of lit globes and laser lines, Christopher and his team at WHITEvoid plan extensively in Rhino and Vectorworks – the architectural scoring that comes before the performance. The visual side is controlled with WHITEvoid’s own kinetic control software, KLC, which is built on the industry-leading visual development / dataflow environment TouchDesigner.

Robert’s rig is Ableton Live, controlled by fader and knob boxes. There is a master timeline in Live – that’s the timeline bit to which Robert refers, and it is different from his usual performance paradigm as I understand it. That timeline in turn has “loads of automation parameters” that connect from Live’s music arrangement to TouchDesigner’s visual control. But Robert can also change and manipulate these elements as he plays, with the visuals responding in time.

The machines that make the magic happen. Photo: Christopher Bauder.
Photo: Christopher Bauder.

Different visual scenes load as presets. Each preset then has different controllable parameters – most have around ten available for realtime operation, Christopher tells CDM.

“[Visual parameters] can be speeds, colors, selection of lasers, individual parameters like seed number, range, position, etc.,” Christopher says. “In one scene, we are linking acceleration of a continuously running directional laser pattern to a re-trigger of a beat. So options are virtually endless. It’s almost never just on/off for anything – very dynamic.”
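None of this mapping code is public, but the idea Christopher describes – a musical event nudging a visual parameter, which then evolves dynamically rather than just switching on and off – can be sketched in a few lines. Everything below (the `LaserScene` class, the parameter names, the decay constants) is a hypothetical illustration, not WHITEvoid’s KLC or TouchDesigner code:

```python
# Hypothetical sketch: a beat re-trigger accelerates a laser pattern,
# and the acceleration decays over time so nothing is simply on/off.

class LaserScene:
    """One visual preset with a handful of realtime parameters."""

    def __init__(self):
        # Normalized 0..1 parameters: speed, acceleration, seed, etc.
        self.params = {"speed": 0.2, "acceleration": 0.0, "seed": 0.5}

    def on_beat_retrigger(self, velocity):
        # A beat re-trigger kicks the pattern's acceleration;
        # velocity (0..127, MIDI-style) is scaled into the 0..1 range.
        self.params["acceleration"] = min(1.0, velocity / 127.0)

    def tick(self, dt):
        # Each frame, acceleration feeds the running speed and then
        # decays, so the pattern surges on the beat and settles after.
        self.params["speed"] = min(
            1.0, self.params["speed"] + self.params["acceleration"] * dt
        )
        self.params["acceleration"] *= max(0.0, 1.0 - 2.0 * dt)


scene = LaserScene()
scene.on_beat_retrigger(100)
scene.tick(1 / 60)  # advance one frame at 60 fps
print(round(scene.params["speed"], 4))  # -> 0.2131
```

In a real rig the `on_beat_retrigger` call would arrive from Live’s automation (via MIDI or OSC) and `tick` would run per rendered frame.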

This question of light and space as instrument merits, I think, deeper and broader exploration. WHITEvoid are one of a handful of firms and artists developing the medium, both artistically and technically, in a fairly tight-knit community of people around the world. Stay tuned; I hope to pay them another visit and talk to some of the other artists working in this direction.

You can check their work (and their tech) at their site:

https://www.whitevoid.com/

Christopher also provided some unique behind-the-scenes shots for us here, along with images that reveal the attention to pattern and form.

The Red Balloon. Photo: Ralph Larmann.
Ralph Larmann.
Ralph Larmann.
Ralph Larmann.
Ralph Larmann.
Christopher Bauder.
Peter Kirn.


The post Inside the immersive kinetic laser sound world of Christopher Bauder, Robert Henke appeared first on CDM Create Digital Music.

In Adversarial Feelings, Lorem explores AI’s emotional undercurrents

In glitching collisions of faces, percussive bolts of lightning, Lorem has ripped open machine learning’s generative powers in a new audiovisual work. Here’s the artist on what he’s doing, as he’s about to join a new inquisitive club series in Berlin.

Machine learning that derives gestures from System Exclusive MIDI data … surprising spectacles of unnatural adversarial neural nets … Lorem’s latest AV work has it all.

And by pairing producer Francesco D’Abbraccio with a team of creators across media, it brings together a serious think tank of artist-engineers pushing machine learning and neural nets to new places. The project, as he describes it:

Lorem is a music-driven multidisciplinary project working with neural networks and AI systems to produce sounds, visuals and texts. In the last three years I have had the opportunity to collaborate with AI artists (Mario Klingemann, Yuma Kishi), AI researchers (Damien Henry, Nicola Cattabiani), video artists (Karol Sudolski, Mirek Hardiker) and music instrument designers (Luca Pagan, Paolo Ferrari) to produce original materials.

Adversarial Feelings is the first release by Lorem: a 22-minute AV piece, nine music tracks, and a book. The record will be released on April 19th on Krisis via Cargo Music.

And what about achieving intimacy with nets? He explains:

Neural networks are nowadays widely used to detect, classify and reconstruct emotions, mainly in order to map users’ behaviours and to affect them in effective ways. But what happens when we use machine learning to perform human feelings? And what if we use it to produce autonomous behaviours, rather than to affect consumers? Adversarial Feelings is an attempt to inform non-human intelligence with “emotional data sets”, in order to build an “algorithmic intimacy” through those intelligent devices. The goal is to observe the subjective/affective dimension of intimacy from the outside, to speak about human emotions as perceived by non-human eyes. Transposing them into a new shape helps Lorem to embrace a new perspective, and to recognise fractured experiences.

I spoke with Francesco as he made the plane trip to Berlin. On Friday night, he joins a new series called KEYS, which injects new inquiry into the club space – AV performance, talks, all mixed up with nightlife. It’s the sort of thing you get at festivals, but at festivals all those ideas have already been packaged and finished. KEYS, at a new post-industrial space called Trauma Bar near Hauptbahnhof, is a laboratory. And, of course, I like laboratories. So I was pleased to hear what mad science was generating all of this – the team of humans and machines alike.

So I understand the ‘AI’ theme – am I correct in understanding that the focus to derive this emotional meaning was on text? Did it figure into the work in any other ways, too?

Neural networks and AI were involved in almost every step of the project. On the musical side, they were used mainly to generate MIDI patterns, to deal with SysEx from a digital sampler and to manage recursive re-sampling and intelligent timestretch. Rather than generating the final audio, the goal here was to simulate the musician’s behaviors and creative processes.

On the video side, [neural networks] (especially GANs [generative adversarial networks]) were employed both to generate images and to explore the latent spaces through custom-tailored algorithms, in order to let the system edit the video autonomously, according to the audio source.
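Lorem’s custom exploration algorithms aren’t published, but the basic move – steering a path through a GAN’s latent space with an audio signal – can be sketched. The usual way to walk between two latent vectors is spherical interpolation (slerp); this toy uses two-dimensional vectors purely for illustration, standing in for the hundreds of dimensions a real GAN latent space has:

```python
import math

def slerp(a, b, t):
    """Spherical interpolation between two latent vectors a and b, t in 0..1."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    omega = math.acos(max(-1.0, min(1.0, dot / (na * nb))))
    if omega < 1e-8:  # vectors (nearly) parallel: nothing to interpolate
        return list(a)
    sin_o = math.sin(omega)
    return [
        (math.sin((1 - t) * omega) / sin_o) * x + (math.sin(t * omega) / sin_o) * y
        for x, y in zip(a, b)
    ]

# Drive the interpolation position with an audio envelope (0..1):
# a loud frame pushes the image toward latent point b, silence back to a.
a = [1.0, 0.0]
b = [0.0, 1.0]
for envelope in (0.0, 0.5, 1.0):
    frame_latent = slerp(a, b, envelope)
    print([round(v, 3) for v in frame_latent])
```

In a real pipeline each `frame_latent` would be fed to the generator network to render one video frame.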

What data were you training on for the musical patterns?

MIDI – basically I trained the NN on patterns I create.

And wait, SysEx, what? What were you doing with that?

Basically I record every change of state of a sampler (i.e. the automations on a knob), and I ask the machine to “play” the same patch of the sampler according to what it learned from my behavior.
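Lorem doesn’t detail the recording format, but the first half of that pipeline – logging every change of state so a model can later learn to “play” the patch – reduces to a timestamped event list per control. A purely hypothetical sketch (names invented, no neural net involved, just the record/replay scaffolding):

```python
import bisect

class KnobRecorder:
    """Record every change of state of one control as (time, value) events."""

    def __init__(self):
        self.times = []   # monotonically increasing timestamps (seconds)
        self.values = []  # the control's value at each change

    def record(self, t, value):
        self.times.append(t)
        self.values.append(value)

    def value_at(self, t):
        # Replay: the knob holds its last recorded value until the next
        # change, like a stepped automation lane.
        i = bisect.bisect_right(self.times, t) - 1
        return self.values[max(i, 0)]


rec = KnobRecorder()
for t, v in [(0.0, 10), (0.5, 64), (2.0, 127)]:
    rec.record(t, v)
print(rec.value_at(1.0))  # between the 0.5 and 2.0 events -> 64
```

A model trained on many such sequences could then emit its own (time, value) events to drive the same sampler patch.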

What led you to getting involved in this area? And was there some education involved just given the technical complexity of machine learning, for instance?

I have always tried to express my work through multidisciplinary projects. I am very fascinated by the way AI approaches data, allowing us to work across different media with the same perspective. Intelligent devices are really a great tool to melt languages. On the other hand, the emergence of AI discloses political questions we have been trying to face for some years at Krisis Publishing.
I started working on the Lorem project three years ago, and I was really a newbie on the technical side. I am not a hyper-skilled programmer, and building a collaborative platform has been really important to Lorem’s development. I had the chance to collaborate with AI artists (Klingemann, Kishi), researchers (Henry, Cattabiani, Ferrari), digital artists (Sudolski, Hardiker)…

How did the collaborations work – Mario I’ve known for a while; how did you work with such a diverse team; who did what? What kind of feedback did you get from them?

To be honest, I was very surprised by how open and responsive the AI community is! Some of the people involved are really huge points of reference for me (like Mario, for instance), and I didn’t expect to really get them on Adversarial Feelings. Some of the people involved prepared original contents for the release: Mario, for instance, realised a video on “The Sky would Clear What the …”, Yuma Kishi realized the girl/flower on “Sonnet#002”, and Damien Henry did the train hallucination on the “Shonx – Canton” remix. With other people involved, the collaboration was more based on producing something together, such as a video, a piece of code or a way to explore latent spaces.

What was the role of instrument builders – what are we hearing in the sound, then?

Some of the artists and researchers involved realized some videos from the audio tracks (Mario Klingemann, Yuma Kishi). Damien Henry gave me the right to use a video he made with his Next Frame Prediction model. Karol Sudolski and Nicola Cattabiani worked with me in developing, respectively, “Are Eyes invisible Socket Contenders” + “Natural Readers” and “3402 Selves”. Karol Sudolski also realized the video part on “Trying to Speak”. Nicola Cattabiani developed the ELERP algorithm with me (to let the network edit videos according to the music) and GRUMIDI (the network working with my MIDI files). Mirek Hardiker built the data set for the third chapter of the book.

I wonder what it means for you to make this an immersive performance. What’s the experience you want for that audience; how does that fit into your theme?

I would say Adversarial Feelings is an AV show totally based on emotions. I always try to prepare the most intense, emotional and direct experience I can.

You talk about the emotional content here and its role in the machine learning. How are you relating emotionally to that content; what’s your feeling as you’re performing this? And did the algorithmic material produce a different emotional investment or connection for you?

It’s a bit like when I was a kid listening to my recorded voice… it was always strange: I wasn’t fully able to recognize my voice as it sounded from the outside. I think neural networks can be an interesting tool to observe our own subjectivity from external, non-human eyes.

The AI hook is of course really visible at the moment. How do you relate to other artists who have done high-profile material in this area recently (Herndon/Dryhurst, Actress, etc.)? And do you feel there’s a growing scene here – is this a medium that has a chance to flourish, or will the electronic arts world just move on to the next buzzword in a year before people get the chance to flesh out more ideas?

I have messaged Holly Herndon a couple of times online… I have been really into her work since her early releases, and when I heard she was working with AI systems I was trying to finish the Adversarial Feelings videos… so I was so curious to discover her way of dealing with intelligent systems! She’s a really talented artist, and I love the way she’s able to embed conceptual/political frameworks inside her music. Proto is a really complex, inspiring device.

More generally, I think the advent of a new technology always discloses new possibilities in artistic practices. I directly experienced the impact of the internet (and of digital culture) on art, design and music when I was a kid. I’m thrilled that, at this point, new configurations are not yet codified into established languages, and I feel working on AI today gives me the possibility to be part of a public debate about how to set new standards for the discipline.

What can we expect to see / hear today in Berlin? Is it meaningful to get to do this in this context in KEYS / Trauma Bar?

I am curious too, to be honest. I am very excited to take part in such a situation, beside artists and researchers I really respect and enjoy. I think the guys at KEYS are trying to do something beautiful and challenging.

Live in Berlin, 7 June

Lorem will join Lexachast (an ongoing collaborative work by Amnesia Scanner, Bill Kouligas and Harm van den Dorpel), N1L (an A/V artist and producer/DJ based between Riga, Berlin, and Cairo), and a series of other tantalizing performances and lectures at Trauma Bar.

KEYS: Artificial Intelligence | Lexachast • Lorem • N1L & more [Facebook event]

Lorem project lives here:

http://www.studio-frames.com


Immerse yourself in the full live AV concert by raster’s Belief Defect

Computer and modular machine textures collide with explosions of projected particles and glitching colored textures. Now the full concert footage of the duo Belief Defect (on Raster) is out.

It’s tough to get quality full-length live performance video – writing about this performance previously, I had to refer to a short excerpt; a lot of the time you can only say “you had to be there” and point to distorted cell phone snippets. So it’s nice to be able to watch a performance end-to-end from the comfort of your chair.

Transport yourself to the dirigible-scaled, hollowed-out hall of Kraftwerk (even mighty Tresor club is just its basement) for this set from Atonal Festival. It’s full of angry, anxious, crunchy-distorted goodness:

(Actually, even having listened to the album a lot, it’s nice to sit and retrace the full live set and see how they composed/improvised it. I would say “record your live sets, fellow artists,” except I know how the usual Recording Curse works – the day when the Zoom’s batteries are charged, the sound isn’t distorted, and you remember to hit record is so often… the day you play your worst. They escaped this somehow.)

And Belief Defect represent some of the frontier of what’s possible in epic, festival mainstage-sized experimentalism, both analog and digital, sonic and visual. I got to write extensively about their process, with some support from Native Instruments, and more in-depth here:

BELIEF DEFECT ON THEIR MASCHINE AND REAKTOR MODULAR RIG [Native Instruments blog]

— with more details on how you might apply this to your own work:

What you can learn from Belief Defect’s modular-PC live rig

While we’re talking about the Raster label – formerly Raster-Noton, before it divided again so Olaf Bender’s Raster and Carsten Nicolai’s Noton could each focus on their own direction – here’s some more. Dasha Rush joined Electronic Beats for a rare portrait of her process and approach, including her live audiovisual-dance collaboration with dancer/choreographer Valentin Tszin and, on visuals, Stanislav Glazov. (Glazov is a talented musician as well, producing and playing as Procedural aka Prcdrl, and a total TouchDesigner whiz.)

And Dasha’s work, elegantly balanced between club and experimental contexts with every mix between, is always inspired.

Here’s that profile, though I hope to check in more shortly with how Stas and Valentin work with Kinect and dance, as well as how Stas integrates visuals with his modular sound:


Premiere: rituals of sound and rhythm in the latest from Mexico’s FAX

The Changing Landscape is the latest mystical outing from Mexican ambient/experimental electronic master FAX. And to launch into that world, we have a video that’s liquid, glitchy, a post-digital mind trip.

Let’s watch the music video, created by Hirám López:

Fax, aka Rubén Alonso Tamayo, is the epitome of a long-term artist. He’s got multiple decades of music to his name, spanning from dancefloor to far-out experimental soundscape, but always imbued with craft and thought. Ideally, you’ll get to hear Fax’s work in person – live, he creates earthquakes of sound and transports audiences to other planes. (I was lucky enough to catch him in Mexico City for the edition of MUTEK there.)

The Mexicali, Baja California-native artist is also a hub of activity in Mexico, across visual and sonic media. So for The Changing Landscape, we get free-flowing, spontaneous journeys full of the percussion work of Yamil Rezc.

The landscapes are organized into a diverse progression of “lands,” variations on a theme and instrumentation. “Land I” opens with a squelchy, exposed bassline before breaking into a gentle, jazzy jam. “Land II” is a stuttering, irregular ambient world, drums and piano idly ambling in stumbles atop waves of fuzzy pads. “Land IV” is more futuristic, pulsing synths glistening as noise crests and breaks across the stereo field. “Land V” crackles and cycles in some final parting ritual.

“Land III,” for which we get this video premiere, is clearly a highlight, an esoteric inner sanctum of the album, digital odd angles against a melancholy dialog of pad and bass.

FAX, photo by Braulio Lam.

Like the label he co-founded, Static Discos, FAX works along borders of geography and medium. As is often the case, the personnel here come from that Mexican border town, Mexicali. And visual collaborator Hirám López tunes into the trance-like, surreal-ultrareal quality of the work, writing:

FAX’s atmospheres and musical progressions submerged me in a hypnotic trance that I had to capture. Land III was an experimentation exercise, where the human collages of Jung Sing were distorted to mix these characters even more through the aesthetics of the glitch. I used Adobe After Effects to replicate a series of visual alterations that bad coding can cause in today’s tech devices, based on the musical figures, to give them a synchronized intention.

It’s all subtle, as is the music – the effect just disrupting the surface, a direct analog to the sonic approach in the album. As they write:

“Displacement mapping” was the technique that Hirám López used the most; it allows you to alter pixel positions based on a high-contrast image, where the brightness intensity determines how the superimposed pixels on that image or map will move. López’s method consisted of using several layers of this effect on Jung’s illustrations, placing keyframes and expressions (code that detects audio and converts it into a numeric value) that moved the distortion map along the x and y axes, in sync with the music. Under the concept of permanence of the disturbance, as a ghostly trace of the previous or later character, the “datamoshing” effect created dynamic transitions with this same tool. Due to their hypnotic effect, the waves and tunnels created with various plugins, including “Ripple” and “Radio Waves,” were very helpful for depth simulation and the repetition of the illustrations, as were the Mandelbrot-type fractals to emphasize the trance.

Also, “masking” allowed López to cut out some elements from the characters in order to extend their fragmentation, also as a resource based on musical sync and especially on visual composition.
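After Effects itself isn’t scriptable here, but the core of the displacement-map effect López describes – brightness in a map image determining how far pixels of the source move – is simple enough to sketch. A toy grayscale version (0–255 values, horizontal displacement only; the function and variable names are illustrative, not Adobe’s API):

```python
def displace_rows(image, disp_map, max_shift):
    """For each pixel, sample the source a few pixels away; the offset is
    derived from the brightness (0-255) of the matching disp_map pixel."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Brightness 0..255 maps to an offset of -max_shift..+max_shift;
            # mid-gray (~128) means "leave this pixel where it is."
            shift = round((disp_map[y][x] / 255.0 * 2 - 1) * max_shift)
            out[y][x] = image[y][(x + shift) % w]  # wrap at the edges
    return out


image = [[0, 50, 100, 150]]      # one row of grayscale source pixels
flat = [[128, 128, 128, 128]]    # mid-gray map: no visible displacement
bright = [[255, 255, 255, 255]]  # white map: full displacement everywhere
print(displace_rows(image, bright, 1))  # -> [[50, 100, 150, 0]]
```

Driving `max_shift` (or the map itself) from an audio-derived numeric value per frame gives exactly the music-synced distortion described above.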

The full album is out on Bandcamp and other services from Static Discos.

Official release page:

http://staticdiscos.com/sta097/

For more – a mix from last year on the Dimension Series from the label:


Take a 3D trip into experimental turntablism with V-A-C Moscow, Shiva Feshareki

Complex music conjures up radical, fluid architectures, vivid angles – why not experience those spatial and rhythmic structures together? Here’s insight into a music video this week in which experimental turntablism and 3D graphics collide.

And collide is the right word. Sound and image are all hard edges, primitive cuts, stimulating corners.

Shiva Feshareki is a London-born composer and turntablist; she’s also got a radio show on NTS. With a research specialization in Daphne Oram (there’s a whole story there, even), she’s made a name for herself as one of the world’s leading composers working with turntables as medium, playing the likes of the Royal Albert Hall with the London Contemporary Orchestra. Her sounds are themselves often spatial and architectural, too – not just taking over art spaces, but working with spatial organization in her compositions.

That makes a perfect fit with the equally frenetic jump cuts and spinning 3D architectures of visualist Daniel James Oliver Haddock. (He’s a man with so many dimensions they named him four times over.)

NEW FORMS, her album on Belfast’s Resist label, explores the fragmented world of “different social forms,” a cut-up analog to today’s sliced-up, broken society. The abstract formal architecture, then, has a mission. As she writes in the liner notes: “if I can demonstrate sonically how one form can be vastly transformed using nothing other than its own material, then I can demonstrate this complexity and vastness of perspective.”

You can watch her playing with turntables and things around and atop turntables on Against the Clock for FACT:

And grab the album from Bandcamp:

Shiva herself works with graphical scores, which are interpreted in the album art by artist Helena Hamilton. Have a gander at that edition:

But since FACT covered the sound side of this, I decided to snag Daniel James Oliver Haddock. Daniel also wins the award this week for “quickest to answer interview questions,” so hey kids, experimental turntablism will give you energy!

Here’s Daniel:

The conception formed out of conversations with Shiva about the nature of her work and the ways in which she approaches sound. She views sound as these unique 3D structures which can change and be manipulated, so I wanted to emulate that in the video. I was also interested in the drawings and diagrams that she makes to plan out different aspects of her performances, mapping out speakers and soundscapes; I thought they were really beautiful in a very clinical way, so again I wanted to use them as a staging point for the 3D environments.

I made about six environments in Cinema 4D, which were all inspired by these drawings, then animated these quite rudimentary irregular polyhedrons in the middle to kind of represent various sounds.

Her work usually has a lot of sound manipulation, so I wanted the shapes to change and have variables. I ended up rendering short scenes in different camera perspectives and movements and also changing the textures from monotone to colour.

After all the Cinema 4D stuff, it was just a case of editing it all together – which was fairly labour-intensive. The track is not only very long, but all the sounds have a very unusual tempo to them, some growing over time and then shortening; sounds change and get re-manipulated, so getting everything cut well was challenging. I basically just went through second by second with the waveforms and matched sounds by eye. Once I got the technique down it moved quite quickly. I then got the idea to involve some found footage to kind of break apart the aesthetic a bit.

Of course, there’s a clear link here to Autechre’s Gantz Graf music video, the ur-video of all 3D music videos since. But then, there’s something really delightful about seeing those rhythms visualized when they’re produced live on turntables. The VJ in me just really wants to see the visuals as live performance. (Well, and to me, that’s easier to produce than the Cinema 4D edits!)

But it’s all a real good time at the audio/visual synesthesia experimental disco.

More:

Watch experimental turntablist Shiva Feshareki’s ‘V-A-C Moscow’ video [FACT]

https://www.shivafeshareki.co.uk/

https://resistbelfast.bandcamp.com/album/new-forms

Resist label


Live compositions on oscilloscope: nuuun, ATOM TM

The Well-Tempered vector rescanner? A new audiovisual release finds poetry in vintage video synthesis and scan processors – and launches a new AV platform for ATOM TM.

nuuun, a collaboration between Atom™ (raster, formerly Raster-Noton) and Americans Jahnavi Stenflo and Nathan Jantz, have produced a “current suite.” These are all recorded live – sound and visuals alike – in Uwe Schmidt’s Chilean studio.

Minimalistic, exposed presentation of electronic elements is nothing new to the Raster crowd, who are known for bringing this raw aesthetic to their work. You could read that as part punk ethos, part fascination with visual imagery, rooted in the collective’s history in East Germany’s underground. But as these elements cycle back, there’s now a fresh interest in working with vectors as a medium (see the link below, in fact). As we move from novelty to more refined technique, more artists are finding ways of turning these technologies into instruments.

And it’s really the fact that these are instruments – a chamber trio, in title and construct – that’s essential to the work here. It’s not just about the impression of the tech, in other words, but the fact that working on technique brings the different media closer together. As nuuun describe the release:

Informed and inspired by Scan Processors of the early 1970s such as the Rutt/Etra video synthesizer, “Current Suite No.1” uses the oscillographic medium as an opportunity to bring the observer closer to the signal. Through a technique known as “vector-rescanning”, one can program and produce complex encoded wave forms that can only be observed through and captured from analog vector displays. These signals modulate an electron-beam of a cathode-ray tube where the resulting phosphorescent traces reveal a world of hidden forms. Both the music and imagery in each of these videos were recorded as live compositions, as if they were intertwined two-way conversations between sound and visual form to produce a unique synesthetic experience.
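nuuun’s chain is analog, but the underlying principle of oscillographics – two audio channels driving the X and Y deflection of a scope, so that the waveform is the image – is easy to sketch digitally. Assuming a stereo output with the left channel as X and the right as Y (a convention, not anything from nuuun’s setup), a sine against a cosine traces a circle:

```python
import math

SAMPLE_RATE = 48000  # samples per second, per channel

def xy_circle(freq, seconds):
    """Stereo frames for an XY-mode oscilloscope: the left channel is the
    X deflection and the right channel is Y, so a sine against a cosine
    at the same frequency traces a circle on the screen."""
    n = int(SAMPLE_RATE * seconds)
    frames = []
    for i in range(n):
        phase = 2 * math.pi * freq * i / SAMPLE_RATE
        frames.append((math.sin(phase), math.cos(phase)))  # (X, Y)
    return frames

frames = xy_circle(440, 0.01)
x, y = frames[100]
print(round(x * x + y * y, 6))  # every point sits on the unit circle -> 1.0
```

Writing those frames to a stereo audio file and feeding them to a scope in XY mode (or a software emulator) displays the figure; more elaborate encoded waveforms draw more elaborate forms.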


Even with lots of prominent festivals, audiovisual work – and putting visuals on equal footing with music – still faces an uphill battle. Online music distribution isn’t really geared for AV work; it’s not even obvious how audiovisual work is meant to be uploaded and disseminated apart from channels like YouTube or Vimeo. So it’s also worth noting that Atom™ is promising that NN will be a platform for more audiovisual work. We’ll see what that brings.

Of course, NOTON and Carsten Nicolai (aka Alva Noto) already have a rich fine art / high-end media art career going, and “raster-media,” launched by Olaf Bender in 2017, describes itself as a “platform – a network covering the overlapping border areas of pop, art, and science.” We have at least seen raster continue to present installations and other works, extending their footprint beyond the usual routine of record releases.

There’s perhaps not a lot that can be done about the fleeting value of music in distribution, but then music has always been ephemeral. Let’s look at it this way – for those of us who see sound as interconnected with image and science, any conduit to that work is welcome. So watch this space.

For now, we’ve got this first release:

http://atom-tm.com/NN/1/Current-Suite-No-IVideo/

Previously:

Vectors are getting their own festival: lasers and oscilloscopes, go!

In Dreamy, Electrified Landscapes, Nalepa ‘Daytime’ Music Video Meets Rutt-Etra


A haunting ambient sci-fi album about a message from Neptune

Latlaus Sky’s Pythian Drift is a gorgeous ambient concept album, the kind that’s easy to get lost in. The set-up: a probe discovered on Neptune in the 26th Century will communicate with just one woman back on Earth.

The Portland, Oregon-based artists write CDM to share the project, which is accompanied by this ghostly video (still at top). It’s the work of Ukrainian-born filmmaker Viktoria Haiboniuk (now also based in Portland), who composed it from three years’ worth of 120mm film images.

Taking in the album even before checking the artists’ perspective, I was struck by the sense of post-rocket-age music about the cosmos. In a week when images of Mars’ surface spread as soon as they were received, to a generation that grew up as the first native space-faring humans, space is no longer alien and unreachable, but present.

In slow-motion harmonies and long, aching textures, this seems to be cosmic music that sings of longing. It calls out past the Earth in hope of some answer.

The music is the work of duo Brett and Abby Larson. Brett explains his thinking behind this album:

This album has roots in my early years of visiting the observatory in Sunriver, Oregon with my Dad. Seeing the moons of Jupiter with my own eyes had a profound effect on my understanding of who and where I was. It slowly came to me that it would actually be possible to stand on those moons. The ice is real; it would hold you up. And looking out, your black sky would be filled with the swirling storms of Jupiter’s upper clouds. From the ice of Europa, the red planet would be 24 times the size of the full moon.

Though these thoughts inspire awe, they begin to chill your bones as you move farther away from the sun. Temperatures plunge. There is no air to breathe. Radiation is immense. Standing upon Neptune’s moon Triton, the sun would begin to resemble the rest of the stars as you faded into the nothing.

Voyager 2 took one of the only clear images we have of Neptune. I don’t believe we were meant to see that kind of image. Unaided, our eyes are only prepared to see the sun, the moon, and the stars. Looking into the blue clouds of the last planet, you cannot help but think of the black halo of space that surrounds the planet and extends forever.

I cannot un-see those images. They have become a part of human consciousness. They are the dawn of an unnamed religion. They are more powerful and more fearsome than the old God. In a sense, they are the very face of God. And perhaps we were not meant to see such things.

This album was my feeble attempt to make peace with the blackness. The immense cold that surrounds and beckons us all. Our past and our future.

The album closes with an image of standing amidst Pluto’s Norgay mountains. Peaks of 20,000 feet of solid ice. Evening comes early in the mountains. On this final planet we face the decision of looking back toward Earth or moving onward into the darkness.

Abby with pedals. The BOSS RC-50 Loop Station (predecessor to today’s RC-300), Strymon BlueSky, and Electro-Harmonix Soul Food stand out.

Plus more on the story:

Pythia was the actual name of the Oracle at Delphi in ancient Greece. She was a real person who, reportedly, could see the future. This album, “Pythian Drift,” is only the first of three parts. In this part, the craft is discovered and Dr. Amala Chandra begins a dialogue with it. Dr. Chandra then begins publishing papers that rock the scientific world and reformulate our understanding of mathematics and physics. There is also a phenomenon called Pythian Drift that begins to spread from the craft. People begin to see images and hear voices, prophecies. Some prepare for an interstellar pilgrimage to the craft’s home galaxy in Andromeda.

Part two will be called Black Sea. Part three will be Andromeda.

And some personal images connected to that back story:

Brett as a kid, with ski.

Abby beside a faux fire.

More on the duo and their music at the Látlaus Ský site:

http://www.latlaussky.com/

Check out Viktoria’s work, too:

https://www.jmiid.com/


In gorgeous ETHER, a handmade micro lens brings cymatics closer

Sound is physical, but we don’t often get to see that physicality. In this gorgeous video for Thomas Vaquié, directed by Nico Neefs, those worlds of vibrations explode across your screen. It’s the latest release from ANTIVJ, and it’s spellbinding.

The sounds really do generate the visuals here, from terrain generated out of a spectrogram analysis of the waveform, to footage of metal powder animated by sonic vibrations. A homemade micro lens provides the optics.

https://www.youtube.com/watch?v=aK0BXH7zu-M

Everything in this video was made using the sound waves of the track Ether.
Equipped with a home-made micro lens, a camera travels inside physical representations of the musical composition, from a concrete mountain built from the spectrogram of the music, to eruptions of metal powder caused by rhythmic impulsions.

(Impulsion is a word; look it up! I had to do so.)
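The spectrogram-to-terrain idea is simple enough to sketch in code. Here’s a minimal Python illustration (my own, not the tool the filmmakers used): the magnitude of a short-time Fourier transform becomes a 2D heightmap, with time along one axis and frequency along the other.

```python
import numpy as np

# A sketch of "terrain from a spectrogram": the STFT magnitude of a waveform
# forms a 2D heightmap you could mill into concrete or render as a landscape.
def spectrogram_heightmap(signal, win=256, hop=128):
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))  # one spectrum per frame
    return mags / mags.max()                    # normalize heights to 0..1

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)  # a stand-in for the actual track
terrain = spectrogram_heightmap(tone)
print(terrain.shape)  # (time frames, frequency bins)
```

Each row of the result is one moment in time; carving those rows side by side gives the “concrete mountain” effect described above.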

Still from the video.

Nico Neefs is the director, working with images he created with Corentin Kopp. It’s set to music from Belgian producer Thomas Vaquié’s new album Ecume, on Antivj Recordings. That imprint has for over a decade been a label for audiovisual creations across media – release, installation, performance. Simon Geilfus developed the tool for visualization.

They’ve employed the same techniques to make a very attractive physical release. The image you see in the artwork is cast from a concrete mold. For a limited edition box set, they’re producing 33cm x 33cm plates cast from that mold in dark resin. And it’s ready to mount to a wall if you choose; hardware included. Or if you feel instead like you own enough things, there’s a digital edition.

Ultra-limited handmade physical release.

Concrete mold.

Concrete mold; detail.

The whole album is beautiful; I’m especially fond of the bell-like resonances in the opening piece. It’s a sumptuous sonic environment, full of evocative sound designs that rustle and ring in easy, organic assemblies, part synthetic, part string. Those then give way to warped, broken grooves that push forward. (Hey, more impulsion – like a horse.)

The music was repurposed from installations and art contexts:

These are all derivations of compositions for site-specific and installation projects, the original pieces having been created as a response to place and space, to light and architecture, to code and motion. Now separated and transformed from their original context, the music takes on an independent existence in these new realisations.

That does lend the whole release an environmental quality – spaces you can step in and out of – but it is nonetheless emotionally present. There’s impact, listening top to bottom, enough so that you might not immediately guess the earlier context. And the release is fully consistent and coherent as a whole. (It is very possible you heard an installation here or there. Vaquié has produced compositions for Centre Pompidou-Metz, the Old Port of Montreal’s metallic conveyor tower, Songdo in South Korea, Oaxaca’s ethnobotanical gardens, and Hala Stulecia, Poland’s huge concrete dome.)

And there’s thoroughly fine string writing throughout – with a sense that strings and electronic media are always attuned to one another.

Cover artwork.

Thomas Vaquié.

Poetic explanation accompanies the album:

Ether embodies the world that exists above the skies.
It is the air that the gods breathe.
It is that feeling of dizziness,
that asphyxiation that we feel when faced with immensity.

Full video credits:

Music by Thomas Vaquié
Video directed by Nico Neefs
Images by Nico Neefs & Corentin Kopp
Edit & Post-production by Nico Neefs
Video produced by Charles Kinoo for Less Is More Studio and Thomas Vaquié
Filmed at BFC Studio, Brussels 2018.

More, including downloads / physical purchases:

https://thomasvaquie.bandcamp.com/

Plus:
www.thomasvaquie.com
www.antivj.com


Max 8: Multichannel, mappable, faster patching is here

Max 8 is released today, as the latest version of the audiovisual development environment brings new tools, faster performance, multichannel patching, MIDI learn, and more.

Max is now 30 years old, with a direct lineage to the beginning of visual programming for musicians – creating your own custom tools by connecting virtual cables on-screen instead of typing in code. Since then, its developers have incorporated additional facilities for other code languages (like JavaScript), different data types, real-time visuals (3D and video), and integrated support inside Ableton Live (with Max for Live). Max 8 actually hits all of those different points with improvements. Here’s what’s new:

MC multichannel patching.

It’s always been possible to do multichannel patching – and therefore support multichannel audio (as with spatial sound) – in Max and Pure Data. But Max’s new MC approach makes this far easier and more powerful.

  • Any sound object can be made into multiples, just by typing mc. in front of the object name.
  • A single patch cord can incorporate any number of channels.
  • You can edit multiple objects all at once.

So, yes, this is about multichannel audio output and spatial audio. But it’s also about way more than that – and it addresses one of the most significant limitations of the Max/Pd patching paradigm.

Polyphony? MC.

Synthesis approaches with loads of oscillators (like granular synthesis or complex additive synthesis)? MC.

MPE assignments (from controllers like the Linnstrument and ROLI Seaboard)? MC.

MC means the ability to use a small number of objects and cords to do a lot – from spatial sound to mass polyphony to anything else that involves multiples.

It’s just a much easier way to work with a lot of stuff at once. That was present in the open source, code-based environment SuperCollider, for instance, if you were willing to put in some time learning SC’s language. But it was never terribly easy in Max. (Pure Data, your move!)
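Max’s mc objects are patched visually, not typed, but the underlying idea translates to code: one expression fans out across many channels at once. A rough numpy analogy (all names here are illustrative, not Max API):

```python
import numpy as np

# A rough analogy to Max's mc. wrappers: one expression drives N channels.
SR = 48000

def mc_cycle(freqs, dur=0.1, sr=SR):
    """One sine oscillator per frequency – an 'mc.cycle~' of sorts."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * np.outer(freqs, t))  # shape: (channels, samples)

bank = mc_cycle(np.linspace(100, 800, 8))  # a bank of 8 oscillators at once
mix = bank.mean(axis=0)                    # downmix to mono
print(bank.shape)  # (8, 4800)
```

The point is the shape of the data: one object, one “patch cord” (the array), any number of channels inside it.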

MIDI mapping

Mapping lets you MIDI-learn from controllers, keyboards, and whatnot, just by selecting a control and moving your controller.

Computer keyboard mappings work the same way.

The whole implementation looks very much borrowed from Ableton Live, down to the list of mappings for keyboard and MIDI. It’s slightly disappointing they didn’t cover OSC messages with the same interface, though, given this is Max.
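For the curious, the logic behind MIDI learn is easy to sketch. This is a generic Python illustration of the pattern, not Max’s actual implementation: while a parameter is “learning,” the next incoming CC message gets bound to it.

```python
# A generic MIDI-learn sketch (not Max's code): the next incoming CC message
# after learn() is called gets bound to the selected parameter.
class MidiLearn:
    def __init__(self):
        self.mappings = {}    # (channel, cc) -> parameter name
        self.learning = None  # parameter currently waiting for a message

    def learn(self, param):
        self.learning = param

    def handle_cc(self, channel, cc, value):
        if self.learning is not None:
            self.mappings[(channel, cc)] = self.learning
            self.learning = None
        param = self.mappings.get((channel, cc))
        if param:
            return param, value / 127.0  # scale 7-bit CC to 0..1
        return None

m = MidiLearn()
m.learn("filter.cutoff")       # user selects a control in the UI...
m.handle_cc(0, 74, 64)         # ...and the first CC received binds to it
print(m.handle_cc(0, 74, 127))  # ('filter.cutoff', 1.0)
```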

It’s faster

Max 8 has various performance optimizations, says Cycling ’74. In particular, look for 2x (Mac) to 20x (Windows) faster launch times, 4x faster patch loading, and performance enhancements in the UI, Jitter, physics, and objects like coll.

Also, Max 8’s Vizzie library of video modules is now OpenGL-accelerated, which additionally means you can mix and match with Jitter OpenGL patching. (No word yet on what that means for OpenGL deprecation by Apple.)

Node.js

This is, I suspect, a pretty big deal for a lot of Max patchers who moonlight in some JavaScript coding. Node.js support lets you run Node applications from inside a patch – for extending what Max can do, running servers, connecting to the outside world, and whatnot.

There’s full npm support, which is to say all the code shared via that package manager is now available inside Max.

Patching works better, and other stuff that will make you say “finally”

Actually, this may be the bit that a lot of long-time Max users find most exciting, even despite the banner features.

Patching is now significantly enhanced. You can patch and unpatch objects just by dragging them in and out of patch cords, instead of doing this in multiple steps. Group dragging and whatnot finally works the way it should, without accidentally selecting other objects. And you get real “probing” of data flowing through patch cords by hovering over the cords.

There’s also finally an “Operate While Unlocked” option so you can use controls without constantly locking and unlocking patches.

There’s also a refreshed console, color themes, and a search sidebar for quickly bringing up help.

Plus there’s external editor support (coll, JavaScript, etc.). You can use “waypoints” to print stuff to the console.

Also essential:

  • High-definition and multitouch support on Windows
  • UI support for the latest macOS
  • Plug-in scanning

And of course a ton of new improvements for Max objects and Jitter.

What about Max for Live?

Okay, Ableton and Cycling ’74 did talk about “lockstep” releases of Max and Max for Live. But… what’s happening is not what lockstep usually means. Maybe it’s better to say that the releases of the two will be better coordinated.

Max 8 today is ahead of the Max for Live that ships with Ableton Live. But we know Max for Live incorporated elements of Max 8, even before its release.

For their part, Cycling ’74 today say that “in the coming months, Max 8 will become the basis of Max for Live.”

Based on past conversations, that means that as much functionality as can practically be delivered in Max for Live will be there. And with all these Max 8 improvements, that’s good news. I’ll try to get more clarity on this as information becomes available.

Max 8 now…

There’s a 30-day free trial. Upgrades are US$149; the full version is US$399, plus subscription and academic discount options.

Full details on the new release are neatly laid out on Cycling’s website today:

https://cycling74.com/products/max-features


This light sculpture plays like an instrument, escaped from Tron

Espills is a “solid light dynamic sculpture” made of laser beams, laser scanners, and robotic mirrors. And it creates a real-life effect that would make Tron proud.

The work, made public this month but part of ongoing research, is the creation of multidisciplinary Barcelona-based AV team Playmodes. And while large-scale laser projects are becoming more common in audiovisual performance and installation, this one stands out both for being especially expressive and for being a heavily DIY effort. While dedicated vendors sell sophisticated, expensive off-the-shelf solutions, the Playmodes crew went a bit more punk and designed and built many of their own components: robotic mirrors, light-drawing tools, synths, scenery, and even the laser modules themselves. They hacked into existing DMX light fixtures, swapping mirrors in for lamps. And they built their own microcontroller solutions for controlling the laser diodes via Art-Net and DMX.
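For a sense of what driving hacked fixtures over Art-Net involves: an ArtDMX packet is just a small UDP datagram, a fixed header in front of up to 512 bytes of DMX channel data. A minimal Python sketch following the public Art-Net spec – the IP address and channel assignments below are made up, and this is not Playmodes’ code:

```python
import socket
import struct

# Build a minimal ArtDMX packet per the public Art-Net spec.
def artdmx_packet(universe, dmx, sequence=0):
    data = bytes(dmx) + (b"\x00" if len(dmx) % 2 else b"")  # pad to even length
    assert len(data) <= 512
    return (b"Art-Net\x00"
            + struct.pack("<H", 0x5000)   # OpCode: ArtDMX (little-endian)
            + struct.pack(">H", 14)       # protocol version (big-endian)
            + bytes([sequence, 0])        # sequence, physical port
            + struct.pack("<H", universe) # SubUni + Net bytes
            + struct.pack(">H", len(data))# data length (big-endian)
            + data)

pkt = artdmx_packet(0, [255, 0, 128])  # e.g. pan, tilt, diode intensity
# To actually send it (address is hypothetical):
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(pkt, ("192.168.1.50", 6454))
print(len(pkt))  # 18-byte header + 4 bytes of padded DMX data = 22
```

Art-Net listens on UDP port 6454; everything interesting happens in those channel bytes, refreshed many times per second.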

And, oh yeah, they have their own visual programming framework, OceaNode, a kind of home-brewed solution for imagining banks of modulation as oscillators, a visual motion synth of sorts.

It’s in progress, so this is less a TouchDesigner rival than an interesting homebrew project, but you can toy around with the open source software. (It looks like you might need to do some work to get it to build on your OS of choice.)

https://github.com/playmodesStudio/ofxoceanode

Typically, too, visual teams work separately from music artists. But adding to the synesthesia you feel as a result, they coupled laser motion directly to sound, modding their own synth engine in Reaktor. (OceaNode sends control signals to Reaktor via the latter’s much-improved OSC implementation.)

They hacked that synth engine together from Santiago Vilanova’s PolyComb – a beautiful-sounding set of resonating tuned oscillators (I didn’t know this one; it’s playing now!):

https://www.native-instruments.com/es/reaktor-community/reaktor-user-library/entry/show/9717/

Oh yeah, and they made a VST plug-in to send OSC from Reaper, so they can automate OSC envelopes using the Reaper timeline.
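OSC itself is simple enough that the control messages flying between these tools can be built by hand. A Python sketch per the OSC 1.0 spec – the address pattern here is invented, not one OceaNode or Reaktor actually uses:

```python
import struct

# Hand-rolled OSC message encoding, per the OSC 1.0 spec: null-terminated
# strings padded to 4 bytes, then big-endian float32 arguments.
def osc_pad(b):
    return b + b"\x00" * (4 - len(b) % 4)  # null-terminate and pad to 4 bytes

def osc_message(address, *floats):
    tags = "," + "f" * len(floats)  # type tag string, e.g. ",f"
    return (osc_pad(address.encode())
            + osc_pad(tags.encode())
            + b"".join(struct.pack(">f", v) for v in floats))

msg = osc_message("/poly/1/pitch", 0.5)  # hypothetical address pattern
# A UDP sendto(msg, (host, port)) would deliver this to an OSC listener.
print(len(msg))  # 24
```

Sending a stream of these per frame is all an “OSC envelope” automated from a DAW timeline really is.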

OceaNode, visual programming software, also a DIY effort by the team.

… and the DIY OSC VST plug-in, to allow easy automation from a DAW (Reaper, in this case).

It’s really beautiful work. You have to notice that the artists making the best use of laser tech – see also Robert Henke and Christopher Bauder here in Berlin – are writing some of their own code in order to gain full control over how the laser behaves.

I think we’ll definitely want to follow this work as it evolves. And if you’re working in similar directions, let us know.
