Someone replicated a GPU in JavaScript – and it runs in characters in a terminal

Imagine the love child of character art and modern GPUs. Okay, probably you totally can’t imagine that, but someone did it anyway, entirely in JavaScript.

And I do mean replicated a GPU. The appropriately dubbed South Korean user “sinclairzx81” (a nod, presumably, to Sinclair’s famously minimal ZX81) built this complete with a scene graph, the usual math libraries, and programmable shaders.

Except instead of those programmable shaders running on a GPU, they run natively in JavaScript and render to characters in standard Windows, Mac, and Linux terminals.

It seems crazy, but this does demonstrate the … uh … actually, it really is completely crazy, but it is very cool. And it does genuinely output all of this via stdout in Node.js. The author claims the reasoning is “to see how far one could reasonably push JavaScript performance,” but “just because I could” seems as likely an explanation.
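To make the technique concrete, here’s a minimal sketch of the idea in Node.js – not Zero’s actual API, just an illustration of running a “fragment shader” function once per character cell and writing the result to stdout (the shader function and ASCII ramp here are invented for the example):

    // Minimal sketch (not Zero's actual API): evaluate a "fragment shader"
    // per terminal character cell, map its brightness to an ASCII ramp,
    // and write each frame to stdout.
    const ramp = ' .:-=+*#%@'; // dark -> bright

    // Toy "fragment shader": brightness 0..1 for a UV coordinate at time t.
    const shader = (u, v, t) =>
      0.5 + 0.5 * Math.sin(10 * Math.hypot(u - 0.5, v - 0.5) - t); // rings

    function frame(t) {
      const w = process.stdout.columns || 80;
      const h = (process.stdout.rows || 24) - 1;
      let out = '\x1b[H'; // ANSI escape: cursor home (cheaper than clearing)
      for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
          const b = shader(x / w, y / h, t);
          out += ramp[Math.min(ramp.length - 1, Math.floor(b * ramp.length))];
        }
        out += '\n';
      }
      process.stdout.write(out);
    }

    let t = 0;
    setInterval(() => frame(t += 0.1), 33); // roughly 30 fps

Zero layers the real pipeline – vertex shaders, depth buffering, perspective-correct texturing – on top of that same basic output path.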

Check these features in the free library, dubbed Zero, I guess in reference to the number of real GPUs involved:

  • Programmable vertex and fragment shaders (in JavaScript)
  • Perspective Z-correct texture mapping
  • Per-pixel depth buffering
  • Adaptive resolution on terminal resize (TTY only)
  • Matrix and vector math libraries
  • A scene graph
  • Support for Windows Command Prompt, PowerShell, and Linux terminals

It is licensed under an MIT license, so you could build on this. At the very least, I guess the OLED on your next hardware synth has no excuse not to render something interesting.

https://github.com/sinclairzx81/zero


A poetic choreographic vision of change, in Hercules & Love Affair video

It’s called “dance music,” but it’s rare to find music, movement, and emotion captured to film. That’s what makes this new music video must-watch.

And it’s all a meditation on “change” – that theme, and the feelings around it, I think make this video something all of us can relate to:

Videos for dance tracks have become so pedestrian that I almost dread anything mentioning a video or video premiere in my inbox. The difference here: Andrew Butler’s Hercules & Love Affair is already a project that integrates dance, and here Butler works as director as well as music producer.

Butler (who you see make cameos in the video) teamed up with Joie Iacono, another talented DJ with deep musical sensibilities, to co-produce and co-direct the video. It’s polyglot meets polyglot, each of them having transplanted themselves from central roles in the New York scene to new life in Europe (Mr. Butler to Belgium, Ms. Iacono to Berlin). Joie’s photographic imagination pairs perfectly with Andrew’s groove and dance fantasies.

And the results are simple, but arresting – concise, motivated gestures with occasional stuttered edits to match the music. This sort of thing can quickly become cliché, but veteran choreographer Joshua Hubbard has a uniquely well-suited language. He has worked with the likes of Elton John, but that’s not the main thing here – his choreography is irregular, contorted, yet still relentlessly lyrical. The result is an ability to make haiku-like mysteries out of simple moments.

And well, watch him improvise to see what I mean:

It’s a shame dance music videos can’t always be on this level, so it’s worth taking note when they are.

The track is produced by Andy Butler with Alec Storey as Hercules & Love Affair, and the video also features sculptures by Egon Van Herreweghe and Thomas Min.

The EP is out on November 1. More at PAPER, who premiered this:

https://www.papermag.com/hercules-love-affair-change-2640928744.html?rebelltitem=3#rebelltitem3

Hopefully we’ll talk choreography more soon, as it plays a central role in new audiovisual work by Alessandro Cortini, which I caught at Slovenia’s Sonica Festival.


Quick! This ffmpeg cheat sheet solves your video, audio conversion needs, for free

Video, audio, convert, extract – once, these tasks were easy with QuickTime Pro, but now it’s gone. ffmpeg to the rescue – any OS, no money required.

It’s Friday, some deadlines (or the weekend) are looming, so seems as good a time as any to share this.

ffmpeg is a free, powerful tool for Mac, Windows, and Linux, with near magical abilities to convert audio and video in all sorts of ways. Even though it’s open source software with a lineage back to the year 2000, it very often bests commercial tools. It does more, better, and faster in a silly number of cases.

There’s just one problem: getting it to solve a particular task often involves knowing a particular command line invocation. You could download a graphical front end, but odds are that’ll just slow you down. So in-the-know media folks invariably make collections of little code bits they find useful.

Coder Jean-Baptiste Jung has saved you the trouble, with a cheat sheet of the most useful commands. And these bear a striking resemblance to some of the stuff you used to be able to do in QuickTime Pro before Apple killed it.
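A few representative one-liners, of the kind you’ll find collected there (file names are placeholders; the flags are standard ffmpeg options):

    # Convert container and codecs (H.264 video, AAC audio):
    ffmpeg -i input.mov -c:v libx264 -crf 23 -c:a aac output.mp4

    # Extract the audio track without re-encoding:
    ffmpeg -i input.mp4 -vn -c:a copy output.m4a

    # Trim 30 seconds starting at 0:10, without re-encoding (fast, lossless):
    ffmpeg -ss 00:00:10 -i input.mp4 -t 30 -c copy output.mp4

    # Convert WAV to 320 kbps MP3:
    ffmpeg -i input.wav -b:a 320k output.mp3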

19 FFmpeg Commands For All Needs [CatsWhoCode]

And on GitHub: https://gist.github.com/protrolium/e0dbd4bb0f1a396fcb55

There are some particularly handy utilities there involving audio, which is where tools like Adobe’s subscription-only commercial options often fail. (Not to mention Adobe is proving it will cut off some localities based on politics – greetings, Venezuelan readers.)

It’s great stuff. But if you see something missing, put it here, and we’ll make our own little CDM guide.


macOS Catalina is here; Final Cut update, Logic compatibility, who should wait

macOS Catalina is here as a free update today, along with updated information on Apple’s own pro apps. But music users should continue to delay upgrading for now.

I’ve already written about what changes in macOS Catalina, and why many DAWs, plug-ins, and hardware drivers will be incompatible without updates. You can read that full deep dive, which also includes resources on how to back up your system if you do want to upgrade, and how to retrieve previous macOS versions in case you want to upgrade to something like Mojave instead. (Mojave is now very stable and supported by most of the software our readers and developers use, meaning a Mac upgrade that lags Apple’s annual upgrade cadence may make sense.) To catch up, check that article here:

The short version: Catalina adds security requirements for installers and software, and removes support for 32-bit code.

This isn’t an argument about whether or not those changes make sense – generally speaking, they do. But basically, if you have any need for stability and compatibility for critical creative work, you probably shouldn’t upgrade today. (And even if you do, you absolutely should back up everything first, and plan in advance how you would roll back the OS if needed.)

In fact, nothing has changed as far as the compatibility situation described in the article. Some developers do have updates ready for their latest software, as in the case of Ableton Live 10.

Most don’t, though, and it might only take one hardware driver or piece of software to ruin your day. Steinberg, for instance, referred back to their September 24 announcement and tells CDM they’ll need more time. That illustrates just how fragile this can be – they’re working with Apple on issues involving their Dorico software and the Soft-eLicenser.

There’s also a lot of new technology in this update, meaning that if you really want a stable release, you need to wait anyway, if only to give developers ample time to test the final build.

Start scratching off those lotto tickets, and this could be your desk. Final Cut Pro on the new Apple Mac Pro and matching display.

Apple Pro Apps updates

Here’s where I do have some news – Apple’s own pro apps are verified as compatible. (That isn’t necessarily a given, I might add.)

Apple says Logic Pro X and Motion are each compatible as of their most recent updates – Logic’s latest came in July, and Motion in March.

Now note, that does not mean you should expect entirely problem-free operation with Logic. Security changes are such that you could encounter unexpected compatibility issues with plug-ins – we simply can’t know until we have more real world testing data. You can help provide those tests, but you might not want to do it on your one and only production machine – not unless you make a separate external boot drive to run Catalina.

You’ll see in particular a significant notice in Motion that indicates that Apple has removed some deprecated media file support: “Detects media files that may be incompatible with future versions of macOS after Mojave.” (That may be related to 32-bit removals, but yeah, you might want to keep one machine around running an older OS, generally speaking.)

Logic release notes: https://support.apple.com/en-us/HT203718

Motion release notes: https://support.apple.com/en-us/HT202203

Final Cut Pro actually gets a dedicated update, version 10.4.7, optimized for the newest Apple hardware and software tech. You don’t need Catalina to run this latest FCP – Mojave 10.14.6 is the minimum – but you do get some additional functionality unlocked if you pair the latest Final Cut with the latest macOS.

What’s new:

  • A new engine powered by Apple’s Metal graphics API that the company says delivers enhanced performance
  • Specific Mac Pro optimizations, as expected, and support for Apple’s Pro Display XDR hardware
  • Support for the Mac Pro’s Afterburner card
  • Specific support for Sidecar, which lets you use your iPad as a second display (wired or wireless)
  • High dynamic range (HDR) video grading, with color mask and range isolation tools (this may actually be the coolest feature, hidden in the fine print)
  • HDR video is now tone-mapped to compatible displays on Catalina only – and that’s across Motion, Final Cut, and Compressor
  • Select which internal or external GPU you want to use

Apple claims a 20% performance gain for editors on the current 15-inch MacBook Pro and 35% on the iMac Pro, versus the previous release.

The important thing here, though, is that you get most of this with macOS Mojave. So I think there’s no huge rush to update – give this one some time so you can, for instance, test out on an external drive before you commit your production system to an OS that could ruin things. And that’s what pros should do anyway.

As always, this is a free update.

https://www.apple.com/newsroom/2019/10/final-cut-pro-x-update-introduces-new-metal-engine-for-increased-performance/

If you have further compatibility information (hello, developers), do let us know.

More on what’s new in macOS:

https://www.apple.com/newsroom/2019/10/macos-catalina-is-available-today/


Video premiere: Caustic alien dreams and handmade electronics from Balfa

Balfa sucks us into dystopian reveries, as we premiere a new video – and see some of the home-built instruments making such wonderfully acerbic sounds.

First, let’s set the mood with the video, which pairs music artist Balfa with artist/animator Maria Mendes of Portugal. Whatever is rendering in that skin, that’s more or less how my body feels if I give it over to these sounds.

It’s for the track, from this month’s debut LP Perfecta Analogía De La Decadencia. (I bet you got the translation of that one. Full track-by-track album description with extended commentary is on his site.)

Balfa is a Spanish artist who has crafted his debut LP, he says, as an autobiographical journey through Berlin over his four-year stay here. (It’s true – those screams of agony you hear, that’s exactly the sound my soul makes when I’m stuck just before closing on a Saturday night in the produce section of the Wrangelstrasse Lidl. But I digress.)

What you get is the raw, exposed crackle and growl of electronics, giving way to abstract broken beats and fragmented landscapes. And then, unexpectedly, he’ll break into a furious, hyperactive groove, in between caves of ambient sound. Those occasional repetitions are apparently now enough to qualify this as “techno” – but frankly, I’m totally okay with the ongoing dissolution of the term, if it means more experimentation.

The album has the unedited directness of late-night studio psychosis, but it’s always engaging and inventive. The full stream is on HATE; there’s a vinyl LP that spills over onto digital with a couple of extra tracks, out earlier this month. Delilirium Candidum of Mexico made the artwork, in a sort of naive-folk cyberpunk style:

The sounds are unpredictable, and show this love of electricity in part because of Balfa’s extensive DIY work. Balfa has been building his own instruments, with a decidedly punk approach (as we like around these here parts at CDM).

Most interesting is this “Yafurula Generator, Revelwaver & Clock”:

There are three separate modules here. The main one is the Yafurula Generator:

Yafurula Generator is the main piece – a sequencer that contains the power supply for the whole synthesizer. It’s built inside an old German cigarette box I bought in the studio of a sound artist who makes synthesizers professionally and needed to get rid of a lot of stuff. Its sloping shape makes it ideal for working with patch cables.

With 16 connections and 8 cables to patch them in many ways, it creates sequences of different lengths. It’s not an 8-step sequencer: the pattern length depends on the number of cables connected to each other.

The CLOCK is just that – with a single knob. And there’s a wave generator.

Actually burning out the oscillators is part of the appeal. He explains:

The best part of this one is the sound produced when it gets a pattern from the sequencer. There’s a special knob position that controls the first oscillator’s frequency and produces glitches when the resistor is heating up too much. The noise produced is fucking great, but doesn’t last long. Each time this happens, the resistor heats up faster and the noise gets shorter and shorter, until the resistor is burned and damaged and needs to be exchanged for a new one.

If you ever share a studio with Balfa, and wonder why your resistors suddenly start disappearing… well…

There’s also this synth built into the shell of a PlayStation controller:

More on that: https://www.blf-lab.com/en/discover-the-album/playstation-synthesizer/

His live performance – as recently at Eufonic festival – is all about handmade devices and improvisation. On the album, it’s nice hearing those untethered textures mixed with song structures and sounds – a compelling split.

I also quite like this little reveal they did for the cover artwork by Delilirium Candidum:

If this goes anywhere, you can say you heard it here first – this is BLF001. Now that he’s back in Spain, Balfa promises “BLF Lab” will be an outlet for more of this sort of endeavor:

Balfa establishes his BLF Lab project not only as a space for creative experimentation through craft practices, but also as a base for making music with handcrafted machines and various materials.

We’ll be looking forward to what transpires, Balfa.

www.blf-lab.com

More pictures of his DIY work:

Photos: Maria Louceiro, Marta Rubio.


Loraine James’ sound is intense, mixed up, and essential

The newest release for Hyperdub, in its untethered torrent of distorted rhythm, feels personal and liberated – and today, gets one of the more significant recent video releases, to boot.

It’s a fine line to tread, being uncensored but precise, irregular but inevitable. But Kode9’s Hyperdub imprint has a solid track record of finding inventive grooves, and lately has been on a serious roll. “Sick 9” is intelligent and intimate all at once, as Loraine James stretches her exceptional rhythmic language from some deep center.

And I think for anyone wanting to liberate their own production voice, here’s something beautiful – even in the press statement, James says that as she navigated a “queer relationship … and the ups and downs,” music was a vehicle for expression she couldn’t find elsewhere. She writes: “A lot of the time I’m really scared in displaying any kind of affection in public…This album is more about feeling than about using certain production skills.”

There’s something encouraging about seeing a press statement where the artist says what she does: “I’m in love and wanted to share that in some way.”

So there’s a message for other artists: you can let that feeling out in the music, without worrying about how skilled others think it is, even when those feelings are hard to share in other ways. I mean, it’s obvious – it’s presumably why people make music – but it’s also obviously something we can all lose sight of.

‘Sick 9’ is a single now, emblazoned with her holding up a photo of her childhood estate flat, and it’s hard to stop repeating. (I would say something here about “this sick beat” but I don’t want to offend Taylor Swift’s lawyers):

You have to wait until the 20th for the whole release, but then today Loraine dropped her collaboration with rapper Le3 bLACK and an accompanying music video, with glitched-out, crushed beats underneath. It’s powerful stuff, an insistent cry:

The visuals are familiar UK drab and city tropes, but director Pedro Takahashi and DOP Liam Meredith find swooping, lyrical rhythm as handheld camera work makes you ever so slightly motion-sick and small. Le3 bLACK grunts with frustration as the motion’s adagio sways around James’ pounding broken repetition.

As the UK and America hang again between our dark pasts and future potential, this seems a time only music can really express.

You’ll be able to get the music on Bandcamp, natch:

https://lorainejames.bandcamp.com/

And more is on Hyperdub’s site:


Max gets more eye candy: GL3 for Jitter in beta

Calling all GPU instrumentalists – Cycling ’74 is now significantly beefing up Jitter’s graphics engine with support for the latest 3D hardware, in GL3. The result: more eye-popping eye candy in Max.

To be fair, Max is a little behind some more graphics-focused rivals when it comes to latest-and-greatest GPU support. But Max and Jitter present a unique, familiar workflow and features that are in some sense beyond compare. GL3 moves Jitter a bit more in the direction of supporting cool, new features. In exchange, you’re going to need a newer graphics card – integrated-only machines are out, as is older hardware (GPUs from about five years ago or so). But my guess is, if you care about these features, you’re running a newer machine, anyway.

GL3 is the new graphics engine. It’s beta for now, but you can already have a proper play. And because this runs in Max, it also means the possibility of running a Max for Live setup with these graphics inside Ableton Live, which isn’t possible with other environments.

What’s new, to try out in this public beta:

  • Modern GLSL language support
  • GPU instancing with jit.gl.multiple and jit.gl.mesh
  • 2D texture input directly to a jit.gl.cubemap face or 3D texture slice
  • Transform Feedback of vertex data via the new jit.gl.buffer and jit.gl.tf objects. This feature allows you to preserve vertex or geometry shader output for future use as geometry data on the graphics card, opening the door to some highly efficient particle simulations and new creative possibilities that we haven’t thought of yet.

Do you need to be an expert shader coder to take advantage of this stuff? Nope, and even those who are aren’t above a little copy-paste action with Shadertoy, a site with tons of dazzling demos of what clever GLSL code can put on the screen. (I showed how to do this in another stalwart creative development tool beloved by artists, Isadora – see link at bottom.)
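To give a flavor of what that copy-paste material looks like: a classic Shadertoy-style animated palette boils down to a few lines of GLSL. (A generic sketch only – Jitter actually wraps shaders in its own .jxs format with bound parameters, so the time and uv names here are stand-ins for whatever the host patch supplies.)

    // Generic Shadertoy-flavored fragment shader (illustrative only)
    uniform float time;  // assumed: supplied by the host patch
    varying vec2 uv;     // assumed: normalized 0..1 texture coordinate

    void main() {
        // cosine palette: three phase-shifted color channels
        vec3 col = 0.5 + 0.5 * cos(time + uv.xyx * 6.2832 + vec3(0.0, 2.0, 4.0));
        gl_FragColor = vec4(col, 1.0);
    }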

It’s nice to see the bleeding-edge stuff “just work” right in Jitter with GL3. That’s some impressive work.

Let us know if you make anything with this (or the previous Jitter engine). Sign up for the public beta:

https://cycling74.com/forums/gl3-package-public-beta



Making a stage powered by AI: inside GAMMA_LAB

What happens when you apply machine learning research to experimental sound – and then play live in front of a festival crowd? Recently, in St. Petersburg, Russia, we got to find out.

Gamma_LAB AI gathered a diverse international team of artists, musicians, musicologists, coders, and researchers, including people deep in the field of data science, working outside the arts. (One of our co-hosts was juggling her work in path finding for drones – so not the usual media art approach to AI!) The organizing team (of which I was the only non-Russian member this time):

  • Natalia Fuchs, Curator
  • Julia Reushenova, Curatorial assistant
  • Helena Nikonole, Conceptual artist
  • Peter Kirn, Facilitator
  • Natalia Soboleva, Facilitator
  • Dr. Konstantin Yakovlev, Scientific advisor

… plus our partners, including tech partner Mail.ru Cloud Solutions.

Step one: come together for a 12-day laboratory, bringing us to St. Petersburg in May. That was our chance to learn from one another, take in some lectures, and get started with experiments – everything from digging through how to reconstruct baroque music to generating new sounds for techno and experimental improvisational performance. Participants came from everywhere from Kenya to just around the corner:

Ksenia Guznova (RU), Ilya Selikhov (DE), Anastasia Tolchneva (RU), Michal Mitro (SK), Mar Canet (ESP), Ilia Symphocat (RU), Thomas Disley (USA), Nikita Prudnikov (RU), Tatiana Zobnina (RU), Joseph Kamaru (KE), Egor Zvezdin (RU), Alexander Kiryanko (RU), Katarina Melik-Ovsepian (RU)

Step two – the big leap – come back to St. Petersburg in July, and in a raw industrial space, make the whole thing work for an audience of festival goers. That led to a full program:

A packed audience, ending in techno sounds and industrial installation (by Stanislav Glazov). Photo: Alexander Sharoff.
  • A live media art performance by co-host Helena Nikonole (hacking into Internet of Things devices in real-time from the stage)
  • An instrumental group of baroque musicians mixing together historical scores and freshly-generated AI libretto and melodies (led by harpsichordist Katarina Melik-Ovsepyan)
  • A mixed acoustic-electronic improv group working with machine learning-produced sounds trained on various experimental sound sources (Ilya Selikhov, Michal Mitro, Symphocat, and KMRU)
  • Live-coding duo with an original AI-powered encoder/decoder, built on the artists’ own recordings (Monekeer + Lovozero)
  • Yours truly making live techno from generative text, AI-generated loops, and style transfer

And all of this took place in a peak-time, Saturday night festival program, set in an apocalyptic-looking ex-brewery just before its demolition, complete with immersive, responsive lasers and light by Stanislav Glazov (Licht Pfad studio, Berlin).

Here’s the improv group, working live with their materials:

Some audio examples:

Live coded, custom AI from the duo Monekeer + Lovozero.

I spoke with curator Natalia Fuchs (ARTYPICAL), who put together the program with us. Natalia is right now presenting the project to MUTEK Festival in Montreal, and has worked not only as a curator and co-producer of GAMMA, but as an advisor to the current AI show at the Barbican Centre.

CDM: First, let’s put the lab in context – there’s Surgeon on one stage, pounding out techno, but then there’s the results of this laboratory, too. What’s the place of GAMMA_LAB inside Gamma Festival?

Natalia: Gamma_LAB is the heart of experimentation at the festival. We launched the LAB in May 2019 – that was a [big responsibility] for us, because the LAB was self-funded, without any institutional or technological support. Only after the international open call was announced did we start to get attention from the different partners that [have now] joined the project. By “responsibility” here I mean our relationship with the artists and the audience – we knew that the experimental lab is just the first chapter, and the main message would be the conceptual AI stage at the festival.

What does it mean to have a lab inside a festival, to have a place that is making new stuff?

When programming the festival, we always feel like we want to represent local artists and quality local production. And Gamma_LAB is the cultural production unit for us. We focus the project on new artistic and curatorial solutions, on international collaborations – and that means we keep on track, stay connected, and help the community develop.

Baroque musicians – mixing historical scores with AI-constructed libretto and melodies – joined electronic artists. Photo: Alexander Sharoff.

What has been your relationship to AI as a curator – how would you relate your experience in GAMMA_LAB to your involvement with the Barbican show? CTM Festival? Other projects?

My connection to AI comes from my general research interests: I am a media art historian, and I am deeply concerned with new media research in relation to AI nowadays. I find it extremely stimulating and exciting – this enormous philosophical quest towards finding the big “other.” So as soon as I started to work closely with Helena Nikonole, conceptual artist of Gamma_LAB – being a peer for her “deus x mchn” project at Rodchenko School in Moscow and advising this artwork for the “Open Codes” exhibition at ZKM Centre for Art and Media in Karlsruhe – I was developing my curatorial approaches to art and AI. Then there were AI-related projects for the Barbican and CTM, but Gamma_LAB conceptually throws my practice back to the Polytech.Science.Art program that I [previously was] curating at the Polytechnic Museum in Moscow. The way we build the processes here – including theory, applied studies, performative aspects – brings the same strategy to the next level. In terms of scale, Gamma_LAB, with its connection to the Gamma Festival ([with its] 12,000 visitors), has definitely jumped much higher.

Obviously, we know AI is buzzing. But do you feel there’s something unique about this particular set of collaborations – was there a sense that something different happened? In the process itself? In the results?

The engagement of the technical team was very different at the LAB. I think that we found the way to collaborate between disciplines in a way that is interesting for both – technology professionals and media artists. It makes the project very strong, I believe.

Live improv group. Photo: Alexander Sharoff.

There’s lots of curiosity as always about doing projects in Russia. What would you say the relationship of the Russian scene to the international scene is like? I’m certainly grateful for the unique expertise we had; maybe people aren’t so aware of how much technical skill and talent is in our Russian network?

We had a long period of time when Russian science and technology were subject to control by the government, so internationalization of science is still happening very slowly in Russia. I don’t think it’s a question of belief, but a question of historical memory. International interest in the technical skill and talent in the Russian network is definitely very strong, but people outside the country know that it was rather impossible to have successful collaboration due to political restrictions. So at the moment, we all have to go through these borders. And Gamma_LAB also supports open communication in the field of science, technology, and arts.

The AI workshop began life with the exhibition and the workshop in Berlin – and now you’ve continued on to MUTEK. What’s the longer narrative there? And anything you can talk about as far as where this will go next, or what you hope will happen next with these projects?

The longer narrative is conducting proper artistic research on AI – but with curatorial supervision. Every international festival is interested in the development of cultural production, to expand contemporary culture strategies and be constantly engaged with audience feedback. The more serious collaborative experiences we have, the more profound cultural production is, the more meaningful art experiences can be delivered to the audience. We’re bringing this to the level of collaboration of the festival not only with artistic communities or applied technology makers, but with academic and scientific circles.

My hope is not related to any “next level,” though. I hope it will be the chance to develop a critical approach to AI and the arts. I think there’s no space where people can freely discover and form their own opinions on the AI matters [that compares with] the media art world and festival environments.

Helena, you got to approach joining our team from a different perspective, having also worked as a solo media artist. What was your experience?

Helena: The AI Stage… became, from my perspective, one of the most experimental and multi-genre stages at the festival. I showed my piece deus X mchn in the form of a performance, which had been presented before in a museum, in an extremely different environment. I thought it was interesting that, showing this piece at the festival, I wasn’t planning to serve the expectations of some part of the audience – but then I realized that this was actually the feature of the stage.

Helena’s project has seen exhibition presentations before – but now it also got to share a festival stage, live in front of an audience, with uncertain and near-realtime results.

All the performances, from baroque to noisy improvisation, from digital art to live coding, could be shown in a museum as well, and for me the AI Stage was the best example of how a music festival can become a space for new media art and sophisticated experiments in sound and music. And yes, the audience was just awesome! Of course, some of them were more used to going to raves than to centers for contemporary art, but even these people were genuinely interested in what was happening on the stage – so in the end, I was really surprised that sometimes a rave can also educate the audience.

https://gammafestival.ru/ [EN/RU]

http://artypical.com/

Photo: Nikita Grushevsky.


Inside the immersive kinetic laser sound world of Christopher Bauder, Robert Henke

Light and sound, space and music – Christopher Bauder and Robert Henke continue to explore immersive integrated AV form. Here’s a look into how they create, following a new edition of their piece Deep Web.

Here’s an extensive interview with the two artists by EventElevator, including some gorgeous footage from Deep Web.

Deep Web premiered in 2016 at CTM Festival, but it returned this summer to the space for which it was created, Berlin’s Kraftwerk (a former power plant). And because both artists are such obsessive perfectionists – in technology, in formal refinement – it’s worth this second trip.

Christopher (founder of kinetic lighting firm WHITEvoid) and Robert (also known as Monolake and co-creator of Ableton Live) have worked together for a long time. A decade ago, I got to see (and document) ATOM at MUTEK in Montreal, which in some sense would prove a kind of study for a work like Deep Web. ATOM tightly fused sound and light, as mechanically-controlled balloons formed different arrangements in space. The array of balloons became almost like a kind of visualized three-dimensional sequencer.

Deep Web is on a grander scale, but many of the basic elements remain – winches moving objects, lights illuminating objects, spatial arrangements, synchronized sound and light, a free-ranging and percussive musical score with an organic, material approach to samples reduced to their barest elements and then rearranged. The dramaturgy is entirely abstract – a kind of narrative about an array and its volumetric transformations.

In Deep Web, color and sound create the progression of moods. At the live show I saw last weekend, Robert, jazzed on performance endorphins, was glad to chat at length with some gathered fans about his process. The “Deep Web” element is there, as a kind of collage of samples of information age collapsed geography. The sounds are disguised, but there are bits of cell phones, telecommunications ephemera, airport announcements, made into a kind of encoded symphony.

Photo: Ralph Larmann.

Whether you buy into this seems down to whether the artists’ particular take tickles your synesthesia and strikes some emotional resonance. But there is something balletic about this precise fusion of laser lines and globes, able to move freely through the architecture. Kraftwerk will again play host later this month to Atonal Festival, and that meeting of music and architecture is by contrast essentially about the void. One somber vertical projection rises like a banner behind the stage, and the vacated power plant is mostly empty, vibrating air. Deep Web, by contrast, occupies and electrifies that unused volume.

I spoke to Christopher to find out more about how the work has evolved and is executed.

Christopher and Robert at the helm of the show’s live AV controls. Photo: Christopher Bauder.
Lasers and equipment, from the side. Photo: Peter Kirn.

Robert’s music is surprisingly improvisational in the live performance versions of the piece. You could feel that the night I was there – even as Robert’s style is as always reserved, there’s a sense of flowing expression.

To create these delicate arrangements of lit globes and laser lines, Christopher and his team at WHITEvoid plan extensively in Rhino and Vectorworks – the architectural scoring that comes before the performance. The visual side is controlled with WHITEvoid’s own kinetic control software, KLC, which is based on the industry-leading visual development / dataflow environment TouchDesigner.

Robert’s rig is Ableton Live, controlled by fader and knob boxes. There is a master timeline in Live – that’s the timeline bit to which Robert refers, and it is different from his usual performance paradigm as I understand it. That timeline in turn has “loads of automation parameters” that connect from Live’s music arrangement to TouchDesigner’s visual control. But Robert can also change and manipulate these elements as he plays, with the visuals responding in time.

The machines that make the magic happen. Photo: Christopher Bauder.
Photo: Christopher Bauder.

Different visual scenes load as presets. Each preset then has different controllable parameters – most have ten available for realtime operation, Christopher tells CDM.

“[Visual parameters] can be speeds, colors, selection of lasers, individual parameters like seed number, range, position, etc.,” Christopher says. “In one scene, we are linking acceleration of a continuously running directional laser pattern to a re-trigger of a beat. So options are virtually endless. It’s almost never just on/off for anything – very dynamic.”

This question of light and space as instrument, I think, merits deeper and broader exploration. WHITEvoid are one of a handful of firms and artists developing that medium, both artistically and technically, in a fairly tight-knit community of people around the world. Stay tuned; I hope to pay them another visit and talk to some of the other artists working in this direction.

You can check their work (and their tech) at their site:

https://www.whitevoid.com/

Christopher also provided some unique behind-the-scenes shots for us here, along with some images that reveal some of the attention to pattern and form.

The Red Balloon. Photo: Ralph Larmann.
Additional photos: Ralph Larmann, Christopher Bauder, Peter Kirn.



Climate crisis, shown directly on power plant, in guerrilla projection

In the Czech Republic, one artistic intervention made the invisible visible, by laser “tagging” a coal-fired power plant with the damage it does to our planet’s fragile climate.

Live visuals are in many ways the perfect protest – visible, large scale, and able to intervene from a distance without harm. That opens radical and political possibilities for their message, even as media art tools are often the domain of corporate gigs.

The scene here takes us to the industrial central Czech Republic, and the Chvaletice power plant, where North Bohemia’s brown coal is burned for power production. Coal, of course, is dirty stuff. That makes this power plant a major carbon emitter and climate change contributor, as well as a devastating threat to health, belching mercury and other toxins into the air. And while Europe may seem a haven for environmental policy, the Czech Republic is set to fail its Paris Climate Agreement obligations if it can’t kick the coal habit.

There’s reason to single out this plant. The Chvaletice plant was given an exemption that lets it continue to operate even with a coming 2021 cap on carbon dioxide and mercury. Those caps in turn are necessary to incentivize alternative energy sources for meeting Czech electricity consumption. So this isn’t just a random target – it’s on the front lines of breathable air and climate change policy in a material way.

Media artist Gabriela Prochazka and Lunchmeat Studio (who also produce Prague’s Lunchmeat Festival) made statements by running lasers over the cooling towers and their exhaust. That included messages like “STOP COAL”, “#NOFILTER”, “NOT COOL”, and, in a reference to rising planetary air temperatures, “+2°C.” (If those cooling towers remind you of nuclear plants, not coal, well, that’s because both methods basically run on steam – but I digress; you can go to Wikipedia for that.)

http://gabrielaprochazka.com/
http://www.lunchmeat.cz/

Sponsors:
Limity Jsme My (We Are The Limits)
Greenpeace CZ

Photos:
Petr Zewkak Vrabec, Martin Janousek

Something like a power plant can easily fade into the background of the world around us. This seems an effective way to use our tools to transform that perception.
