In Session Audio releases Riff Generation: Outside In Edition for Kontakt

In Session Audio Riff Generation Outside In Edition

In Session Audio has announced the release of the Riff Generation: Outside In Edition, a Kontakt Player library that creates song parts by combining acoustic, electric and synthetic sounds with effects. Riff Generation: Outside In Edition features all new sampled material. Based around a set of musical parameters that you control, Riff Generation: Outside In […]

The post In Session Audio releases Riff Generation: Outside In Edition for Kontakt appeared first on rekkerd.org.

How to patch 3D visuals in browser from Ableton Live, more with cables.gl

Now, even your browser can produce elaborate, production-grade eye candy using just some Ableton Live MIDI clock. The question of how to generate visuals to go with music starts to get more and more interesting answers.

And really, why not? In that moment of inspiration, how many of us see elaborate, fantastic imagery as we listen to (or dream about) music? It’s just that past generative solutions were based on limited rules, producing overly predictable results. (That’s the infamous “screensaver” complaint.) But quietly, even non-gaming machines have been adding powerful 3D capabilities – and browsers now expose that hardware acceleration through a uniform interface.

cables.gl remains in invite-only beta, though if you go request an invite (assuming this article doesn’t overwhelm the queue), you can find your way in. And for now, it’s also totally free, making this a great way to play around. (Get famous, get paid, buy licenses for this stuff – done.)

MIDI clock can run straight into the browser, so you can sync visuals easily with Ableton Live. (Ableton Link is overkill for that application, given that visuals run at framerate.) That will work with other software, hardware, modular, whatever you have, too.
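Under the hood, “MIDI clock in the browser” means the Web MIDI API, which cables.gl wraps in its own ops. Purely as a hedged illustration of the plumbing – not anything from cables.gl’s codebase – here’s a minimal TypeScript sketch that listens for MIDI clock ticks and estimates the incoming tempo, which you could then use to drive visuals:

```typescript
// Minimal sketch: estimate tempo from incoming MIDI clock in the browser.
// Assumes a browser with Web MIDI support; cables.gl wraps this API in its
// own ops, so this is only to show what the raw plumbing looks like.
const CLOCKS_PER_QUARTER = 24; // the MIDI spec sends 24 Timing Clock messages per quarter note

let lastTick = 0;
let intervals: number[] = [];

navigator.requestMIDIAccess().then((access) => {
  access.inputs.forEach((input) => {
    input.onmidimessage = (event) => {
      const data = event.data;
      if (!data) return;
      const status = data[0];
      if (status === 0xfa) { lastTick = 0; intervals = []; } // Start message: reset
      if (status !== 0xf8) return;                           // only handle Timing Clock
      const now = performance.now();
      if (lastTick > 0) {
        intervals.push(now - lastTick);
        if (intervals.length > CLOCKS_PER_QUARTER) intervals.shift();
        const avgMs = intervals.reduce((a, b) => a + b, 0) / intervals.length;
        const bpm = 60000 / (avgMs * CLOCKS_PER_QUARTER);
        console.log(`~${bpm.toFixed(1)} BPM`); // drive shader uniforms, particle bursts, etc.
      }
      lastTick = now;
    };
  });
});
```

Visuals only need to react at frame rate, which is why plain MIDI clock – rather than something like Ableton Link – is usually enough here.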

For a MIDI/DJ example, here’s a tutorial for TRAKTOR PRO. Obviously this can be adapted to other tools, as well. (Maybe some day Pioneer will even decide to put MIDI clock on the CDJ. One can dream.)

They’ve been doing some beautiful work in tutorials, too, including WeaveArray and ColorArray, since I last checked in.

Check out the full project and request an invite:
https://cables.gl/

By the way, note those cool visuals at the top. That’s not video – that’s cables.gl actually running in your browser right now.

Previously, our introduction:

The post How to patch 3D visuals in browser from Ableton Live, more with cables.gl appeared first on CDM Create Digital Music.

An injury left Olafur Arnalds unable to play, so he turned to machines

Following nerve damage, the Icelandic composer/producer/musician was unable to play the piano. With his ‘Ghost Pianos’, he gets that ability back, through intelligent custom software and mechanical pianos.

It’s moving to hear him tell the story (to the CNN viral video series) – with, naturally, the obligatory shots of Icelandic mountains and close-up images of mechanical pianos working. No complaints:

This frames accessibility in terms any of us can understand. Our bodies are fragile, and indeed piano history is replete with musicians who lost the full use of their hands and had to adapt. Here, the injury cost him dexterity in his left hand, so he needed a way to connect one hand to more parts.

And in the end, as so often is the case with accessibility stories and music technology, he created something that was more than what he had before.

With all the focus on machine learning, a lot of generative algorithmic music continues to work more traditionally. That appears to be the case here – the software analyzes incoming streams and follows rules and music theory to accompany the work. (As I learn more about machine learning, though, I suspect the combination of these newer techniques with the older ones may slowly yield even sharper algorithms – and challenge us to hone our own compositional focus and thinking.)

I’ll try to reach out to the developers, but meanwhile it’s fun squinting at screenshots, as they tell you a lot. There’s a polyphonic step sequencer / pattern sequencer of sorts in there, with some variable chance. You can also see in the screenshots that the pattern lengths are set to be irregular, so you get these lovely polymetric echoes of what Olafur is playing.
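To be clear, none of what follows is Arnalds’ actual software – it’s just a toy sketch of the mechanics the screenshots suggest: a few looping tracks of deliberately unequal length, each step firing with some probability, so the ‘ghost’ parts drift in and out of phase with what the pianist plays. All note values and chances here are invented for illustration.

```typescript
// Hypothetical sketch of the idea described above (not Arnalds' software):
// several looping patterns of unequal length, each step with its own chance,
// producing polymetric "echoes" of a handful of input notes.
interface Step { note: number; chance: number } // chance in the range 0..1

// Two ghost tracks with deliberately irregular lengths (7 and 5 steps),
// built from notes the pianist might just have played (hard-coded here).
const tracks: Step[][] = [
  [60, 64, 67, 71, 72, 67, 64].map((note) => ({ note, chance: 0.6 })),
  [48, 55, 52, 59, 55].map((note) => ({ note, chance: 0.4 })),
];

// One global clock step: each track loops at its own length, so the
// combined output never quite repeats the same way twice.
function tick(stepIndex: number): number[] {
  const notes: number[] = [];
  for (const track of tracks) {
    const step = track[stepIndex % track.length];
    if (Math.random() < step.chance) notes.push(step.note);
  }
  return notes; // in the real system, these would go to the player pianos
}

for (let i = 0; i < 16; i++) console.log(i, tick(i));
```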

Of course, what makes this most interesting is that Olafur responds to that machine – human echoes of the ‘ghost.’ I’m struck by how even a simple input can do this for you – like even a basic delay and feedback. We humans are extraordinarily sensitive to context and feedback.

The music itself is quite simple – familiar minimalist elements. If that isn’t your thing, you should definitely keep watching so you get to his trash punk stage. But it won’t surprise you at all that this is a guy who plays Clapping Music backstage – there’s some serious Reich influence.

You can hear the ‘ghost’ elements in the recent release ‘ekki hugsa’, which comes with some lovely, joyful dancing in the music video:

The album re:member debuted the software:

There is a history here of adapting composition to injury. (That’s not even including Robert Schumann, who evidently destroyed his own hands in an attempt to increase dexterity.)

Paul Wittgenstein had his entire right arm amputated following a World War I injury, and commissioned a number of works for just the left hand. (There’s a surprisingly extensive article on Wikipedia, which definitely retrieves more than I had lying around inside my brain.) Ravel’s Piano Concerto for the Left Hand is probably the best-known result, and there’s even a 1937 recording by Wittgenstein himself. It’s an ominous, brooding performance, made as Europe was plunging itself into violence a second time. But it’s notable in that the single hand makes it even more virtuosic – it’s a new kind of piano idiom, made for this unique scenario.

I love Arnalds’ work, but listening to the Ravel – a composer known as whimsical, even crowd-pleasing – I do lament a bit of what’s been lost in the push for cheery, comfortable concert music. It seems to me that some of that darkness and edge could come back to the music, and the circumstances of the composition of that piece ought to remind us how necessary those emotions are to our society.

I don’t say that to diss Mr. Arnalds. On the contrary, I would love to hear some of his punk side return. And his quite beautiful music aside, I also hope that these ideas about harnessing machines in concert music may also find new, punk, even discomforting conceptions among some readers here.

Here’s a more intimate performance, including a day without Internet:

And lastly, more detail on the software:

Meanwhile, whatever kind of music you make, you should endeavor to have a promo site that is complete, like this – also, sheet music!

olafurarnalds.com

Previously:

The KellyCaster reveals what accessibility means for instruments

The post An injury left Olafur Arnalds unable to play, so he turned to machines appeared first on CDM Create Digital Music.

In Adversarial Feelings, Lorem explores AI’s emotional undercurrents

In glitching collisions of faces and percussive bolts of lightning, Lorem has ripped open machine learning’s generative powers in a new audiovisual work. Here’s the artist on what he’s doing, as he’s about to join a new inquisitive club series in Berlin.

Machine learning that derives gestures from System Exclusive MIDI data … surprising spectacles of unnatural adversarial neural nets … Lorem’s latest AV work has it all.

And by pairing producer Francesco D’Abbraccio with a team of creators across media, it brings together a serious think tank of artist-engineers pushing machine learning and neural nets to new places. The project, as he describes it:

Lorem is a music-driven multidisciplinary project working with neural networks and AI systems to produce sounds, visuals and texts. In the last three years I have had the opportunity to collaborate with AI artists (Mario Klingemann, Yuma Kishi), AI researchers (Damien Henry, Nicola Cattabiani), video artists (Karol Sudolski, Mirek Hardiker) and musical instrument designers (Luca Pagan, Paolo Ferrari) to produce original materials.

Adversarial Feelings is the first release by Lorem: a 22-minute AV piece, nine music tracks, and a book. The record will be released on April 19 on Krisis via Cargo Music.

And what about achieving intimacy with nets? He explains:

Neural networks are nowadays widely used to detect, classify and reconstruct emotions, mainly in order to map users’ behaviours and to affect them in effective ways. But what happens when we use machine learning to perform human feelings? And what if we use it to produce autonomous behaviours, rather than to affect consumers? Adversarial Feelings is an attempt to inform non-human intelligence with “emotional data sets”, in order to build an “algorithmic intimacy” through those intelligent devices. The goal is to observe the subjective/affective dimension of intimacy from the outside, to speak about human emotions as perceived by non-human eyes. Transposing them into a new shape helps Lorem to embrace a new perspective, and to recognise fractured experiences.

I spoke with Francesco as he made the plane trip toward Berlin. Friday night, he joins a new series called KEYS, which injects new inquiry into the club space – AV performance, talks, all mixed up with nightlife. It’s the sort of thing you get in festivals, but in festivals all those ideas have been packaged and finished. KEYS, at a new post-industrial space called Trauma Bar near Hauptbahnhof, is a laboratory. And, of course, I like laboratories. So I was pleased to hear what mad science was generating all of this – the team of humans and machines alike.

So I understand the ‘AI’ theme – am I correct in understanding that the focus to derive this emotional meaning was on text? Did it figure into the work in any other ways, too?

Neural networks and AI were involved in almost every step of the project. On the musical side, they were used mainly to generate MIDI patterns, to deal with SysEx from a digital sampler, and to manage recursive re-sampling and intelligent timestretch. Rather than generating the final audio, the goal here was to simulate the musician’s behaviors and creative processes.

On the video side, [neural networks] (especially GANs [generative adversarial networks]) were employed both to generate images and to explore the latent spaces through custom-tailored algorithms, in order to let the system edit the video autonomously, according to the audio source.

What data were you training on for the musical patterns?

MIDI – basically I trained the NN on patterns I create.

And wait, SysEx, what? What were you doing with that?

Basically I record every change of state of a sampler (i.e. the automations on a knob), and I ask the machine to “play” the same patch of the sampler according to what it learned from my behavior.
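That’s a sparse description, but the simplest version of the idea – record a stream of knob positions, then generate new moves that statistically resemble them – is easy to sketch. The following is purely illustrative and has nothing to do with Lorem’s actual code: a first-order Markov chain stands in for the neural network he describes.

```typescript
// Purely illustrative: "learn" a knob gesture from a recorded automation
// stream, then generate new moves in a similar style. A first-order Markov
// chain stands in for the neural network described in the interview.
const recorded: number[] = [0, 12, 25, 40, 60, 75, 90, 75, 60, 40, 25, 12, 0, 12, 25];

// Build transition statistics: which moves tend to follow each value?
const transitions = new Map<number, number[]>();
for (let i = 1; i < recorded.length; i++) {
  const prev = recorded[i - 1];
  const delta = recorded[i] - prev;
  transitions.set(prev, [...(transitions.get(prev) ?? []), delta]);
}

// Generate a new gesture that moves the way the recorded one tended to move.
function generate(start: number, length: number): number[] {
  const out = [start];
  for (let i = 1; i < length; i++) {
    const prev = out[i - 1];
    const deltas = transitions.get(prev) ?? [0]; // unseen values simply hold
    const delta = deltas[Math.floor(Math.random() * deltas.length)];
    out.push(Math.max(0, Math.min(127, prev + delta)));
  }
  return out; // send these as parameter changes (CC or SysEx) to the sampler
}

console.log(generate(recorded[0], 32));
```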

What led you to getting involved in this area? And was there some education involved just given the technical complexity of machine learning, for instance?

I always tried to express my work through multidisciplinary projects. I am very fascinated by the way AI approaches data, allowing us to work across different media with the same perspective. Intelligent devices are really a great tool for melting languages together. On the other hand, the emergence of AI discloses political questions we have been trying to face for some years at Krisis Publishing.
I started working on the Lorem project three years ago, and I was really a newbie on the technical side. I am not a hyper-skilled programmer, and building a collaborative platform has been really important to Lorem’s development. I had the chance to collaborate with AI artists (Klingemann, Kishi), researchers (Henry, Cattabiani, Ferrari), digital artists (Sudolski, Hardiker)…

How did the collaborations work – Mario I’ve known for a while; how did you work with such a diverse team; who did what? What kind of feedback did you get from them?

To be honest, I was very surprised by how open and responsive the AI community is! Some of the people involved are really huge points of reference for me (like Mario, for instance), and I didn’t expect to really get them on Adversarial Feelings. Some of the people involved prepared original content for the release (Mario, for instance, realised a video on “The Sky would Clear What the …”, Yuma Kishi realized the girl/flower on “Sonnet#002”, and Damien Henry did the train hallucination on the “Shonx – Canton” remix). With other people, the collaboration was more about producing something together, such as a video, a piece of code or a way to explore latent spaces.

What was the role of instrument builders – what are we hearing in the sound, then?

Some of the artists and researchers involved realized videos from the audio tracks (Mario Klingemann, Yuma Kishi). Damien Henry gave me the right to use a video he made with his Next Frame Prediction model. Karol Sudolski and Nicola Cattabiani worked with me on developing, respectively, “Are Eyes invisible Socket Contenders” + “Natural Readers” and “3402 Selves”. Karol Sudolski also realized the video part on “Trying to Speak”. Nicola Cattabiani developed the ELERP algorithm with me (to let the network edit videos according to the music) and GRUMIDI (the network working with my MIDI files). Mirek Hardiker built the data set for the third chapter of the book.

I wonder what it means for you to make this an immersive performance. What’s the experience you want for that audience; how does that fit into your theme?

I would say Adversarial Feelings is an AV show totally based on emotions. I always try to prepare the most intense, emotional and direct experience I can.

You talk about the emotional content here and its role in the machine learning. How are you relating emotionally to that content; what’s your feeling as you’re performing this? And did the algorithmic material produce a different emotional investment or connection for you?

It’s a bit like when I was a kid listening to my own recorded voice… it was always strange: I wasn’t fully able to recognize my voice as it sounded from the outside. I think neural networks can be an interesting tool to observe our own subjectivity from external, non-human eyes.

The AI hook is of course really visible at the moment. How do you relate to other artists who have done high-profile material in this area recently (Herndon/Dryhurst, Actress, etc.)? And do you feel there’s a growing scene here – is this a medium that has a chance to flourish, or will the electronic arts world just move on to the next buzzword in a year before people get the chance to flesh out more ideas?

I’ve messaged Holly Herndon a couple of times online… I’ve been really into her work since her early releases, and when I heard she was working with AI systems I was trying to finish the Adversarial Feelings videos… so I was very curious to discover how she would deal with intelligent systems! She’s a really talented artist, and I love the way she’s able to embed conceptual/political frameworks inside her music. Proto is a really complex, inspiring device.

More generally, I think the advent of a new technology always discloses new possibilities in artistic practices. I directly experienced the impact of the internet (and of digital culture) on art, design and music when I was a kid. I’m thrilled that at this point the new configurations are not yet codified into established languages, and I feel that working on AI today gives me the chance to be part of a public debate about how to set new standards for the discipline.

What can we expect to see / hear today in Berlin? Is it meaningful to get to do this in this context in KEYS / Trauma Bar?

I am curious too, to be honest. I am very excited to take part in such a situation, alongside artists and researchers I really respect and enjoy. I think the guys at KEYS are trying to do something beautiful and challenging.

Live in Berlin, 7 June

Lorem will join Lexachast (an ongoing collaborative work by Amnesia Scanner, Bill Kouligas and Harm van den Dorpel), N1L (an A/V artist, producer/dj based between Riga, Berlin, and Cairo), and a series of other tantalizing performances and lectures at Trauma Bar.

KEYS: Artificial Intelligence | Lexachast • Lorem • N1L & more [Facebook event]

Lorem project lives here:

http://www.studio-frames.com

The post In Adversarial Feelings, Lorem explores AI’s emotional undercurrents appeared first on CDM Create Digital Music.

Save over 90% off Glitchmachines Cataract segment multiplexer plugin!

Glitchmachines Cataract sale

Plugin Boutique has launched an exclusive sale on the Cataract segment multiplexer for electronic music production and experimental sound design by Glitchmachines. Cataract features an arsenal of sample scanners with integrated modulation sequencers, generative parameters and various morphing functions. This makes it possible to construct architecturally complex patterns ranging from nuanced percussive articulations to intricate […]

The post Save over 90% off Glitchmachines Cataract segment multiplexer plugin! appeared first on rekkerd.org.

Automated techno: Eternal Flow generates dance music for you

Techno, without all those pesky human producers? Petr Serkin’s Eternal Flow is a generative radio station – and even a portable device – able to make endless techno and deep house variations automatically.

You can run a simple version of Eternal Flow right in your browser:

https://eternal-flow.ru/

Recorded sessions are available on a SoundCloud account, as well:

But maybe the most interesting way to run this is in a self-contained portable device. It’s like a never-ending iPod of … well, kind of generic-sounding techno and deep house, depending on mode. Here’s a look at how it works; there’s no voiceover, but you can turn on subtitles for additional explanation:

There are real-world applications here: apart from interesting live performance scenarios, think workout dance music that follows you as you run, for example.

I talked to Moscow-based artist Petr about how this works. (And yeah, he has his own deep house-tinged record label, too.)

“I used to make deep [house] and techno for a long period of time,” he tells us, “so I have some production patterns.” Basically, take those existing patterns, add some randomization, and instead of linear playback, you get material generated over a longer duration with additional variation.

There was more work involved, too. While the first version used one-shot snippets, “later I coded my own synth engine,” Petr tells us. That means the synthesized sounds save on sample space in the mobile version.

It’s important to note this isn’t machine learning – it’s good, old-fashioned generative music. And in fact this is something you could apply to your own work: instead of just keeping loads and loads of fixed patterns for a live set, you can use randomization and other rules to create more variation on the fly, freeing you up to play other parts live or make your recorded music less repetitive.
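To make that concrete, here’s a toy version of the idea – a stored pattern plus a couple of simple variation rules. The pattern, probabilities, and function names are all invented for illustration; Petr’s engine will obviously differ.

```typescript
// Illustrative only: vary a stored drum pattern with simple rules,
// rather than playing it back verbatim every bar.
type Pattern = number[]; // 16 steps, velocity 0..127 (0 = rest)

const baseHats: Pattern = [0, 90, 0, 90, 0, 90, 0, 90, 0, 90, 0, 90, 0, 90, 0, 90];

function varyPattern(base: Pattern, density = 0.15, humanize = 10): Pattern {
  return base.map((vel) => {
    if (Math.random() < density) {
      // occasionally drop a hit, or add a quiet ghost note on a rest
      return vel > 0 ? 0 : 40 + Math.floor(Math.random() * 30);
    }
    if (vel > 0) {
      // nudge velocities slightly so no two bars repeat exactly
      const v = vel + Math.floor((Math.random() - 0.5) * 2 * humanize);
      return Math.max(1, Math.min(127, v));
    }
    return vel;
  });
}

// Each bar gets a fresh variation of the same underlying idea.
for (let bar = 0; bar < 4; bar++) console.log(varyPattern(baseHats));
```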

And this also points to a simple fact: machine learning doesn’t always generate the best results. We’ve had generative music algorithms for many years that simply produce results based on simple rules. Laurie Spiegel’s ground-breaking Music Mouse, considered by many to be the first-ever software synth, worked on this concept. So, too, did Brian Eno and Peter Chilvers’ Bloom, which applied the same notion to ambient generative software and became the first major generative, never-ending iPhone music app.

By contrast, the death metal radio station I talked about last week works well partly because its results sound so raunchy and chaotic. But it doesn’t necessarily suit dance music as well. Just because neural network-based machine learning algorithms are in vogue right now doesn’t mean they will generate convincing musical results.

I suspect that generative music will freely mix these approaches, particularly as developers become more familiar with them.

But from the perspective of a human composer, this is an interesting exercise – not necessarily because it puts you out of a job, but because it helps you experiment with thinking about the structures and rules of your own musical ideas.

And, hey, if you’re tired of having to stay in the studio or DJ booth and not get to dance, this could solve that, too.

More:

http://eternal-flow.ru/

Now ‘AI’ takes on writing death metal, country music hits, more

Thanks to new media artist and researcher Helena Nikonole for the tip!

The post Automated techno: Eternal Flow generates dance music for you appeared first on CDM Create Digital Music.

Re-Compose releases updated versions of I2C8 and Spexx

Re Compose I2C8

Re-Compose has announced updates to its first VST/AU plugins, released in December 2018. Since the initial release, a number of new versions have been released that contain a lot of improvements under the skin. I2C8 features two new functionalities around step sequencing and chord velocity. In version 1.0.4, now available for download, we’re introducing the […]

The post Re-Compose releases updated versions of I2C8 and Spexx appeared first on rekkerd.org.

Immerse yourself in the full live AV concert by raster’s Belief Defect

Computer and modular machine textures collide with explosions of projected particles and glitching colored textures. Now the full concert footage of the duo Belief Defect (on Raster) is out.

It’s tough to get quality full-length live performance video – when I previously wrote about this performance, I had to refer to a short excerpt, and a lot of the time you can only say “you had to be there” and point to distorted cell phone snippets. So it’s nice to be able to watch a performance end-to-end from the comfort of your chair.

Transport yourself to Kraftwerk, the dirigible-scaled, hollowed-out power plant that hosts Atonal Festival (even the mighty Tresor club is just its basement). It’s a set that’s full of angry, anxious, crunchy-distorted goodness:

(Actually, even having listened to the album a lot, it’s nice to sit and retrace the full live set and see how they composed/improvised it. I would say record your live sets, fellow artists, except I know how the usual Recording Curse works – the day when the Zoom’s batteries are charged, the sound isn’t distorted, and you remember to hit record is so often… the day you play your worst. They escaped this somehow.)

And Belief Defect represent some of the frontier of what’s possible in epic, festival mainstage-sized experimentalism, both analog and digital, sonic and visual. I got to write extensively about their process, with some support from Native Instruments, and more in-depth here:

BELIEF DEFECT ON THEIR MASCHINE AND REAKTOR MODULAR RIG [Native Instruments blog]

— with more details on how you might apply this to your own work:

What you can learn from Belief Defect’s modular-PC live rig

While we’re talking about the Raster label – formerly Raster-Noton, before it divided again so Olaf Bender’s Raster and Carsten Nicolai’s Noton could each focus on their own direction – here’s some more. Dasha Rush joined Electronic Beats for a rare portrait of her process and approach, including the live audiovisual-dance collaboration with dancer/choreographer Valentin Tszin and, on visuals, Stanislav Glazov. (Glazov is a talented musician as well, producing and playing as Procedural aka Prcdrl, as well as a total TouchDesigner whiz.)

And Dasha’s work, elegantly balanced between club and experimental contexts with every mix between, is always inspired.

Here’s that profile, though I hope to check in more shortly with how Stas and Valentin work with Kinect and dance, as well as how Stas integrates visuals with his modular sound:

The post Immerse yourself in the full live AV concert by raster’s Belief Defect appeared first on CDM Create Digital Music.

Devious Machines Texture multi-fx plugin on sale for $59 USD

Devious Machines Texture 38 OFF sale

Plugin Boutique has launched a sale on the Texture effect plugin by Devious Machines, offering a 38% discount for a limited time as part of its 12 Days of Christmas promotion. Texture comes with over 340 sampled, granular and generative sound sources to enhance, shape and transform your sounds. Drawing from a library of over […]

The post Devious Machines Texture multi-fx plugin on sale for $59 USD appeared first on rekkerd.org.

More surprise in your sequences, with ESQ for Ableton Live

With interfaces that look lifted from a Romulan warbird and esoteric instruments, effects, and sequencers, K-Devices have been spawning surprising outcomes in Ableton Live for some time now. ESQ is the culmination of that: a cure for preset sounds and ideas in a single device.

You likely know the problem already: all of the tools in software like Ableton Live that make it easy to quickly generate sounds and patterns also tend to do so in a way that’s … always the same. So instead of being inspiring, you can quickly feel stuck in a rut.

ESQ is a probability-based sequencer: you adjust a few controls – velocity, chance, and relative delay for each step – to generate a wide variety of possibilities. You can create polyrhythms (multiple tracks of the same length, but with different steps) or different-length tracks, you can copy and paste, and there are various random functions to keep things fresh. The results are still somehow yours – maybe even more so – it’s just that you use probability and generative rules to get to what you want when you aren’t sure how to describe it. Or maybe before you knew you wanted it.
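K-Devices hasn’t published ESQ’s internals, so treat this as a rough sketch of the general model that description implies – every step carrying its own velocity, chance, and relative delay – with all values invented for illustration:

```typescript
// Sketch of the general model described above (not K-Devices' code):
// each step has its own velocity, probability, and relative delay.
interface EsqStep { velocity: number; chance: number; delayMs: number }

const stepMs = 125; // 16th notes at 120 BPM

const track: EsqStep[] = Array.from({ length: 16 }, (_, i) => ({
  velocity: i % 4 === 0 ? 110 : 70,
  chance: i % 4 === 0 ? 1.0 : 0.5, // downbeats always fire, offbeats only sometimes
  delayMs: (i % 2) * 20,           // push every other step late for a loose, swung feel
}));

// One pass through the track: decide which steps fire, and when.
const events = track.flatMap((step, i) =>
  Math.random() < step.chance
    ? [{ timeMs: i * stepMs + step.delayMs, velocity: step.velocity }]
    : []
);
console.log(events); // in Live, these would be scheduled as MIDI notes into a Drum Rack
```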

Because you can trigger up to 12 notes, you can use ESQ to turn bland presets into something unexpected (like working with preset Live patches). Or you can use it as a sequencer with all those fun modular toys we’ve been talking about lately (VCV Rack, Softube Modular, Cherry Audio Voltage Modular, and so on) – because 5- and 8-step sequencers are often just dull.

There’s no sound produced by ESQ – it’s just a sequencer – but it can have a big enough impact on devices that this “audio” demo is just one instance of ESQ and one Drum Rack. Even those vanilla kits start to get more interesting.

K-Devices has been working this way for a while, but ESQ feels like a breakthrough. The generative sequence tools are uniquely complete and especially powerful for producing rhythms. You can make this sound crazy and random and IDM-y, but you can also add complexity without heading into deep space – it’s really up to you.

And they’ve cleverly made two screens – a full parameter screen that gets deep and detailed, and a compact device screen that lets you shift everything with single gestures or adjust everything as macros – ideal for live performance or for making bigger changes.

It seems like a good wildcard to keep at your disposal … for any of those moments when you’re getting stuck and boring.

And yes, of course Richard Devine already has it:

But you can certainly make things unlike Devine, too, if you want.

Right now ESQ is on sale, 40% off through December 31 – €29 instead of €49. So it can be your last buy of 2018.

Have fun, send sequences!

https://k-devices.com/products/esq/

The post More surprise in your sequences, with ESQ for Ableton Live appeared first on CDM Create Digital Music.