With esoteric instruments, effects, and sequencers whose interfaces look lifted from a Romulan warbird, K-Devices has been spawning surprising outcomes in Ableton Live for some time now. ESQ is the culmination of that: a cure for preset sounds and ideas in a single device.
You likely know the problem already: all of the tools in software like Ableton Live that make it easy to quickly generate sounds and patterns also tend to do so in a way that’s … always the same. So instead of being inspiring, you can quickly feel stuck in a rut.
ESQ is a probability-based sequencer: you adjust a few parameters – velocity, chance, and relative delay for each step – to generate a wide variety of possibilities. You can create polyrhythms (multiple tracks of the same length, but with different numbers of steps) or tracks of different lengths, you can copy and paste, and there are various randomization functions to keep things fresh. The results are still somehow yours – maybe even more so – it’s just that you use probability and generative rules to get to what you want when you aren’t sure how to describe it. Or maybe before you knew you wanted it.
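To make the model concrete, here’s a minimal sketch of a probability-based step sequencer in the spirit described above – per-step velocity, chance, and relative delay, with tracks of different lengths wrapping independently. This is an illustration of the concept only, not K-Devices’ implementation; all names here are invented:

```python
import random

# Minimal sketch of a probability-based step sequencer (concept only – not
# K-Devices' implementation). Each step has a velocity, a chance of firing,
# and a relative delay; tracks wrap on their own lengths, so different-length
# tracks drift against each other.

def make_track(steps, velocity=100, chance=0.5, delay=0.0):
    """Build one track as a list of per-step parameter dicts."""
    return [{"velocity": velocity, "chance": chance, "delay": delay}
            for _ in range(steps)]

def run_step(tracks, step_index, rng=random):
    """Events fired at one clock tick: (track, velocity, delay) tuples."""
    events = []
    for t, track in enumerate(tracks):
        step = track[step_index % len(track)]   # each track wraps independently
        if rng.random() < step["chance"]:       # probabilistic trigger
            events.append((t, step["velocity"], step["delay"]))
    return events

rng = random.Random(42)  # fixed seed: the "random" pattern is reproducible
tracks = [make_track(16, chance=0.9),              # busy 16-step track
          make_track(12, chance=0.3, delay=0.01)]  # sparse 12-step track
pattern = [run_step(tracks, i, rng) for i in range(16)]
```

Re-seeding is what keeps things fresh: same settings, new seed, a new but related pattern.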
Because you can trigger up to 12 notes, you can use ESQ to turn bland presets – even stock Live patches – into something unexpected. Or you can use it as a sequencer for all those fun modular toys we’ve been talking about lately (VCV Rack, Softube Modular, Cherry Audio Voltage Modular, and so on) – because 5- and 8-step sequencers are often just dull.
There’s no sound produced by ESQ – it’s just a sequencer – but it can have a big enough impact on devices that this “audio” demo is just one instance of ESQ and one Drum Rack. Even those vanilla kits start to get more interesting.
K-Devices has been working this way for a while, but ESQ feels like a breakthrough. The generative sequence tools are uniquely complete and especially powerful for producing rhythms. You can make this sound crazy and random and IDM-y, but you can also add complexity without heading into deep space – it’s really up to you.
And they’ve cleverly made two screens – a full parameter screen that gets deep and detailed, and a compact device screen that lets you shift everything with single gestures or adjust everything as macros – ideal for live performance or for making bigger changes.
It seems like a good wildcard to keep at your disposal … for any of those moments when you’re getting stuck and boring.
Machine learning is presented variously as nightmare and panacea, gold rush and dystopia. But a group of artists hacking away at CTM Festival earlier this year did something else with it: they humanized it.
The MusicMakers Hacklab continues our collaboration with CTM Festival, and this winter I co-facilitated the week-long program in Berlin with media artist and researcher Ioann Maria (born in Poland, now in the UK). Ioann has long brought critical speculative imagination to her work (meaning, she gets weird and scary when she has to), as well as being able to wrangle large groups of artists and the chaos the creative process produces. Artists are a mess – as they need to be, sometimes – and Ioann can keep them comfortable with that and moving forward. No one could have been more ideal, in other words.
And our group delved boldly into the possibilities of machine learning. Most compellingly, I thought, these ritualistic performances captured a moment of transformation for our own sense of being human, as if folding this technological moment in against itself to reach some new witchcraft, to synthesize a new tribe. If we were suddenly transported to a cave with flickering electronic light, my feeling was that this didn’t necessarily represent a retreat from tech. It was a way of connecting some long human spirituality to the shock of the new.
This wasn’t just about speculating about what AI would do to people, though. Machine learning applications were turned into interfaces, making gestures and machines interact more clearly. The free, artist-friendly Wekinator was a popular choice. That stands in contrast to corporate-funded AI and how that’s marketed – which is largely as a weird, consumer convenience. (Get me food reservations tonight without me actually talking to anyone, and then tell me what music to listen to and who to date.)
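For the curious, Wekinator’s input side is just OSC: by default it listens on UDP port 6448 for float lists sent to the address /wek/inputs (those are Wekinator’s documented defaults). Here’s a small stdlib-only sketch that hand-encodes such a message – the encoder covers only the address-plus-floats case, not the full OSC spec:

```python
import socket
import struct

# Hand-rolled encoder for the simple case of an OSC message carrying floats.
# Port 6448 and address /wek/inputs are Wekinator's documented input defaults;
# everything else here is an illustrative sketch, not a full OSC library.

def osc_message(address, *floats):
    """Encode an OSC message with float arguments (OSC 1.0 wire format)."""
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)   # null-terminate, pad to 4 bytes
    type_tags = "," + "f" * len(floats)
    msg = pad(address.encode()) + pad(type_tags.encode())
    for f in floats:
        msg += struct.pack(">f", f)             # big-endian 32-bit float
    return msg

# Two example gesture values (e.g. normalized sensor readings):
msg = osc_message("/wek/inputs", 0.25, 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 6448))  # UDP fire-and-forget; harmless if Wekinator isn't running
```

Map any sensor or gesture onto those floats and Wekinator learns the mapping to your instrument’s parameters.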
Here, instead, artists took machine learning algorithms and made them another raw material for creating instruments. This was AI getting machines to better enable performance traditions. And that’s partly our hope in who we bring to these performance hacklabs: we want people with experience in code and electronics, but also performance media, musicology, and culture, in various combinations.
(Also spot some kinetic percussion in the first piece, courtesy dadamachines.)
Check out the short video excerpt or scan through our whole performance documentation. All documentation courtesy CTM Festival – thanks. (Photos: Stefanie Kulisch.)
Big thanks to the folks who give us support. The CTM 2018 MusicMakers Hacklab was presented with Native Instruments and SHAPE, which is co-funded by the Creative Europe program of the European Union.
Full audio (which somehow makes for a nice sort of radio play, thanks to all these beautiful sounds):
2018 participants – all amazing artists, and ones to watch:
Alex Alexopoulos (Wild Anima)
Aziz Ege Gonul
Damian T. Dziwis
Julia del Río
Moisés Horta Valenzuela AKA ℌEXOℜℭℑSMOS
Nontokozo F. Sihwa / Venus Ex Machina
Composer Alessio Santini is back with more tools for Ableton Live, both intended to help you get off the grid and generate elaborate, insane rhythms.
Developer K-Devices, Santini’s music software house, literally calls this series “Out Of Grid,” or OOG for short. They’re a set of Max for Live devices with interfaces that look like the flowcharts inside a nuclear power plant, but the idea is all about making patterns.
AutoTrig: multiple tracks of shifting structures and grooves, based on transformation and probability, primarily for beat makers. Includes Push 2, outboard modular/analog support.
TATAT: input time, note, and parameter structures, output melodic (or other) patterns. Control via MIDI keyboard, and export to clips (so you can dial up settings until you find some clips you like, then populate your session with those).
AutoTrig spits out multiple tracks of rhythms for beat mangling.
And for anyone who complains that rhythms are repetitive, dull, and dumb on computers, these tools do none of that. This is about climbing into the cockpit of an advanced alien spacecraft, mashing some buttons, and then getting warped all over hyperspace, your face melting into another dimension.
Here’s the difference: those patterns are generated by an audio engine, not a note or event engine per se. So techniques you’d use to shape an audio signal – sync, phase distortion – spit out complex and (if you like) unpredictable streams of notes or percussion, translating that fuzzy audio world into the MIDI events you use elsewhere.
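You can get a feel for the audio-engine idea with a toy phase accumulator: a “slave” oscillator fires an event each time its phase wraps, and a hard-syncing “master” oscillator resets it mid-cycle, turning a classic audio-synthesis trick into uneven rhythm. A conceptual sketch only – not K-Devices’ actual engine:

```python
# Toy "audio-rate" event generator (conceptual sketch – not K-Devices' engine).
# A slave oscillator fires an event each time its phase wraps; an optional
# master oscillator hard-syncs it, resetting the cycle mid-flight so the
# resulting event stream turns uneven in a musically interesting way.

def generate_events(duration, freq, sync_freq=None, rate=1024):
    """Return the times (in seconds) at which the slave completes a cycle."""
    events = []
    phase = sync_phase = 0.0
    for i in range(int(duration * rate)):
        phase += freq / rate
        if sync_freq is not None:
            sync_phase += sync_freq / rate
            if sync_phase >= 1.0:        # hard sync: master resets the slave
                sync_phase -= 1.0
                phase = 0.0
        if phase >= 1.0:                 # cycle complete: emit an event
            phase -= 1.0
            events.append(i / rate)
    return events

straight = generate_events(2.0, freq=4.0)               # a plain 4 Hz pulse
synced = generate_events(2.0, freq=4.0, sync_freq=1.5)  # syncopated by hard sync
```

With no sync oscillator the pulse is metronomic; add one at a non-integer ratio and the same parameters yield a lurching, off-grid pattern.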
TATAT is built more for melodic purposes, but the main thing here is, you can spawn patterns using time and note structures. And you can even save the results as clips.
And that’s only if you stay in the box. If you have some analog or modular gear, you can route signals to it directly, making Ableton Live a brain for spawning musical events outside via control voltage connection. (Their free MiMu6 Max for Live device handles this, making use of the multichannel audio support added to Max for Live in Live 10.)
Making sense of this madness are a set of features to produce some order, like snapshots and probability switches on AutoTrig, and sliders that adjust timing and probability on TATAT. TATAT also lets you use a keyboard to set pitch, so you can use this more easily live.
If you were just sent into the wilderness with these crazy machines, you might get a bit lost. But they’ve built a pack for each so you can try out sounds. AutoTrig works with a custom Push 2 template, and TATAT works well with any MIDI controller.
See, the problem with this job is, I keep finding stuff I’d have to quit this job to actually use … but I will find a way to play with Monday’s sequencing haul! I know we all feel the same pain there.
Machine learning and new technologies could unlock new frontiers of human creativity – or they could take humans out of the loop, ushering in a new nightmare of corporate control. Or both.
Machine learning – the field in which computers learn patterns from data, today most visibly via neural networks – unites a range of issues from the technological to the societal. And audio and music are very much at the center of the transformative effects of these technologies. Commonly dubbed (partly inaccurately) “artificial intelligence,” they suggest a relationship between humans and machines, individuals and larger state and corporate structures, far beyond what has existed traditionally. And that change has gone from far-off science fiction to a reality that’s very present in our homes, our lives, and of course the smartphones in our pockets.
I had the chance to co-curate with CTM Festival a day of inputs from a range of thinkers and artist/curators earlier this year. Working with my co-host, artist and researcher Ioann Maria, we packed a day full of ideas and futures both enticing and terrifying. We’ve got that full afternoon, even including audience discussion, online for you to soak in.
Me, with Moritz, pondering the future. Photo: CTM Festival / Isla Kriss.
And there are tons of surprises. There are various terrifying dystopias, with some well-reasoned arguments for why they might actually come to fruition (or evidence demonstrating these scenarios are already in progress). There are more hopeful visions of how to get ethics, and humans, back in the loop. There are surveys of artistic responses.
All of this kicked off our MusicMakers Hacklab at CTM Festival, which set a group of invited artists on collaborative, improvisatory explorations of these same technologies as applied to performance.
These imaginative and speculative possibilities become not just idle thoughts, but entertaining and necessary explorations of what might be soon. This is the Ghost of Christmas Yet-to-Come, if a whole lot more fun to watch, here not just to scare us, but to spur us into action and invention.
Let’s have a look at our four speakers.
Machine learning and neural networks
Moritz Simon Geist: speculative futures
Who he is: Moritz is an artist and researcher; he joined us for my first-ever event for CTM Festival with a giant robotic 808, but he’s just as adept at researching history and the future.
Topics: Futurism, speculation, machine learning and its impact on music, body enhancement and drugs
Takeaways: Moritz gives a strong introduction to style transfer and other machine learning techniques, then jumps into speculating on where these could go in the future.
In this future, remixes and styles and timbres might all become separate from a more fluid creativity – but that might, in turn, dissolve artistic value.
“In the future … music will not be conceived as an art form any more.” – Moritz Simon Geist
Then, Moritz goes somewhere else entirely – dreaming up speculative drugs that could transform humans, rather than only machines. (The historical basis for this line of thought: Alexander Shulgin and his drug notebooks, which might even propose a drug that transforms perception of pitch.)
Moritz imagines an “UNSTYLE” plug-in that can extract vocals – then change genre.
What if self-transformation – or even fame – were in a pill?
Gene Cogan: future dystopias
Who he is: An artist/technologist who works with generative systems and their overlap with creativity and expression. Don’t miss Gene’s expansive open source resource for code and learning, machine learning for artists.
Topics: Instrument creation, machine learning – and eventually AI’s ability to generate its own music
Takeaways: Gene’s talk began with “automation of songwriting, production, and curation” as its topic – but tilted far enough toward dystopia that he changed the title.
“This is probably going to be the most depressing talk.”
In a more hopeful vision, he presented the latest work of Snyderphonics – instruments that train themselves as musicians play, rather than only the other way around.
He turned to his own work in generative models and artistic works like his Donald Trump “meat puppet,” but presented a scary image of what would happen if eventually analytic and generative machine learning models combined, producing music without human involvement:
“We’re nowhere near anything like this happening. But it’s worth asking now, if this technology comes to fruition, what does that mean about musicians? What is the future of musicians if algorithms can generate all the music we need?”
References: GRUV, a generative model for producing music
WaveNet, the DeepMind tech being used by Google for audio
Wesley Goatley: dark systems
Who he is: A London-based sound artist and researcher in “critical data aesthetics,” plumbing the meaning of data in his own work and as a media theorist
Topics: Capitalism, machines, aesthetics, Amazon Echo … and what they may all be doing to our own agency and freedom
Takeaways: Wesley began with “capitalism at machine-to-machine speeds,” then led to ways this informed systems that, hidden away from criticism, can enforce bias and power. In particular, he pitted claims like “it’s not minority report – it’s science; it’s math!” against the realities of how these systems were built – by whom, for whom, and with what reason.
“You are not working them; they are working you.”
As companies like Amazon and Google extend control, under the banner of words like “smart” and “ecosystem,” Wesley argues, what they’re really building is “dark systems”:
“We can’t get access or critique; they’re made in places that resemble prisons.”
The issue then becomes signal-to-noise. Data isn’t really ever neutral, so the position of power lets a small group of people set an agenda:
“[It] isn’t a constant; it’s really about power and space.”
Wesley on dark connectionism, from economics to design. Photo: CTM Festival / Isla Kriss.
Deconstructing an Amazon Echo – and data and AI as echo chamber. Photo: CTM Festival / Isla Kriss.
What John Cage can teach us: silence is never neutral, and neither is data.
Estela Oliva: emotionally intelligent futures
Estela also found dystopian possibilities – as bias, racism, and sexism are echoed by automated machines. (Contrast, indeed, the machine-to-machine amplification of those worst characteristics with the more hopeful human-machine artistic collaborations here – algorithmic capitalism versus individual humanism.)
But she also contrasted that with more emotionally intelligent futures, especially with the richness and dimensions of data sets:
“We need to build algorithms that represent our values better – but I’m just worried that unless we really talk about it more seriously, it’s not going to happen.”
Estela Oliva, framed by Memo Akten’s work. Photo: CTM Festival / Isla Kriss.
It was really a pleasure to put this together. There’s obviously a deep set of topics here, and ones I know we need to continue to cover. Let us know your thoughts – and we’re always glad to share in your research, artwork, and ideas.
The growing power of gaming architectures for visuals has a side benefit: it can produce elaborate visuals without touching the CPU, which is busy on musicians’ machines dealing with sound.
But how do you go about exploring some of that power? The code language spoken natively by the GPU is a little frightening at first. Fortunately, you can actually have a play in a few minutes. It’s easy enough that I prepared this lightning tutorial:
I shared this with the #RazerMusic program as it’s in fact a good artistic application for laptops with gaming architectures – and it’s terrific having that NVIDIA GTX 1060 with 6 GB of memory. (This example can’t even begin to show that off, in fact.) These steps will work on the Mac, too, though.
I’m stealing a demo here. Isadora creator Mark Coniglio showed off his team’s GLSL support more or less like this when they unveiled the feature at the Isadora Werkstatt a couple of summers ago. But Isadora – while known among live visualists and people working with dance and theater tech – is, I think, underrated. And sure enough, this support makes the powers of GLSL friendly to non-programmers: you can grab some shader code and then modify parameters or combine it with other effects, modular style, without delving into the code itself. And if you are learning GLSL (or experienced, even), Isadora provides an uncommonly convenient environment for working with graphics-accelerated generative visuals and effects.
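To give a sense of what “grabbing some shader code” looks like, here’s a minimal animated fragment shader. The uniform names are assumptions – hosts like Isadora, Shadertoy, and ISF each expose time and resolution under their own names, so rename them to match your environment:

```glsl
// Minimal animated fragment shader – a starting point to modify.
// NOTE: the uniform names below are assumptions; Isadora, Shadertoy, and ISF
// each supply time/resolution under different names, so rename to match.
uniform float time;       // seconds since start (host-supplied)
uniform vec2 resolution;  // output size in pixels (host-supplied)

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;            // normalize to 0..1
    float wave = 0.5 + 0.5 * sin(time + uv.x * 10.0);  // drifting sine bands
    gl_FragColor = vec4(uv.x * wave, uv.y, wave, 1.0);
}
```

Swap out the sin() expression or feed time into the uv math and you get radically different looks – exactly the kind of tweaking Isadora lets you do from parameter controls.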
If you’re not quite ready to commit to the tool, Isadora has a fully functional demo version, so you can get this far – and look around and decide whether buying a license is right for you. What I like about it is that, apart from some easy-to-use patching powers, Isadora’s scene-based architecture works well in live music, theater, dance, and other performance arts. (I still happily use it alongside stuff like Processing, Open Frameworks, and TouchDesigner.)
There is a lot of possibility here. And if you dig around, you’ll see pretty radically different aesthetics are possible, too.
Here’s an experiment also using mods to the GLSL facility in Isadora, by Czech artist Gabriela Prochazka (as I jam on one of my tunes live).