Devious Machines Texture multi-fx plugin on sale for $59 USD


Plugin Boutique has launched a sale on the Texture effect plugin by Devious Machines, offering a 38% discount for a limited time as part of its 12 Days of Christmas promotion. Texture comes with over 340 sampled, granular and generative sound sources to enhance, shape and transform your sounds. Drawing from a library of over […]


More surprise in your sequences, with ESQ for Ableton Live

With esoteric instruments, effects, and sequencers whose interfaces look lifted from a Romulan warbird, K-Devices has been spawning surprising outcomes in Ableton Live for some time now. ESQ is the culmination of that: a cure for preset sounds and ideas in a single device.

You likely know the problem already: all of the tools in software like Ableton Live that make it easy to quickly generate sounds and patterns also tend to do so in a way that’s … always the same. So instead of being inspiring, you can quickly feel stuck in a rut.

ESQ is a probability-based sequencer: you adjust a few controls – velocity, chance, and relative delay for each step – to generate a wide variety of possibilities. You can create polyrhythms (multiple tracks of the same length, but with different numbers of steps) or different-length tracks, you can copy and paste, and there are various random functions to keep things fresh. The results are still somehow yours – maybe even more so – it’s just that you use probability and generative rules to get to what you want when you aren’t sure how to describe it. Or maybe before you knew you wanted it.
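K-Devices hasn’t published ESQ’s engine, but the core mechanic – per-step chance, velocity, and delay, evaluated across tracks of different lengths – is easy to sketch. A minimal Python illustration (the track lengths, note numbers, and ranges here are all invented for the example):

```python
import random

def make_track(steps, chance=0.5):
    """One track: each step has a firing probability, velocity, and delay."""
    return [{"chance": chance,
             "velocity": random.randint(60, 127),
             "delay": random.uniform(0.0, 0.5)}  # fraction of a step late
            for _ in range(steps)]

def tick(tracks, clock):
    """Evaluate every track at the current step clock; fire probabilistically."""
    events = []
    for note, track in tracks.items():
        step = track[clock % len(track)]  # different lengths = polyrhythms
        if random.random() < step["chance"]:
            events.append((note, step["velocity"], step["delay"]))
    return events

# A 16-step hat, a 12-step snare, and a 7-step perc line drift against
# each other, while chance keeps every pass slightly different.
tracks = {42: make_track(16, 0.9), 38: make_track(12, 0.4), 43: make_track(7, 0.6)}
for clock in range(32):
    for note, vel, delay in tick(tracks, clock):
        print(f"step {clock:2d}: note {note} vel {vel:3d} +{delay:.2f}")
```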

Because it can trigger up to 12 notes, you can use ESQ to turn bland presets – stock Live patches included – into something unexpected. Or you can use it as a sequencer for all those fun modular toys we’ve been talking about lately (VCV Rack, Softube Modular, Cherry Audio Voltage Modular, and so on) – because 5- and 8-step sequencers are often just dull.

There’s no sound produced by ESQ – it’s just a sequencer – but it can have a big enough impact on devices that this “audio” demo is just one instance of ESQ and one Drum Rack. Even those vanilla kits start to get more interesting.

K-Devices has been working this way for a while, but ESQ feels like a breakthrough. The generative sequence tools are uniquely complete and especially powerful for producing rhythms. You can make this sound crazy and random and IDM-y, but you can also add complexity without heading into deep space – it’s really up to you.

And they’ve cleverly made two screens: a full parameter screen that gets deep and detailed, and a compact device screen that lets you shift everything with single gestures or adjust everything as macros – ideal for live performance or for making bigger changes.

It seems like a good wildcard to keep at your disposal … for any of those moments when you’re getting stuck and boring.

And yes, of course Richard Devine already has it.

But you can certainly make things unlike Devine, too, if you want.

Right now ESQ is on sale, 40% off through December 31 – €29 instead of €49. So it can be your last buy of 2018.

Have fun, send sequences!

https://k-devices.com/products/esq/


K-Devices releases ESQ for Ableton Live, Next-Gen Patterns Beatstation


K-Devices has announced a new Max For Live MIDI tool designed for the generation and advanced manipulation of patterns and beats. A new addition to the K-Devices Out Of Grid (OOG) series, ESQ is based on several sound synthesis techniques, adapted to a standard step sequencer to deliver incredible new flexibility. Each application in […]


EVAbeat launches MelodySauce A.I. Melody Collaborator VST/AU


EVAbeat has announced the release of MelodySauce, a powerful VST MIDI melody creator plugin for Windows and Mac. The plugin features a completely redesigned interface, new creation controls and improved generative algorithms. MelodySauce is a VST plugin that uses advanced generative algorithms to help you create instrumental melodies as MIDI in your DAW. A co-creation […]


Save over 90% on the Glitchmachines Cataract segment multiplexer plugin!

Plugin Boutique has launched an exclusive sale on the Cataract segment multiplexer for electronic music production and experimental sound design by Glitchmachines. Cataract features an arsenal of sample scanners with integrated modulation sequencers, generative parameters and various morphing functions. This makes it possible to construct architecturally complex patterns ranging from nuanced percussive articulations to intricate […]

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

Machine learning is presented variously as nightmare and panacea, gold rush and dystopia. But a group of artists hacking away at CTM Festival earlier this year did something else with it: they humanized it.

The MusicMakers Hacklab continues our collaboration with CTM Festival, and this winter I co-facilitated the week-long program in Berlin with media artist and researcher Ioann Maria (born in Poland, now in the UK). Ioann has long brought critical speculative imagination to her work (meaning, she gets weird and scary when she has to), as well as being able to wrangle large groups of artists and the chaos the creative process produces. Artists are a mess – as they need to be, sometimes – and Ioann can keep them comfortable with that and moving forward. No one could have been more ideal, in other words.

And our group delved boldly into the possibilities of machine learning. Most compellingly, I thought, these ritualistic performances captured a moment of transformation for our own sense of being human, as if folding this technological moment in against itself to reach some new witchcraft, to synthesize a new tribe. If we were suddenly transported to a cave with flickering electronic light, my feeling was that this didn’t necessarily represent a retreat from tech. It was a way of connecting some long human spirituality to the shock of the new.

This wasn’t just speculation about what AI would do to people, though. Machine learning applications were turned into interfaces, letting gestures and machines interact more fluently. The free, artist-friendly Wekinator was a popular choice (see the sketch below). That stands in contrast to corporate-funded AI and how that’s marketed – which is largely as a weird consumer convenience. (Get me food reservations tonight without me actually talking to anyone, and then tell me what music to listen to and who to date.)
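For context, Wekinator’s basic workflow – record a few example pairs of gesture input and desired parameter output, train a model, then play – amounts to supervised regression. A rough Python equivalent using scikit-learn (the gesture features and synth parameters here are invented for illustration; Wekinator itself does this over OSC, no code required):

```python
from sklearn.neural_network import MLPRegressor

# Training pairs: gesture features in (say, accelerometer x/y/z),
# synth parameters out (say, filter cutoff and reverb mix, both 0..1).
gestures = [[0.0, 0.1, 0.9],
            [0.8, 0.7, 0.1],
            [0.4, 0.5, 0.5]]
params   = [[0.10, 0.90],
            [0.95, 0.05],
            [0.50, 0.50]]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(gestures, params)

# An unseen gesture now maps to an interpolated point between the examples,
# which is what makes the instrument feel continuous rather than switched.
print(model.predict([[0.6, 0.6, 0.3]]))
```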

Here, instead, artists took machine learning algorithms and made them another raw material for creating instruments. This was AI helping machines better enable performance traditions. And this is partly our hope in who we bring to these performance hacklabs: we want people with experience in code and electronics, but also performance media, musicology, and culture, in various combinations.

(Also spot some kinetic percussion in the first piece, courtesy dadamachines.)

Check out the short video excerpt or scan through our whole performance documentation. All documentation courtesy CTM Festival – thanks. (Photos: Stefanie Kulisch.)

Big thanks to the folks who give us support. The CTM 2018 MusicMakers Hacklab was presented with Native Instruments and SHAPE, which is co-funded by the Creative Europe program of the European Union.

Full audio (which makes for a nice sort of radio play, somehow, thanks to all these beautiful sounds):

Full video:

2018 participants – all amazing artists, and ones to watch:

Adrien Bitton
Alex Alexopoulos (Wild Anima)
Andreas Dzialocha
Anna Kamecka
Aziz Ege Gonul
Camille Lacadee
Carlo Cattano
Carlotta Aoun
Claire Aoi
Damian T. Dziwis
Daniel Kokko
Elias Najarro
Gašper Torkar
Islam Shabana
Jason Geistweidt
Joshua Peschke
Julia del Río
Karolina Karnacewicz
Marylou Petot
Moisés Horta Valenzuela AKA ℌEXOℜℭℑSMOS
Nontokozo F. Sihwa / Venus Ex Machina
Sarah Martinus
Thomas Haferlach

https://www.ctm-festival.de/archive/festival-editions/ctm-2018-turmoil/transfer/musicmakers-hacklab/

http://ioannmaria.com/

For some of the conceptual and research background on these topics, check out the Input sessions we hosted. (These also clearly inspired, frightened, and fired up our participants.)

A look at AI’s strange and dystopian future for art, music, and society

Minds, machines, and centralization: AI and music


AutoTrig and TATAT generate rhythms for Ableton, modular gear

Composer Alessio Santini is back with more tools for Ableton Live, both intended to help you get off the grid and generate elaborate, insane rhythms.

Developer K-Devices, Santini’s music software house, literally calls this series “Out Of Grid,” or OOG for short. They’re a set of Max for Live devices with interfaces that look like the flowcharts inside a nuclear power plant, but the idea is all about making patterns.

AutoTrig: multiple tracks of shifting structures and grooves, based on transformation and probability, primarily for beat makers. Includes Push 2, outboard modular/analog support.

TATAT: input time, note, and parameter structures, output melodic (or other) patterns. Control via MIDI keyboard, and export to clips (so you can dial up settings until you find some clips you like, then populate your session with those).

AutoTrig spits out multiple tracks of rhythms for beat mangling.

And for anyone who complains that rhythms are repetitive, dull, and dumb on computers, these tools do none of that. This is about climbing into the cockpit of an advanced alien spacecraft, mashing some buttons, and then getting warped all over hyperspace, your face melting into another dimension.

Here’s the difference: those patterns are generated by an audio engine, not a note or event engine per se. So the things you’d do to shape an audio signal – sync, phase distortion – spit out complex and (if you like) unpredictable streams of notes or percussion, translating that fuzzy audio world into the MIDI events you use elsewhere.
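K-Devices doesn’t document the engine’s internals, so treat this as a loose sketch of the general principle rather than how OOG actually works: run a phase ramp at audio rate, hard-sync it to a master ramp, and fire an event whenever the (phase-distorted) ramp crosses a comparator threshold. Sync makes the cycle lengths irregular; distortion shifts where each trigger lands inside its cycle:

```python
def sync_triggers(master_hz, slave_hz, distort=2.0, threshold=0.5,
                  sr=44100, seconds=1.0):
    """Hard-synced slave ramp + comparator = one trigger per (truncated) cycle."""
    triggers, master, slave, fired = [], 0.0, 0.0, False
    inc_m, inc_s = master_hz / sr, slave_hz / sr
    for n in range(int(sr * seconds)):
        master += inc_m
        if master >= 1.0:              # master wrap: hard-sync the slave
            master -= 1.0
            slave, fired = 0.0, False
        slave += inc_s
        if slave >= 1.0:               # natural slave wrap
            slave -= 1.0
            fired = False
        shaped = slave ** distort      # crude phase distortion of the ramp
        if not fired and shaped >= threshold:
            triggers.append(n / sr)    # trigger time in seconds
            fired = True
    return triggers

print(sync_triggers(3.0, 7.0)[:8])     # first eight trigger times
```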

TATAT is built more for melodic purposes, but the main thing here is, you can spawn patterns using time and note structures. And you can even save the results as clips.

And that’s only if you stay in the box. If you have some analog or modular gear, you can route these signals to it directly, making Ableton Live a brain for spawning musical events outside via control voltage connection. (Their free MiMu6 Max for Live device handles this, making use of the multichannel support added to Max for Live in Live 10.)
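The CV part is less exotic than it sounds: with a DC-coupled audio interface, a steady sample value is a steady voltage, and analog pitch inputs usually follow the 1V/oct standard. A tiny sketch of the conversion (the 10V full-scale output is an assumption – check your interface’s specs):

```python
def note_to_cv(midi_note, ref_note=60, full_scale_volts=10.0):
    """Map a MIDI note to a normalized sample value for a DC-coupled output.

    1V/oct: every 12 semitones above the reference adds one volt.
    """
    volts = (midi_note - ref_note) / 12.0
    return volts / full_scale_volts   # sample value of 1.0 = full_scale_volts

print(note_to_cv(72))  # one octave up -> 1 V -> sample value 0.1
```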

Making sense of this madness is a set of features to produce some order, like snapshots and probability switches on AutoTrig, and sliders that adjust timing and probability on TATAT. TATAT also lets you use a keyboard to set pitch, so you can use it more easily live.

If you were just sent into the wilderness with these crazy machines, you might get a bit lost. But they’ve built a pack for each so you can try out sounds. AutoTrig works with a custom Push 2 template, and TATAT works well with any MIDI controller.

Pricing:
AutoTrig €29 ($34 US)
TATAT €29 ($34 US)
Bundle AutoTrig + TATAT €39 ($45 US)

Bundle MOOR + Twistor + AutoTrig + TATAT €69 ($81 US)

They’ve presumably already worked out that this sort of thing will appeal mainly to the sorts of folks who read CDM, as they’ve made a little discount coupon for us.

The code is “koog18”

Enter that at checkout, and your pricing is reduced to €29 ($34 US) for both AutoTrig and TATAT.

Check out their stuff on the K-Devices site:

OOG part 2: AutoTrig and TATAT, lunatic Max For Live devices

https://k-devices.com/

See, the problem with this job is, I find a bunch of stuff that would require me to quit this job to actually use. But … I will find a way to play with Monday’s sequencing haul! I know we all feel the same pain there.

Here we go in videos:


A look at AI’s strange and dystopian future for art, music, and society

Machine learning and new technologies could unlock new frontiers of human creativity – or they could take humans out of the loop, ushering in a new nightmare of corporate control. Or both.

Machine learning, the field of applying neural networks to data analysis, unites a range of issues from technological to societal. And audio and music are very much at the center of the transformative effects of these technologies. Commonly dubbed (partly inaccurately) “artificial intelligence,” they suggest a relationship between humans and machines, individuals and larger state and corporate structures, far beyond what has existed traditionally. And that change has gone from far-off science fiction to a reality that’s very present in our homes, our lives, and of course the smartphones in our pockets.

I had the chance to co-curate with CTM Festival a day of inputs from a range of thinkers and artist/curators earlier this year. Working with my co-host, artist and researcher Ioann Maria, we packed a day full of ideas and futures both enticing and terrifying. We’ve got that full afternoon, even including audience discussion, online for you to soak in.

Me, with Moritz, pondering the future. Photo: CTM Festival / Isla Kriss.

And there are tons of surprises. There are various terrifying dystopias, with some well-reasoned arguments for why they might actually come to fruition (or evidence demonstrating these scenarios are already in progress). There are more hopeful visions of how to get ethics, and humans, back in the loop. There are surveys of artistic responses.

All of this kicked off our MusicMakers Hacklab at CTM Festival, which set a group of invited artists on collaborative, improvisatory explorations of these same technologies as applied to performance.

These imaginative and speculative possibilities become not just idle thoughts, but entertaining and necessary explorations of what might be soon. This is the Ghost of Christmas Yet-to-Come, if a whole lot more fun to watch, here not just to scare us, but to spur us into action and invention.

Let’s have a look at our four speakers.

Machine learning and neural networks

Moritz Simon Geist: speculative futures

Who he is: Moritz is an artist and researcher; he joined us for my first-ever event for CTM Festival with a giant robotic 808, but he’s just as adept at researching history and the future.

Topics: Futurism, speculation, machine learning and its impact on music, body enhancement and drugs

Takeaways: Moritz gives a strong introduction to style transfer and other machine learning techniques, then jumps into speculating on where these could go in the future.

In this future, remixes and styles and timbres might all become separate from a more fluid creativity – but that might, in turn, dissolve artistic value.

“In the future … music will not be conceived as an art form any more.” – Moritz Simon Geist

Then, Moritz goes somewhere else entirely – dreaming up speculative drugs that could transform humans, rather than only machines. (The historical basis for this line of thought: Alexander Shulgin and his drug notebooks, which might even propose a drug that transforms perception of pitch.)

Moritz imagines an “UNSTYLE” plug-in that can extract vocals – then change genre.

What if self-transformation – or even fame – were in a pill?

Gene Kogan: future dystopias

Who he is: An artist/technologist who works with generative systems and their overlap with creativity and expression. Don’t miss Gene’s expansive open source resource for code and learning, Machine Learning for Artists.

Topics: Instrument creation, machine learning – and eventually AI’s ability to generate its own music

Takeaways: Gene’s talk began with “automation of songwriting, production, and curation” as a topic – but tilted far enough toward dystopia that he changed the title.

“This is probably going to be the most depressing talk.”

In a more hopeful vision, he presented the latest work of Snyderphonics – instruments that train themselves as musicians play, rather than only the other way around.

He turned to his own work in generative models and artistic works like his Donald Trump “meat puppet,” but presented a scary image of what would happen if analytic and generative machine learning models eventually combined, producing music without human involvement:

“We’re nowhere near anything like this happening. But it’s worth asking now, if this technology comes to fruition, what does that mean about musicians? What is the future of musicians if algorithms can generate all the music we need?”

References: GRUV, a generative model for producing music

WaveNet, the DeepMind tech being used by Google for audio

Sander Dieleman’s content-based recommendations for music

Gene presents – the death of the human musician.

Wesley Goatley: machine capitalism, dark systems

Who he is: A sound artist and researcher in “critical data aesthetics,” based in London, plumbing the meaning of data in his own work and as a media theorist

Topics: Capitalism, machines, aesthetics, Amazon Echo … and what they may all be doing to our own agency and freedom

Takeaways: Wesley began with “capitalism at machine-to-machine speeds,” then traced how this informs systems that, hidden away from criticism, can enforce bias and power. In particular, he pitted claims like “it’s not Minority Report – it’s science; it’s math!” against the realities of how these systems were built – by whom, for whom, and with what reason.

“You are not working them; they are working you.”

As companies like Amazon and Google extend control, under the banner of words like “smart” and “ecosystem,” Wesley argues, what they’re really building is “dark systems”:

“We can’t get access or critique; they’re made in places that resemble prisons.”

The issue then becomes signal-to-noise. Data isn’t really ever neutral, so the position of power lets a small group of people set an agenda:

“[It] isn’t a constant; it’s really about power and space.”

Wesley on dark connectionism, from economics to design. Photo: CTM Festival / Isla Kriss.

Deconstructing an Amazon Echo – and data and AI as echo chamber. Photo: CTM Festival / Isla Kriss.

What John Cage can teach us: silence is never neutral, and neither is data.

Estela Oliva: digital artists respond

Who she is: Estela is a creative director / curator / digital consultant, an anchor of London’s digital art scene, with work on Alpha-ville Festival, a residency at Somerset House, and her new Clon project.

Topics: Digital art responding to these topics, in hopeful and speculative and critical ways – and a conclusion to the dystopian warnings woven through the afternoon.

Takeaways: Estela grounded the conclusion of our afternoon in a set of examples from across digital arts disciplines and perspectives, showing how AI is seen by artists.

Works shown:

Terence Broad and his autoencoder

Sougwen Chung and Doug, her drawing mate

https://www.bell-labs.com/var/articles/discussion-sougwen-chung-about-human-robotic-collaborations/

Marija Bozinovska Jones and her artistic reimaginings of voice assistants and machine training:

Memo Akten’s work (also featured in the image at top), “you are what you see”

Archillect’s machine-curated feed of artwork

Superflux’s speculative project, “Our Friends Electric”:


Estela also found dystopian possibilities, as bias, racism, and sexism are echoed in the automated machines. (Contrast, indeed, the machine-to-machine amplification of those worst characteristics with the more hopeful human-machine artistic collaborations here – algorithmic capitalism versus individual humanism, perhaps.)

But she also contrasted that with more emotionally intelligent futures, especially with the richness and dimensions of data sets:

“We need to build algorithms that represent our values better – but I’m just worried that unless we really talk about it more seriously, it’s not going to happen.”

Estela Oliva, framed by Memo Akten’s work. Photo: CTM Festival / Isla Kriss.

It was really a pleasure to put this together. There’s obviously a deep set of topics here, and ones I know we need to continue to cover. Let us know your thoughts – and we’re always glad to share in your research, artwork, and ideas.

Thanks to CTM Festival for hosting us.

https://www.ctm-festival.de/news/


Ableton launches Creative Extensions for Live 10 Suite

Ableton has released Creative Extensions, a new Pack for Live 10 Suite that stretches sound creation and transformation possibilities in Live to new boundaries. Eight new tools that add punch, color and texture to Live – Creative Extensions is a new Pack included in Live 10 Suite made for experimentation, sound processing and generative composition. […]

How to try GPU-accelerated live visuals in a few steps, for free

The growing power of gaming architectures for visuals has a side benefit: it can produce elaborate visuals without touching the CPU, which is busy on musicians’ machines dealing with sound.

But how do you go about exploring some of that power? The code language spoken natively by the GPU is a little frightening at first. Fortunately, you can actually have a play in a few minutes. It’s easy enough that I prepared this lightning tutorial:

I shared this with the #RazerMusic program as it’s in fact a good artistic application for laptops with gaming architectures – and it’s terrific having that NVIDIA GTX 1060 with 6 GB of memory. (This example can’t even begin to show that off.) These steps will work on the Mac, too, though.

I’m stealing a demo here. Isadora creator Mark Coniglio showed off his team’s GLSL support more or less like this when they unveiled the feature at the Isadora Werkstatt a couple of summers ago. But Isadora, while known among a handful of live visualists and people working with dance and theater tech, is itself, I think, underrated. And sure enough, this support makes the powers of GLSL friendly to non-programmers. You can grab some shader code and then modify parameters or combine it with other effects, modular style, without delving into the code itself. Or if you are learning GLSL (or experienced, even), Isadora provides an uncommonly convenient environment to work with graphics-accelerated generative visuals and effects.
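If you’ve never looked at shader code, here’s roughly the kind of thing you’d paste in – a minimal Shadertoy-style GLSL fragment shader (animated color bands; `iTime` and `iResolution` are Shadertoy’s standard uniforms, and Isadora’s GLSL support exposes similar inputs you can wire to parameters):

```glsl
// Runs once per pixel, entirely on the GPU: animated, sine-warped color bands.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;           // normalize to 0..1
    float bands = sin(uv.x * 20.0 + iTime * 2.0)    // scrolling vertical bands
                + sin(uv.y * 14.0 - iTime * 1.3);   // crossed with horizontal ones
    vec3 color = 0.5 + 0.5 * cos(iTime + bands + vec3(0.0, 2.0, 4.0));
    fragColor = vec4(color, 1.0);
}
```

Change any constant and the whole look shifts – which is exactly the kind of parameter-tweaking Isadora surfaces without making you edit the code itself.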

If you’re not quite ready to commit to the tool, Isadora has a fully functional demo version, so you can get this far – and look around and decide if buying a license is right for you. What I do like about it is, apart from some easy-to-use patching powers, Isadora’s scene-based architecture works well in live music, theater, dance, and other performance arts. (I still happily use it alongside stuff like Processing, openFrameworks, and TouchDesigner.)

There is a lot of possibility here. And if you dig around, you’ll see pretty radically different aesthetics are possible, too.

Here’s an experiment also using mods to the GLSL facility in Isadora, by Czech artist Gabriela Prochazka (as I jam on one of my tunes live).

Resources:

https://troikatronix.com/

https://www.shadertoy.com/

Planning to do more like this, so open to requests!
