This free Ableton Live device makes images into wavetables

It’s the season of the wavetable – again. With Ableton Live 10.1 on the horizon and its free Wavetable device, we’ve got yet another free Max for Live device for making sound materials – and this time, you can make your wavetables from images.

Let’s catch you up first.

Ableton Live 10.1 will bring Wavetable as a new instrument to Standard and Suite editions – arguably one of the bigger native synth additions to Live in its history, ranking with the likes of Operator. And sure, as when Operator came out, you already have plug-ins that do the same; Ableton’s pitch is, as always, their unique approach to UI (love it or hate it), integration with the host, and … having it right in the box:

Ableton Live 10.1: more sound shaping, work faster, free update

Earlier this week, we saw one free device that makes wavetables for you, built as a Max for Live device. (Odds are anyone able to run this will have a copy of Live with Wavetable in it, since it targets 10.1, but it also exports to other tools). Wave Weld focuses on dialing in the sounds you need and spitting out precise, algorithmic results:

Generate wavetables for free, for Ableton Live 10.1 and other synths

One thing Wave Weld cannot do, however, is make a wavetable out of a picture of a cat.

For that, you want Image2Wavetable. The name says it all: it generates wavetable samples from image data.

This means if you’re handy with graphics software, or graphics code like Processing, you can also make visual patterns that generate interesting wavetables. It reminds me of my happy hours and hours spent using U&I Software’s ground-breaking MetaSynth, which employs some similar concepts to build an entire sound laboratory around graphic tools. (It’s still worth a spin today if you’ve got a Mac; among other things, it’s reportedly responsible for those sweeping digital sounds in the original Matrix film.)

Image2Wavetable is new – the creation of Dillon Bastan and Carlo Cattano – and there are some rough edges, so be patient; it sounds like they’re ready to hear feedback on how it works.

But the workflow is really simple: drag and drop image, drag and drop resulting wavetable into the Wavetable instrument.
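If you’re curious what’s going on under the hood, the core idea is simple enough to sketch. Here’s a minimal Python take on the general image-to-wavetable concept – not Image2Wavetable’s actual algorithm, just the gist – assuming Pillow and NumPy, and assuming 1024 samples per single-cycle frame (the size commonly cited for raw wavetables in Live 10.1):

```python
# A rough sketch of the image-to-wavetable idea (not the device's actual
# algorithm). Each image row becomes one single-cycle frame; pixel
# brightness maps to sample amplitude.
import wave

import numpy as np
from PIL import Image

FRAME_SIZE = 1024  # samples per frame (assumed size for Live 10.1 raw wavetables)

def image_to_wavetable(path, num_frames=64):
    img = Image.open(path).convert("L").resize((FRAME_SIZE, num_frames))
    rows = np.asarray(img, dtype=np.float32) / 255.0  # brightness 0..1
    frames = rows * 2.0 - 1.0                         # map to -1..1 audio range
    frames -= frames.mean(axis=1, keepdims=True)      # remove DC offset per frame
    return frames                                     # shape: (num_frames, FRAME_SIZE)

def save_wavetable(frames, out_path="wavetable.wav", sr=44100):
    pcm = (np.clip(frames.ravel(), -1, 1) * 32767).astype("<i2")  # 16-bit PCM
    with wave.open(out_path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sr)
        f.writeframes(pcm.tobytes())

save_wavetable(image_to_wavetable("cat.png"))  # yes, the cat picture works
```

Drag the resulting WAV onto Wavetable and you’re off.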

Okay, I suspect I know what I’m doing for the rest of the night.

Image2Wavetable Device [maxforlive.com]


Generate wavetables for free, for Ableton Live 10.1 and other synths

Wavetables are capable of a vast array of sounds. But just dumping arbitrary audio content into a wavetable is unlikely to get the results you want. And that’s why Wave Weld looks invaluable: it makes it easy to generate useful wavetables, in a free Max for Live add-on.

Ableton Live users are going to want their own wavetable maker very soon. Live 10.1 will add Wavetable, a new synth based on the technique. See our previous preview:

Ableton Live 10.1: more sound shaping, work faster, free update

Live 10.1 is in public beta now, and will be free to all Live 10 users soon.

So long as you have Max for Live to run it, Wave Weld will be useful with other synths as well – including the developer’s own Wave Junction.

Because wavetables are periodic by their very nature, it’s usually more helpful to generate content algorithmically than to just dump in sample content of your own. (Nothing against the latter – it’s definitely fun – but you may soon find yourself limited by the results.)

Wave Weld handles generating those materials for you, as well as exporting them in the format you need.

1. Make the wavetable: use waveshaping controls to dial in the sound materials you want.

2. Build up a library: adapt existing content or collect your own custom creations.

3. Export in the format you need: adjusting the size lets you support Live 10.1’s own device or other hardware and plug-ins. (There’s a quick sketch of this resizing step after the list.)
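That last step is mostly a resampling problem. Here’s a minimal Python/NumPy sketch of resizing a single-cycle frame to whatever a target synth expects – the 1024- and 2048-sample figures below are commonly cited frame sizes (Live 10.1 and Serum respectively, as far as I know), and nothing here is from Wave Weld itself:

```python
import numpy as np

some_cycle = np.sin(2 * np.pi * np.arange(600) / 600)  # any single cycle

def resize_frame(frame, target_len=1024):
    # Interpolate one single-cycle frame to the length a target synth
    # expects, treating the cycle as periodic so the seam stays smooth.
    x_old = np.linspace(0.0, 1.0, num=len(frame), endpoint=False)
    x_new = np.linspace(0.0, 1.0, num=target_len, endpoint=False)
    return np.interp(x_new, x_old, frame, period=1.0)

live_frame = resize_frame(some_cycle, 1024)   # e.g. Live 10.1's Wavetable
serum_frame = resize_frame(some_cycle, 2048)  # e.g. Serum
```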

The waveshaping features are really the cool part:

Unique waveshaping controls to generate custom wavetables
Sine waveshape phase shift and curve shape controls
Additive-style synthesis via a choice of twenty-four sine waveshape harmonics, for both positive and negative phase angles (the additive idea is sketched in code after this list)
Saw waveshape curve sharpen and partial controls
Pulse waveshape width, phase shift, curve smooth and curve sharpen controls
Triangle waveshape phase shift, curve smooth and curve sharpen controls
Random waveshape quantization, curve smooth and thinning controls
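To give a feel for what that additive-style control means in practice, here’s a minimal Python/NumPy sketch of the underlying idea – summing weighted sine harmonics into a single cycle. This is just the general technique, not Wave Weld’s implementation:

```python
import numpy as np

def additive_frame(harmonics, frame_len=1024):
    """Sum weighted sine harmonics into one single-cycle frame.
    `harmonics` maps harmonic number (1 = fundamental) to amplitude."""
    t = np.arange(frame_len) / frame_len          # one cycle, 0..1
    frame = np.zeros(frame_len)
    for n, amp in harmonics.items():
        frame += amp * np.sin(2 * np.pi * n * t)
    peak = np.abs(frame).max()
    return frame / peak if peak else frame        # normalize to -1..1

# Odd harmonics at 1/n amplitude approximate a square wave:
square_ish = additive_frame({n: 1.0 / n for n in range(1, 24, 2)})
```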

Wave Weld isn’t really intended as a synth, but one advantage of it being an M4L device is that you can easily preview sounds as you work.

More information on the developer’s site – http://metafunction.co.uk/wave-weld/

The download is free with a sign-up for their mailing list.

They’ve got a bunch of walkthrough videos to get you started, too:

Major kudos to Phelan Kane of Meta Function for this release. (Phelan is an Ableton Certified Trainer, a specialist in Reaktor and Maschine on the Native Instruments side, and London chairman for AES.)

I’m also interested in other ways to go about this – SuperCollider code, anyone?

Wavetable on!


Sonic Faction Valentine’s Sale: Save 30% on Max for Live devices & Kontakt instruments


Plugin Boutique has announced a Sonic Faction Valentine’s Sale, offering a 30% discount on its inspiring and creative range of Max for Live devices and Kontakt instruments. The sale includes popular products such as Tricky Traps, Whoosh Machine and Dope Matrix: Mod Squad for Max for Live, the Archetype Kontakt Bundle and the Futurism hybrid […]


Magenta Studio lets you use AI tools for inspiration in Ableton Live

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, it lets you generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Electron (a popular cross-platform JavaScript tool), though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, even applying GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.
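If that’s still abstract, a toy example may help. A tensor here is just an n-dimensional array of numbers, and a neural network layer is a few of them combined with simple arithmetic – here sketched in plain NumPy, nothing TensorFlow-specific:

```python
import numpy as np

x = np.random.rand(8)       # input vector (a rank-1 tensor)
W = np.random.rand(16, 8)   # weights (a rank-2 tensor)
b = np.zeros(16)            # biases
h = np.tanh(W @ x + b)      # one dense layer: combine tensors, squash the result
print(h.shape)              # (16,) – ready to feed into the next layer
```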

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

That also has its own Ableton Live device, from a couple years back.

https://magenta.tensorflow.org/nsynth-instrument

NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see here for notes and rhythms to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, since they have a character of their own – and you can again play around in Ableton Live.

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that feeds its own output back in, step after step. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly over a particular data set means it can predict sequences more and more effectively.
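To make that loop concrete, here’s a minimal NumPy sketch of one step of a vanilla RNN – a deliberately simplified stand-in, not Magenta’s actual models, which are considerably more sophisticated:

```python
import numpy as np

def rnn_step(x, h, Wxh, Whh, Why, bh, by):
    # Mix the current input (say, a one-hot encoded note) with the
    # previous hidden state, then score every possible next note.
    h = np.tanh(Wxh @ x + Whh @ h + bh)   # updated "memory" of the sequence
    logits = Why @ h + by                 # scores over the note vocabulary
    return logits, h

# Run this in a loop, feeding each prediction back in as the next input,
# and you have a sequence generator; training adjusts the weight matrices
# so those predictions match the patterns in the data set.
```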

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.

One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature,” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence the new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and length in bars.
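Temperature has a precise meaning in this kind of sampling, and it’s easy to sketch – this is a generic Python illustration of the standard technique, not Magenta Studio’s internals:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    # Divide the model's raw scores by the temperature before softmax:
    # low values sharpen the distribution (safe, predictable picks),
    # high values flatten it (more surprising picks).
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```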

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is largely set up with expectations about what a drum kit is, and with melodies around a 12-tone equal-tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them. (There’s a conceptual sketch of this after the list.)

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is built from 15 hours of real drummers playing, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
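Interpolate, in particular, is easy to picture once you know MusicVAE encodes each clip into a compact vector of numbers (a “latent” code). A conceptual Python sketch – the real models decode these codes back into note sequences, and the research also discusses fancier spherical blending, but the gist really is this simple:

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=4):
    # Blend between the latent codes of clip A and clip B; decoding each
    # blend yields a new clip "in between" the two inputs.
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

# Placeholder codes standing in for two encoded clips:
blends = interpolate_latents(np.random.rand(512), np.random.rand(512))
```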

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were dealing with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

Where this could go next

There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with pure technical inquiry of the past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.

As Jesse Engel tells CDM:

We’re a research group (not a Google product group), which means that Magenta Studio is not static, and a lot more interesting models are probably on the way.

Things like more impressive MIDI generation (https://magenta.tensorflow.org/music-transformer – check out “Score Conditioned”)

And state of the art transcription: (https://magenta.tensorflow.org/onsets-frames)

And new controller paradigms:
(https://magenta.tensorflow.org/pianogenie)

Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.

So okay, music makers – have at it:

g.co/magenta
g.co/magenta/studio


Build your own signal flow with Chaos Culture’s new Max for Live toolbox


Isotonik Studios has announced the release of Signal by Chaos Culture, a set of Max for Live building blocks that lets you construct your own signal flow, which can be used to create audio, control voltage, or just modulate anything in Live. With Signal, Chaos Culture has created a system that lets you quickly […]


Audiomodern Holiday Sale: Save up to 60% off Max for Live devices & Riffer plugin


Plugin Boutique has launched an exclusive sale on Audiomodern, offering discounts of up to 60% off regular prices on its Random Chords, Groove and Riff Generator Max For Live devices and Riffer MIDI plugin. Random Chords Generator PRO is an instrument built to create MIDI chords from the MIDI notes received. It comes with 52 chord […]


More surprise in your sequences, with ESQ for Ableton Live

With esoteric instruments, effects, and sequencers whose interfaces look lifted from a Romulan warbird, K-Devices have been spawning surprising outcomes in Ableton Live for some time now. ESQ is the culmination of that: a cure for preset sounds and ideas in a single device.

You likely know the problem already: all of the tools in software like Ableton Live that make it easy to quickly generate sounds and patterns also tend to do so in a way that’s … always the same. So instead of being inspiring, you can quickly feel stuck in a rut.

ESQ is a probability-based sequencer: you adjust a few controls – velocity, chance, and relative delay for each step – to generate a wide variety of possibilities. You can create polyrhythms (multiple tracks of the same length, but different steps), or different-length tracks; you can copy and paste; and there are various random functions to keep things fresh. The results are still somehow yours – maybe even more so – it’s just that you use probability and generative rules to get to what you want when you aren’t sure how to describe it. Or maybe before you knew you wanted it.
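The core mechanic is simple to sketch, even if ESQ’s implementation is far richer. Here’s a minimal Python illustration of per-step chance, velocity, and delay – the parameter names here are mine, not K-Devices’:

```python
import random

def run_pass(steps, rng=random):
    """One pass through a track. Each step is a dict with 'chance' (0..1),
    'velocity' (1..127) and 'delay' (a fraction of a step, offsetting the
    grid). Yields an event only when the step's probability check passes."""
    for i, step in enumerate(steps):
        if rng.random() < step["chance"]:
            yield i, step["velocity"], step["delay"]

# Two tracks over the same bar with different step counts = polyrhythm:
track_a = [{"chance": 0.9, "velocity": 100, "delay": 0.00}] * 16
track_b = [{"chance": 0.5, "velocity": 80,  "delay": 0.02}] * 12
```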

Because you can trigger up to 12 notes, you can use ESQ to turn bland presets into something unexpected (like working with preset Live patches). Or you can use it as a sequencer with all those fun modular toys we’ve been talking about lately (VCV Rack, Softube Modular, Cherry Audio Voltage Modular, and so on) – because 5- and 8-step sequencers are often just dull.

There’s no sound produced by ESQ – it’s just a sequencer – but it can have a big enough impact on devices that this “audio” demo is just one instance of ESQ and one Drum Rack. Even those vanilla kits start to get more interesting.

K-Devices has been working this way for a while, but ESQ feels like a breakthrough. The generative sequence tools are uniquely complete and especially powerful for producing rhythms. You can make this sound crazy and random and IDM-y, but you can also add complexity without heading into deep space – it’s really up to you.

And they’ve cleverly made two screens – a full parameter screen that gets deep and detailed, and a compact device screen that lets you shift everything with single gestures or adjust parameters as macros – ideal for live performance or for making bigger changes.

It seems like a good wildcard to keep at your disposal … for any of those moments when you’re getting stuck and boring.

And yes, of course Richard Devine already has it:

But you can certainly make things unlike Devine, too, if you want.

Right now ESQ is on sale, 40% off through December 31 – €29 instead of €49. So it can be your last buy of 2018.

Have fun, send sequences!

https://k-devices.com/products/esq/


K-Devices releases ESQ for Ableton Live, Next-Gen Patterns Beatstation


K-Devices has announced a new Max For Live MIDI tool designed for the generation and advanced manipulation of patterns and beats. A new addition to the K-Devices Out of the Grid (OOG) series, ESQ is based on several sound synthesis techniques, adapted to a standard step sequencer to deliver incredible new flexibility. Each application in […]


Sonic Bloom intros ConChord chord step sequencer for Max for Live


Sonic Bloom has announced ConChord by Max for Cats, a pulse-driven chord step sequencer for Ableton Live. The device features one-finger transposition and lets you create much more musical and intricate patterns on the spot. The MIDI effect offers 8 steps, which are triggered by pulses, and each step can have a […]


Ableton Live Looping gets its own custom controller

A crowd-funded custom controller has just arrived on the scene, designed to assist live triggering and looping in Ableton Live. And there’s already a free download for Max for Live to get you started, even without the hardware.

Hardware like Ableton’s Push lets you play Live with your fingers – but what about your feet? (Ableton Sole?) And what about looping? Pierre-Antoine Grison, Ableton Certified Trainer and producer/musician signed to Ed Banger Records, has come up with his own solution – just in time to show it this weekend at Ableton’s aptly-titled Loop “summit for music makers.” “State Of The Loop” is a custom MIDI controller for Ableton Live’s built-in Looper device.

The Looper in Ableton Live has been around for a few versions, after loads of requests from users. It delivered the kind of looping workflows you’d expect from a looping pedal. But that doesn’t mean everyone knows how to use it, or use it effectively. There are some nice resources online, including:

Ableton Looper Cheat Sheet (Free Download) [Beat Lab Academy]

Ableton Live Devices – How To Use Live Looper [Loopmasters.com articles]

and a ton of tips here:
http://looping.me.uk/category/ableton/

The stomp-style hardware controls not only the Looper device itself but also scenes. So it works both for controlling entire sets and for pedal-style looping, and you can use multiple (software) loopers so you can layer using different on-screen devices.

Features:

Display and control the state of Live’s Looper
Unlimited number of loopers!
2 Expression Pedal inputs with “dynamic mapping”
Scenes Mode to launch Scenes and display their color and name
Sturdy metal case
100% Made in France
USB or MIDI connection for longer distances (up to 15m/50ft)
USB powered
Very light on the CPU
Easy configuration
Weight: 1.7 kg / 3.7 lb
WxLxH: 30 x 13 x 6 cm / 12 x 5 x 2.5 inches

There’s even a free download that adds some features Ableton Live forgot – the equivalent of follow actions for scenes, plus a heads-up display so you can see what’s happening without hunching over your computer screen. (Seriously, Ableton, those belong as standard features in Live!)

You can use that download as long as you have a compatible version of Live and Max for Live; no hardware needed.

http://kblivesolutions.com/wp-content/uploads/2018/11/Scene-Launcher.zip

Dig this custom version too:

Pricing starts at 240EUR for “early bird” buyers, 260EUR after that. (There’s also a 350EUR limited edition still available as I write this.)

Project info on Kickstarter:

http://kck.st/2SH5gJE
