FL Studio 20.7 adds MIDI Scripting, new music video tools, in latest free update

FL Studio just never slows down. The latest free update offers new MIDI scripting features opening up more hands-on controller support – and new powers to make your own music videos, among other add-ons. FL is funny, in that it just does so much. There are tiny little toys that wind up proving to be […]


Here’s ten hours of infinite fractals and falling Shepard’s Tones

Yesterday was one of the stranger 24-hour stretches in the 15+ year history of producing this site, as you may have heard. So here is a palate cleanser, then, in case you have a … wine hangover.

Please be careful if you’re prone to epilepsy.

Oh, and welcome, new CDM readers! If you’re confused and wonder if it’s always like this here, I am confused, too, and … yes. It pretty much is. There’s a mailing list, if you want this to continue.

If ten hours isn’t enough of this sound for some reason, there’s also an online generator with binaural output so you can really trip out (wear headphones).

There are some terrific background notes already on the video, which can lead you into a nice research audiovisual linkhole to match your synesthesia trip:

Video: Used with permission from the animated fractal’s creator, Vladimir Bulatov. Check out his YouTube page here: http://bit.ly/13jdZ7e , and his DeviantArt page here: http://bit.ly/Yx2S78 .

The video is a fractal version of M.C. Escher’s “Circle Limit III” (http://bit.ly/bbJ9P) created by Bulatov.

Audio: the Shepard’s Tone (Shepard’s Scale) consists of rising tones set octaves apart, similar to how a barber’s pole always seems to be rising: http://bit.ly/tlSj Interestingly, Batman’s Batpod in “The Dark Knight” uses a Shepard Tone effect to make the motorcycle seem to have an infinitely rising tone: http://bit.ly/Wfa8WS In classical music, the Shepard’s Scale is used in pieces like Bach’s “Canon Per Tonos” (the endlessly rising canon) to have the piece seem to end an octave higher than it began while actually ending on the same note: http://bit.ly/YTWtzC

Many have said that they experience a falling sensation or a feeling of imbalance when they watch this combination for a long time. Reddit user “Berkel” reported that playing the Shepard Tone near a sleeping friend gave the friend a visceral falling dream; the friend woke up very scared: http://bit.ly/XVwfi8

We would not recommend trying this! We take no responsibility for visceral dreams, dizziness, fatigue, sweaty palms, seizures, lack of friends, or any other side effects from watching this video.

And yeah, that’s a Shepard Tone, not to be confused with a Shepherd’s Tone, which is presumably what happens when you attempt to tend to your sheep while also catching up on your anthology of French experimental electronic music.

Here’s some more reading on that:

If you want to really be a snob, CDM style, despite what you may have heard from our ‘haters’, clearly you need to nitpick the difference between a Shepard Tone and a Shepard-Risset glissando. (Hey, where’d everyone go?! Fine. More wine for me.)

But these kinds of risers and sounds are actually easy to produce – even for free – and can be used in a wide variety of contexts to produce extra suspense and a feeling of constant falling or rising.

Here you go!

It works really well in the free software SuperCollider, which you can run on almost any computer and any OS – no massive CEO-style budget required:

And there are lots of other ways to go about this, too – including some tutorials new to me (and you can even sample sound sources):
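One more way, if you just want to hear the principle without installing anything: a rough sketch using the browser’s Web Audio API. This is my own quick illustration, not one of the tutorials above – a handful of octave-spaced sine partials drifting downward, wrapping around when they hit the bottom of the range, and fading in and out at the edges. (Most browsers want a click before they’ll make sound, so run it from a button handler if nothing happens.)

```javascript
// Rough Shepard-Risset glissando sketch with the Web Audio API.
const ctx = new AudioContext();

const NUM_PARTIALS = 6;        // octave-spaced sine components
const F_LOW = 55;              // bottom of the pitch window, in Hz
const OCTAVES = NUM_PARTIALS;  // total span of the window, in octaves
const SWEEP = 0.1;             // downward drift, in octaves per second

const master = ctx.createGain();
master.gain.value = 0.2;
master.connect(ctx.destination);

// One oscillator + gain per partial; `pos` is its position in octaves above F_LOW.
const partials = [];
for (let i = 0; i < NUM_PARTIALS; i++) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.type = 'sine';
  osc.connect(gain).connect(master);
  osc.start();
  partials.push({ osc, gain, pos: i });
}

function tick() {
  const dt = 1 / 60; // close enough for a sketch; use real elapsed time if you care
  for (const p of partials) {
    p.pos = (p.pos - SWEEP * dt + OCTAVES) % OCTAVES;  // drift down and wrap
    const freq = F_LOW * Math.pow(2, p.pos);
    // Raised-cosine loudness window over log-frequency: silent at the edges,
    // loudest in the middle, which is what hides the wrap-around.
    const level = 0.5 - 0.5 * Math.cos((2 * Math.PI * p.pos) / OCTAVES);
    p.osc.frequency.setValueAtTime(freq, ctx.currentTime);
    p.gain.gain.setValueAtTime(level / NUM_PARTIALS, ctx.currentTime);
  }
  requestAnimationFrame(tick);
}
tick();
```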

And yeah, it’s in sound design, too:

Risset rhythms are even crazier. I’m still waiting for someone to invent a new music genre based on this. (Chicago, I’d ask you, though by now you really have given us enough.)

If you do create a new Risset rhythm-based musical style, let us know about it. My shoes are laced up and ready to dance to it.


Amazon’s AWS DeepComposer is peak not-knowing-what-AI-is-for

AI can be cool. AI can be strange. AI can be promising, or frightening. Here’s AI at its most uncool and not frightening at all – bundled with a crappy MIDI keyboard, for … some … reason.

Okay, so TL;DR – Amazon published some kinda me-too algorithms for music generation, the sort of thing we’ve seen for years from Google, Sony, Microsoft, and hundreds of data scientists, bundled them with a crap MIDI keyboard for $99, and it’s the future! AI! I mean, it definitely doesn’t just sound like a 90s General MIDI keyboard playing some bad MIDI patterns. “The machine has the power of literally all of music composition ever. Now anyone can make musiER:Jfds;kjsfj l; jks

Oops, sorry, I might have briefly started banging my head against my computer keyboard. I’m back.

This is worth talking about because machine learning does have potential – and this neither represents that potential nor accurately represents what machine learning even is.

Game changer.

If at this point you’re unsure what AI is, how you should feel about it, or even if you should care – don’t worry, you’re seriously not alone. “AI” is now largely shorthand for “machine learning.” And that, in turn, now most often refers to a very specific set of techniques currently in vogue that analyze data and generate predictions by deriving patterns from that data, not by applying rules. That’s a big deal in music, because traditionally both computer models and even paper models of theory have relied on rules more than on probability. You can think of AI in music as something like a dice roll – a very, very well-informed, data-driven, weighted dice roll – and less like a theory manual or a robotic composer or whatever people have in mind.
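To make that “weighted dice roll” idea concrete, here’s a deliberately tiny toy sketch in JavaScript – nothing to do with any real product, just counting which note tends to follow which in a short melody, then rolling dice weighted by those counts to pick the next note:

```javascript
// A toy version of the "very well-informed, weighted dice roll" idea.
// Hypothetical example data; no real model or library involved.
const melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]; // MIDI note numbers

// "Training": count how often each note is followed by each other note.
const counts = {};
for (let i = 0; i < melody.length - 1; i++) {
  const from = melody[i], to = melody[i + 1];
  counts[from] = counts[from] || {};
  counts[from][to] = (counts[from][to] || 0) + 1;
}

// "Generation": from the current note, roll dice weighted by those counts.
function nextNote(current) {
  const options = counts[current];
  if (!options) return current;                  // nothing learned: stay put
  const total = Object.values(options).reduce((a, b) => a + b, 0);
  let roll = Math.random() * total;
  for (const [note, count] of Object.entries(options)) {
    roll -= count;
    if (roll <= 0) return Number(note);
  }
  return current;
}

// Generate eight notes of "new" melody from the learned probabilities.
let note = 60;
const generated = [note];
for (let i = 0; i < 8; i++) generated.push(note = nextNote(note));
console.log(generated);
```

Real models are vastly larger and far cleverer about context, but the basic move – predicting from learned probabilities rather than from hand-written rules – is the same.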

Wait a minute – that doesn’t sound like AI at all. Ah, yes. About that.

So, what I’ve just described counts as AI to data scientists, even though it isn’t really related very much to AI in science fiction and popular understanding. The problem is, clarifying that distinction is hard, whereas exploiting that misunderstanding is lucrative. Misrepresenting it makes the tech sound more advanced than arguably it really is, which could be useful if you’re in the business of selling that tech. Ruh-roh.

With that in mind, what Amazon just did is either very dangerous or – weirdly, actually – very useful, because it’s such total, obvious bulls*** that it hopefully makes clear even to laypeople that what they claim they’re doing isn’t what they’re demonstrating. So we get post-curtain-reveal Oz – here, in the form of Amazon AI chief Dr. Matt Wood, doing a bad Steve Jobs clone (down to the black-and-denim, of course).

Dr. Matt Wood really does have a doctorate in bioinformatics, says LinkedIn. He knows his stuff. That makes this even more maddening.

Consider his original research, which was predicting protein structures. You know what most of us wouldn’t do? Presumably, we wouldn’t stand in front of a packed auditorium and pretend to understand protein structures if we weren’t microbiologists. And we certainly wouldn’t go on to claim that predicting protein structures means we can create life – and, also, that we’re God now.

But that is essentially what this is, with music – and it is exceedingly weird, from the moment Amazon’s VP of AI is introduced by… I want to say a voiceover by a cowboy?

Summary of his talk: AI can navigate moon rovers and fix teeth. So therefore, it should replace composers – right? (I can do long division in my head. Ergo, next I will try time travel.) We need a product, so give us a hundred bucks, and we’ll give you a developer kit that has a MIDI keyboard and that’s the future of music. We’ll also claim this is an industry first, because we bundled a MIDI keyboard.

At 7 minutes, 57 seconds, Dr. Wood murders Beethoven’s ghost, followed at 8:30 by a sort-of-bad machine learning example augmented with GarageBand visuals and some floating particles that I guess are the neural net “thinking”?

Then you get Jonathan Coulton (why, JoCo, why?) attempting to sing over something that sounds like a stuck-MIDI-note Band-in-a-Box that just crashed.

Even by AI tech demo standards, it’s this:

Deeper question: I’m not totally certain what we in music have done to earn the expectation from the rest of society that not only is what we do not worth paying for, but that everyone should be able to do it without expending any effort. I don’t have this expectation of neuroscience or basketball, for instance.

But this isn’t even about that. This doesn’t even hold up to student AI examples from three years ago.

It’s “the world’s first” because they give you a MIDI keyboard. But great news – we can beat them. The AWS DeepComposer isn’t shipping yet, so you can actually be the world’s first right now – just grab a USB cable, a MIDI keyboard, connect to one of a half-dozen tools that do the same thing, and you’re done. I’ll give you an extra five minutes to map the MIDI keys.

Or just skip the AI, plug in a MIDI keyboard, and let your cat walk over it.

Translating the specs then:

  1. A s***ty MIDI keyboard with some buttons on it, and no “AI.”
  2. Some machine learning software, with pre-trained generative models for “rock, pop, jazz, and classical.” (aka, and saying this as a white person with a musicology background, “white, white, black-but-white people version, really old white.”)
  3. “Share your creations by publishing your tracks to SoundCloud in just a few clicks from the AWS DeepComposer console.”*

Technically *1 has been available in some form since the mid-80s and *3 is true of any music software connected to the Internet, but … *2, AI! (Please, please say I’m wrong and there’s custom silicon in there for training. Something. Anything to make this make any sense at all.)

I would love to hear I’m wrong and there’s some specialized machine learning silicon embedded in the keyboard but… uh, guessing that’s a no.

Watch the trainwreck now, soon to join the annals of “terrible ideas in tech” history with Microsoft Bob and Google Glass:

https://aws.amazon.com/deepcomposer/

By the way, don’t forget that AWS is being actively targeted right now by the music community with a boycott. Maybe they were hoping for a Springtime for Hitler-style turn-around, like if this is bad enough, we’d love them again? Dunno.

Anyway, if you do want to try this “AI” stuff out – and it can really be interesting – here is a far more comprehensive and musically interesting set of tools from rival Google:

https://magenta.tensorflow.org

Now back to our regularly scheduled programming of anything but this.

AI: I am the button.


Op-ed: KORG has transformed synthesizers by letting them run plug-ins, says Sinevibes

No new ideas in synthesizers? Not so, says independent developer Artemiy Pavlov. He was excited enough about KORG’s direction that he’s written about why he thinks it changes music tech for the better.

The Ukraine-based coder who releases under his Sinevibes brand is someone we’ve followed on CDM for some years, as a source of very elegant Mac-only plug-ins. Making those tools for one company’s piece of hardware (one that isn’t Apple) is a new direction. But that’s what he’s done with KORG’s ‘logue plug-in architecture, which now runs on the minilogue xd and prologue keyboards, as well as the $100 NTS-1 kit. As long as you’ve got the hardware, you can run oscillators, filters, and effects from third-party developers like Sinevibes – or even grab the SDK and make your own, if you’re a coder.

Now, of course Artemiy is biased – but that’s kind of the point. The source of that bias is that his one-man dev operation has clearly had a really great experience developing for KORG’s synths, from coding and testing to turning it into a business.

This is not a KORG advertisement, even if it sounds like one. I actually didn’t even tell them it’s coming, apart from mentioning something was inbound to KORG’s Etienne Noreau-Hebert, chief analog engineer. But because it impacts both interested musicians and developers, I thought it was worth getting Artemiy’s perspective directly.

So here’s Artemiy on that – and I think this does offer some hope to those wanting new directions for electronic musical instruments. This is labeled “Op Ed” for a reason – I don’t necessarily agree with all of it – but I think it’s a unique perspective, not only in regards to KORG hardware but the potential for the industry and musicians of this sort of embedded development, generally. -Ed.

Artemiy diagrams the idea here.

In early 2018, for its 50th anniversary, Korg introduced the prologue. It wasn’t just a great-sounding synthesizer with shiny, polished-metal looks. It introduced a whole new technical paradigm that has brought a tectonic shift to the whole music hardware and software industry.

Korg has since taken the concept of “plug-ins in mainstream hardware synths” further to much more compact and affordable minilogue xd and Nu:Tekt NTS-1, proving that it’s more serious about this than even I myself thought.

If you thought the platform just lets you load custom wavetables and store effect presets, you have no idea how much you’ve been missing! This is also for those who have been waiting for something that really looks to the future – and for anyone wanting to scale down their rig while scaling up their sonic palette. As a control freak, I could start imagining new features from the moment I first touched the synth – even though it’s someone else’s product.

Here are five ways Korg’s plugin-capable synths completely change the game for all of us, described both as before and after:

Artemiy explains this as a meme.

1. Personalization

Before. When you buy a synthesizer, all the features inside are what the manufacturer decided it should have. Each customer gets the exact same thing – same features, same sound.

After. With Korg’s hardware plugin architecture, the “custom” is finally back in “customer” – you can configure the oscillator and the effect engines to your liking, and make your instrument unique. Fill it with the exact plugins you want, tailored to your own style. You have 48 plugin slots available, and chances are nobody else on the planet configures them the way you do.

2. Versatility

Before. While we do have digital and analog instruments with very capable synthesis and processing engines, to really get into more unusual or experimental sonic territory, you almost certainly need extra outboard gear – often a lot of it, which means more to transport and wire up.

After. The plug-ins now allow you to expand the stock generation and processing capabilities way beyond the “traditional” stuff, and have a whole powerhouse inside a single instrument. Just by switching from preset to preset, you can have the synthesizer dramatically shift its character, much as if you were switching from one hardware setup to another. Much less gear to carry, fewer things to go wrong, literally zero setup time.

Here’s what I mean, just with currently-available plug-ins. How about a sound-on-sound looper, or a self-randomizing audio repeater, right inside your synth?

And how about running unorthodox digital synthesis methods, in parallel with a purely analog subtractive one?

3. Independence

Before. With almost all gear, you are completely at the mercy of the manufacturer regarding what’s available for your instrument (aside from sound packs which still obviously can only use the stock features).

After. Not only do you decide which engines your synth has, cherry-picking sound generation and processing plugins from independent developers, but you can also grab the SDK and build whatever you want yourself. [Ed. See below for some notes on just how easy that is.]

4. Longevity

Before. While some manufacturers might update their instruments with some major features from time to time, to be brutally honest, most won’t. Typically, just a couple years after initial release, you can consider the feature set in your synthesizer frozen… forever.

After. At any time in the future, you can erase some or even all the plug-ins on your synth and install different ones. So it can stay fresh and interesting for years or even decades, without you having to buy new hardware to get a new sound. The scale of your capabilities will actually only keep increasing as the selection of third-party plugins continues growing.

For example, say you have two different live projects. A single instrument can now represent two entirely different sets of sounds, using plugins and presets. In just a couple of minutes you can fully clean your Korg and reload it with a whole new “sonic personality” – no installers to run, no activation hassle, just transfer and go.

5. Range

Before. High-end features almost always command high-end prices – or demand a high level of coding experience to work with open-source firmware (in the rare cases when it’s actually available).

After. The ticket price for entry into this world of user-configurable synthesizers is Korg’s tiny and super-affordable monophonic Nu:Tekt NTS-1 (around $100), and it still has 48 plug-in slots just like its bigger brothers. Speaking of the bigger brothers, at the other end of the range we have the flagship 8- or 16-voice polyphonic prologue ($1500-2000), and 4-voice minilogue xd in both keyboard and desktop versions ($600-650). There’s now a plug-in-capable synth for everyone.

Which KORG do you want?

So, which one to choose? Each of the models has its unique advantages and unique ways it can integrate into your existing setup – or create a totally new one. [Ed. I’ve confirmed previously with KORG that all three of these models are equally capable of running this plug-in architecture. There’s also a fourth option, the developer board, which Artemiy doesn’t mention, though at this point you’re more likely to get the NTS-1.]

NTS-1 is probably the most quirky of the lot, but is also surprisingly versatile for its tiny size. First, it can be easily powered off any portable battery, and second, it has a stereo input that lets you run any external audio through up to three different plugin effects, silently making it “the stompbox of your dreams.” 

The mid-range minilogue xd doesn’t have an external input, but does have a very compact and portable body, and a note sequencer. The sequencer can be used together with the arpeggiator for extra-long evolving melodies, but also has 4 parameter automation tracks – with all this data stored per each preset.

The key feature of the range-topping prologue, aside from its incredibly pleasant-to-play keybed and sleek all-metal controls, is the fact that each of its presets can be constructed out of two completely separate, split, or layered patches – meaning that you can load two oscillator plugins at the same time.

Developers, developers, developers

How easy is it to develop your own Korg plugin?

First of all, I can tell you that running my own algorithms on a hardware synth is something I had dreamed of for years. Apart from a very unlikely collaboration with a manufacturer, or digging deep into someone’s rare open-source firmware, I figured the chances of actually doing that were zero.

Luckily, Korg has made it so much easier for me and you that you would almost be guilty of not giving coding your own little plug-in a go. Allow me to give you a first-person example of what it took to get started.

Korg’s logue SDK is a collection of source code files and a toolchain that runs via the command line in the terminal app. For each type of plug-in, Korg provides a sample – there’s a simple sine oscillator, there’s a delay, a filter, etc. – and the best way for you to start is to modify one of them slightly.

You don’t need to do much. For example, make the sine oscillator produce a mix of two sines, one running an octave above the other. You’d simply multiply the second sin() function’s argument by 2 and add it to the first one — that’s it. That’s exactly what I did, and I was hooked instantly.
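[Ed.: For the curious, here’s the arithmetic Artemiy describes, sketched in plain JavaScript rather than the SDK’s C – purely an illustration of the math, not actual logue SDK code.]

```javascript
// The change described above, as math: mix a sine with a second sine whose
// argument is multiplied by 2 (the same tone an octave up). The real logue SDK
// oscillator is written in C and fills an audio buffer per block; this just
// computes one cycle of the resulting waveform so you can see the idea.
const TABLE_SIZE = 256;
const wave = new Float32Array(TABLE_SIZE);

for (let i = 0; i < TABLE_SIZE; i++) {
  const phase = (i / TABLE_SIZE) * 2 * Math.PI;
  // Original sample oscillator: Math.sin(phase) alone.
  // Modified version: add Math.sin(2 * phase), then scale to keep headroom.
  wave[i] = 0.5 * (Math.sin(phase) + Math.sin(2 * phase));
}

console.log(wave.slice(0, 8)); // first few samples of the mixed cycle
```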

Now you build the plugin using the “make” command, and install the file onto whichever of the synthesizers in the family you have. You do that via its “sound librarian” companion app into which you simply drag and drop your plugin while the synth is connected via USB. 

https://github.com/korginc/logue-sdk

Now go

All this said, I hope this has changed how you look at Korg’s plugin-capable synthesizer architecture. Because, and I am really confident when I say this, Korg did go and change the whole industry with it.

https://www.sinevibes.com

https://www.sinevibes.com/korg/

minilogue + logue SDK

prologue + logue SDK


KORG NTS-1 is here: A pocket ‘logue voice as $99 DIY kit

KORG is kicking off a new product line – the DIY-focused Nu:Tekt – with a $99 screw-together instrument. And it has the same programmable guts as you find in the prologue and minilogue xd, complete with SDK.

The Nu:Tekt NTS-1 is funny to describe, in that it represents different things to different people. For the very few of you who are actually audio programmers, it’s something special … but it might also be of interest if you just want an inexpensive sound toy or particularly like operating a screwdriver. Let’s break it down.

If you just really love using screwdrivers: Yes, this is a kit. There’s no soldering involved – apologies if you need the smell of hot solder flux more than the calming grip of a Phillips head.

But if you do enjoy a bit of assembly, you do get the NTS-1 in pieces you screw together. If you love screwdrivers but also have … misplaced all of them (I feel you), there’s even one in the box.

If you want an amazing pocket instrument for $99: Holy crap. The NTS-1 is very possibly the most synthesizer per dollar I’ve seen. KORG actually don’t even really describe how powerful this is in the press release, sheepishly saying it’s “inspired by the MULTI engine” on the prologue and minilogue xd.

So, you have something that’s small and has some onboard jamming features, like a KORG volca, but with the audio depth of their flagship instruments. And it’s even cheaper than a volca – even if you’re the one doing some of the final assembly, and the case and fit and finish are a bit more ‘rustic.’

It actually is the guts of the ‘logue voice. See the developer section below; the NTS-1 retains compatibility with the prologue and minilogue xd.

So that means you get the single oscillator from the ‘logues, plus a multimode filter, a single envelope generator, three (!) LFOs, and three (!) effects processors – reverb, delay, modulation.

You can play that, volca style, using an onboard arpeggiator. Or you can connect MIDI input. Or there’s an audio input, too, making this a very handy pocket-sized effects unit for other gear.

For those of us who love collecting little sound boxes, like the Pocket Operators, volcas, Twisted Electrons, and our own MeeBlip, I can see the NTS-1 doing double-duty as an effects box and extra sound source. Life is getting pretty darned good for us – you can literally put together a full studio of gear for the price of one high-end Eurorack module, you know.

That’s already worth a hundred bucks, but the really interesting bit is that the NTS-1 is supported by the ‘logue SDK. This means you’ll be able to load custom effects and oscillators onto it, almost app style.

There are 16 custom user slots for loading your own oscillators, plus 16 slots for custom modulation effects, 8 reverb effects slots, and 8 delay effects slots.

That’ll be fun even if you aren’t a developer. As a non-coder, you probably don’t want to mess around with GitHub and the SDK, but KORG is planning a librarian and custom content page you’ll be able to use on the Web, which will eventually be here:

https://www.korg.com/products/synthesizers/nts_1/librarian_contents.php

And if you are a developer, well –

If you’re a developer: This just solved two problems for you in getting into KORG’s SDK for the ‘logues. First, it makes your price of entry way cheaper. (And even developers I know who own the keyboards are considering buying this, too, because it looks like fun.)

Second, if the NTS-1 takes off, the installed base of people who can make use of your creations also expands.

The SDK here supports both custom oscillators and custom effects, as with the full-fledged keyboards. Check out the dedicated SDK page:

https://www.korg.com/us/products/dj/nts_1/sdk.php

Full details:

  • Ribbon keyboard
  • 1 digital oscillator, 1 multimode filter, 1 EG, 3 LFOs
  • Multiple effects: Mod (chorus, ensemble, phaser, flanger), delay, reverb
  • Minijack audio in and out
  • Minijack MIDI input
  • USB port (definitely necessary for loading custom programs, but I think also supports USB MIDI – I’ll check)
  • Runs on USB bus power (< 500 mA)
  • 129 mm x 78 mm x 39 mm / 5.08” x 3.07” x 1.54”
  • 124 g / 4.37 oz
  • USB cable, manual, and screwdriver in the box

The NTS-1 ships in November. I’ll definitely try to get one. US$99.

https://www.korg.com/us/products/dj/nts_1/index.php


Grainstation-C is a free granular tool with ambisonics, and an album to match

It started as an artist tool, but it could become yours, as well. Grainstation-C is a free and open source sound creation workstation that’s playable live and supports ambisonic spatial sound. And the music its creator makes is ethereal and wonderful.

Micah Frank, noted sound designer and toolmaker as well as composer/musician, produced Grainstation-C for his own work but has expanded it to an open source offering for everybody. I’ve been waiting for this one for a while, and I think it could appeal both to people looking for a unique tool as well as those wanting to learn a bit more about granular sound in Csound.

https://github.com/chronopolis5k/Grainstation-C [link + full installation instructions, etc.]

http://csound.com/download.html [requisite Csound install]

The engine: 4 streams from disk, 3 streams from live input. Live audio looping, multiple grain controls, six independent pitch delay lines, six switchable low- and high-pass filters. Snapshot saving.

Powered by: Csound, the modern free and open source sound creation tool that evolved from the grandparent of all digital audio tools.

Live control: It’s pre-mapped to the eminently useful Novation LaunchControl XL MK2, but you could easily remap it to other MIDI controllers if you prefer.

Ambisonics: This optional spatial audio processing lets you use a standard format to adapt to immersive sound environments – in three-dee! Or not, as you like.

It’s deep stuff, with different granular modes and controls (time stretching, frame animation, pitch shifting). The inspiration, says Micah, was the now-discontinued System Concrète, a complete MakeNoise modular rig that combined grains with modulation, filtering, and delays. But – as is easily possible with software, unconstrained by knobs and space and money – he kept going from there.

Equally notable is the ethereal, beautiful album Quetico that also debuts this week, on Micah’s own Puremagnetik record label. Once, the line between toolmakers and musicians, engineers and composers was thought sacred – even with elaborate explanations about why the two couldn’t be compared. But just as electronic artists have demolished other sacred walls (club and concert, for instance), Micah is part of a generation doing away with those old prejudices.

And the results are richly sensual – warm waves of sound processed from Yellowstone geysers and Big Sur nights, Micah says. It’s classic ambient music, and the tool simply melts away into the essential craft of delivering a palette of sound. At the same time, being transparent about the tools is the ultimate confidence in one’s own musical invention. Micah’s Puremagnetik was a business built on making sounds for others, and yet both the album and free tool suggest the limitless possibility of that act of sharing.

In any event, this is acousmatic creation of the finest quality, with or without the GitHub link. And Micah is getting some deserved recognition, too, as a 2019 New York Foundation for the Arts Fellow in Music and Sound.

With so much of the sound coming out of my country of origin, the United States, being ugly right now, it’s wonderful to hear beautiful algorithmic sounds derived from the nation’s national parks instead.

https://micahfrank.bandcamp.com/album/quetico

Image credit: “Yellowstone 8/07” by stevetulk is licensed under CC BY 2.0


KORG’s nutekt NTS-1 is a fun, little kit – and open to ‘logue developers

KORG has already shown that opening up oscillators and effects to developers can expand their minilogue and prologue keyboards. But now they’re doing the same for the nutekt NTS-1 – a cute little volca-ish kit for synths and effects. Build it, make wild sounds, and … run future stuff on it, too.

Okay, first – even before you get to any of that, the NTS-1 is stupidly cool. It’s a little DIY kit you can snap together without any soldering. And it’s got a fun analog/digital architecture with oscillators, filter, envelope, arpeggiator, and effects.

Basically, if you imagine having a palm-sized, battery-powered synthesis studio, this is that.

Japan has already had access to the Nutekt brand from KORG, a DIY kit line. (Yeah, the rest of the world gets to be jealous of Japan again.) This is the first – and hopefully not the last – time KORG has opened up that brand name to the international scene.

And the NTS-1 is one we’re all going to want to get our hands on, I’ll bet. It’s full of features:

– 4 fixed oscillators (saw, triangle and square, loosely modeled around their analog counterpart in minilogue/prologue, and VPM, a simplified version of the multi-engine VPM oscillator)
– Multimode analog modeled filter with 2/4 pole modes (LP, BP, HP)
– Analog modeled amp. EG with ADSR (fixed DS), AHR, AR and looping AR
– modulation, delay and reverb effects on par with minilogue xd/prologue (subset of)
– arpeggiator with various modes: up, down, up-down, down-up, converge, diverge, conv-div, div-conv, random, stochastic (volca modular style). Chord selection: octaves, major triad, suspended triad, augmented triad, minor triad, diminished triad (since the ribbon sensor only allows one note at a time). Pattern length: 1-24
– Also: pitch/shape LFO, cutoff sweeps, tremolo
– MIDI IN via 2.5mm adapter, USB-MIDI, SYNC in/out
– Audio input with multiple routing options and trim
– Internal speaker and headphone out

That would be fun enough, and we could stop here. But the NTS-1 is also built on the same developer platform as the KORG minilogue and prologue keyboards. That SDK opens up the power for developers to make their own oscillators, effects, and other ideas for KORG hardware. And it’s a big deal that the cute little NTS-1 is now part of that picture, not just the (very nice) larger keyboards. I’d see it this way:

NTS-1 buyers can get access to the same custom effects and synths as if they bought the minilogue or prologue.

minilogue and prologue owners get another toy they can use – all three of them supporting new stuff.

Developers can use this inexpensive kit to start developing, and don’t have to buy a prologue or minilogue. (Hey, we’ve got to earn some cash first so we can go buy the other keyboard! Oh yeah, I guess I also have rent and food and things to think about, too.)

And maybe most of all –

Developers have an even bigger market for the stuff they create.

This is still a prototype, so we’ll have to wait – and there are no definite details on pricing and availability yet.

Waiting.

Yep, still waiting.

Wow, I really want this thing, actually. Hope this wait isn’t long.

I’m in touch with KORG and the analog team’s extraordinary Etienne about the project, so stay tuned. For an understanding of the dev board itself (back when it was much less fun – just a board and no case or fun features):

KORG are about to unveil their DIY Prologue boards for synth hacking

Videos:

Sounds and stuff –

Interviews and demos –

And if you wondered what the Japanese kits are like – here you go:

Oh, and I’ll also say – the dev platform is working. Sinevibes’ Artemiy Pavlov was on hand to show off the amazing stuff he’s doing with oscillators for the KORG ‘logues. They sound the business, covering a rich range of wavetable and modeling goodness – and quickly made me want a ‘logue, which of course is the whole point. But he seems happy with this as a business, which demonstrates that we really are entering a new era of collaboration and creativity in hardware instruments. And that’s great. Artemiy, since I had almost zero time this month, I’d better just come hang out in Ukraine for extended nerd time, minus distractions.

Artemiy is happily making sounds as colorful as that jacket. Check sinevibes.com.


Now ‘AI’ takes on writing death metal, country music hits, more

Machine learning is synthesizing death metal. It might make your death metal radio DJ nervous – but it could also mean music software works with timbre and time in new ways. That news – plus some comical abuse of neural networks for writing lyrics in genres like country – next.

Okay, first, whether this makes you urgently want to hear machine learning death metal or it drives you into a rage, either way you’ll want the death metal stream. And yes, it’s a totally live stream – you know, generative style. Tune in, bot out:

Now, it’s important to say that the whole point of this is you need data sets to train on. That is, machines aren’t composing music so much as creatively regurgitating existing samples based on fairly clever predictive mathematical models. In the case of the death metal example, this is SampleRNN – a recurrent neural network that uses sample material, repurposed from its originally intended application of working with speech. (Check the original project, though it’s been forked for the results here.)

This is a big, big point, actually – if this sounds a lot like existing music, it’s partly because it is actually sampling that content. The particular death metal example is nice in that the creators have published an academic article. But they’re open about saying they actually intend “overfitting” – that is, little bits of samples are actually playing back. Machines aren’t learning to generate this content from scratch; they’re actually piecing together those samples in interesting ways.

That’s relevant on two levels. One, because once you understand that’s what’s happening, you’ll recognize that machines aren’t magically replacing humans. (This works well for death metal partly because, to non-connoisseurs of the genre, the way angry guitar riffs and undecipherable shouting are plugged together already sounds quite random.)

But two, the fact that sample content is being re-stitched in time like this means this could suggest a very different kind of future sampler. Instead of playing the same 3-second audio on repeat or loop, for instance, you might pour hours or days of singing bowls into your sampler and then adjust dials that recreated those sounds in more organic ways. It might make for new instruments and production software.

Here’s what the creators say:

Thus, we want the output to overfit short timescale patterns (timbres, instruments, singers, percussion) and underfit long timescale patterns (rhythms, riffs, sections, transitions, compositions) so that it sounds like a recording of the original musicians playing new musical compositions in their style.

Sure enough, you can go check their code:

https://github.com/ZVK/sampleRNNICLR2017

Or read the full article:

Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands

The reason I’m belaboring this is simple. Big corporations like Spotify might use this sort of research to develop, well, crappy mediocre channels of background music that make vaguely coherent workout soundtracks or faux Brian Eno or something that sounded like Erik Satie got caught in an opium den and re-composed his piano repertoire in a half daze. And that would, well, sort of suck.

Alternatively, though, you could make something like a sampler or DAW more human and less conventionally predictable. You know, instead of applying a sample slice to a pad and then having the same snippet repeat every eighth note. (Guilty as charged, your honor.)

It should also be understood that, perversely, this may all be raising the value of music rather than lowering it. Given the amount of recorded music currently available, and given that it can already often be licensed or played for mere cents, the machine learning re-generation of these same genres actually requires more machine computation and more human intervention – because of the amount of human work required to even select datasets and set parameters and choose results.

DADABOTS, for their part, have made an entire channel of this stuff. The funny thing is, even when they’re training on The Beatles, what you get sounds like … well, some of the sort of experimental sound you might expect on your low-power college radio station. You know, in a good way – weird, digital drones, of exactly the sort we enjoy. I think there’s a layperson impression that these processes will magically improve. That may misunderstand the nature of the mathematics involved – on the contrary, it may be that these sorts of predictive models always produce these sorts of aesthetic results. (The same team use Markov Chains to generate track names for their Bandcamp label. Markov Chains work as well as they did a century ago; they didn’t just start working better.)

I enjoy listening to The Beatles as though an alien civilization has had to digitally reconstruct their oeuvre from some fallout-shrouded, nuclear-singed remains of the number-one hits box set post apocalypse. (“Help! I need somebody! Help! The human race is dead!” You know, like that.)

As it moves to black metal and death metal, their Bandcamp label progresses in surreal coherence:

This album gets especially interesting, as you get weird rhythmic patterns in the samples. And there’s nothing saying this couldn’t in turn inspire new human efforts. (I once met Stewart Copeland, who talked about how surreal it was hearing human drummers learn to play the rhythms, unplugged, that he could only achieve with The Police using delay pedals.)

I’m really digging this one:

So, digital sample RNN processes mostly generate angry and angular experimental sounds – in a good way. That’s certainly true now, and could be true in the future.

What’s up in other genres?

SONGULARITY is making a pop album. They’re focusing on lyrics (and a very funny faux generated Coachella poster). In this case, though, the work is constrained to text – far easier to produce convincingly than sound. Even a Markov Chain can give you interesting or amusing results; with machine learning applied character-by-character to text, what you get is a hilarious sort of futuristic Mad Libs. (It’s also clear humans are cherry-picking the best results, so these are really humans working with the algorithms much as you might use chance operations in music or poetry.)
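To make the Markov chain point concrete, here’s roughly the simplest possible version – a word-level chain in JavaScript, trained on a couple of made-up lines rather than any of these projects’ actual data:

```javascript
// Toy word-level Markov chain: record which word follows which in some text,
// then chain weighted random picks together. The input here is made up.
const corpus = `my heart is a broken truck on a long dirt road
my truck is a broken heart and the road is long`;

const words = corpus.split(/\s+/);
const follows = {};
for (let i = 0; i < words.length - 1; i++) {
  (follows[words[i]] = follows[words[i]] || []).push(words[i + 1]);
}

// Walk the chain: start from a word, repeatedly pick one of its recorded successors.
function generate(start, length) {
  let word = start;
  const out = [word];
  for (let i = 0; i < length; i++) {
    const options = follows[word];
    if (!options) break;              // dead end: no successor ever observed
    word = options[Math.floor(Math.random() * options.length)];
    out.push(word);
  }
  return out.join(' ');
}

console.log(generate('my', 10)); // one possible output: "my heart and the road is a broken truck on a"
```

Everything past that – keeping the funny outputs, discarding the duds – is the human part.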

Whether or not this says anything about the future of machines, though, the dadaist results are actually funny parody.

And that gives us results like You Can’t Take My Door:

Barbed whiskey good and whiskey straight.

These projects work because lyrics are already slightly surreal and nonsensical. Machines chart directly into the uncanny valley instead of away from it, creating the element of surprise and exaggerated un-realness that is fundamental to why we laugh at a lot of humor in the first place.

This also produced this Morrissey “Bored With This Desire To Get Ripped” – thanks to the ingenious idea of training the dataset not just with Morrissey lyrics, but also Amazon customer reviews of the P90X home workout DVD system. (Like I said – human genius wins, every time.)

Or there’s Dylan mixed with negative Yelp reviews from Manhattan:

And maybe in this limited sense, the machines are telling us something about how we learn. Part of the poetic flow is about drawing on all our wetware neural connections between everything we’ve heard before – as in the half-awake state of creative vibrations. That is, we follow our own predictive logic without doing the usual censoring that keeps our language rational. Thinking this way, it’s not that we would use machine learning to replace the lyricist. Rather, just as with chance operations in the past, we can use this surreal nonsense to free ourselves from the constraints that normal behavior requires.

We shouldn’t underestimate, though, the human intervention in using these lyrics. The neural nets are good at stringing together short bits of words, but the normal act of composition – deciding the larger-scale structure, choosing funnier bits over weaker ones, recognizing patterns – remains human.

Recurrent neural networks probably won’t be playing Coachella any time soon, but if you need a band name, they’re your go-to. More funny text mangling from the Botnik crew.

My guess is, once the hype dies down, these particular approaches will wind up joining the pantheon of drunken walks and Markov Chains and fractals and other pseudo-random or generative algorithmic techniques. I sincerely hope that we don’t wait for that to happen, but use the hype to seize the opportunity to better educate ourselves about the math underneath (or collaborate with mathematicians), and see these more hardware-intensive processes in the context of some of these older ideas.

If you want to know why there’s so much hype and popular interest, though, the human brain may itself hold the answer. We are all of us hard-wired to delight in patterns, which means arguably there’s nothing more human than being endlessly entertained by what these algorithms produce.

But you know, I’m a marathon runner in my sorry way.


A free, shared visual playground in the browser: Olivia Jack talks Hydra

Reimagine pixels and color, melt your screen live into glitches and textures, and do it all for free on the Web – as you play with others. We talk to Olivia Jack about her invention, live coding visual environment Hydra.

Inspired by analog video synths and vintage image processors, Hydra is open, free, collaborative, and all runs as code in the browser. It’s the creation of US-born, Colombia-based artist Olivia Jack. Olivia joined our MusicMakers Hacklab at CTM Festival earlier this winter, where she presented her creation and its inspirations, and jumped in as a participant – spreading Hydra along the way.

Olivia’s Hydra performances are explosions of color and texture, where even the code becomes part of the aesthetic. And it’s helped take Olivia’s ideas across borders, both in the Americas and Europe. It’s part of a growing interest in the live coding scene, even as that scene enters its second or third decade (depending on how you count), but Hydra also represents an exploration of what visuals can mean and what it means for them to be shared between participants. Olivia has rooted those concepts in the legacy of cybernetic thought.

Oh, and this isn’t just for nerd gatherings – her work has also lit up one of Bogota’s hotter queer parties. (Not that such things need be thought of as a binary, anyway, but in case you had a particular expectation about that.) And yes, that also means you might catch Olivia at a JavaScript conference; I last saw her back from making Hydra run off solar power in Hawaii.

Following her CTM appearance in Berlin, I wanted to find out more about how Olivia’s tool has evolved and its relation to DIY culture and self-fashioned tools for expression.

Olivia with Alexandra Cardenas in Madrid. Photo: Tatiana Soshenina.

CDM: Can you tell us a little about your background? Did you come from some experience in programming?

Olivia: I have been programming now for ten years. Since 2011, I’ve worked freelance — doing audiovisual installations and data visualization, interactive visuals for dance performances, teaching video games to kids, and teaching programming to art students at a university, and all of these things have involved programming.

Had you worked with any existing VJ tools before you started creating your own?

Very few; almost all of my visual experience has been through creating my own software in Processing, openFrameworks, or JavaScript rather than using software. I have used Resolume in one or two projects. I don’t even really know how to edit video, but I sometimes use [Adobe] After Effects. I had no intention of making software for visuals, but started an investigative process related to streaming on the internet and also trying to learn about analog video synthesis without having access to modular synth hardware.

Alexandra Cárdenas and Olivia Jack @ ICLC 2019:

In your presentation in Berlin, you walked us through some of the origins of this project. Can you share a bit about how this germinated, what some of the precursors to Hydra were and why you made them?

It’s based on an ongoing investigation of:

  • Collaboration in the creation of live visuals
  • Possibilities of peer-to-peer [P2P] technology on the web
  • Feedback loops

Precursors:

A significant moment came as I was doing a residency in Platohedro in Medellin in May of 2017. I was teaching beginning programming, but also wanted to have larger conversations about the internet and talk about some possibilities of peer-to-peer protocols. So I taught programming using p5.js (the JavaScript version of Processing). I developed a library so that the participants of the workshop could share in real-time what they were doing, and the other participants could use what they were doing as part of the visuals they were developing in their own code. I created a class/library in JavaScript called pixel parche to make this sharing possible. “Parche” is a very Colombian word in Spanish for a group of friends; this reflected the community I felt while at Platohedro, the idea of just hanging out and jamming and bouncing ideas off of each other. The tool clogged the network and I tried to cram too much information in a very short amount of time, but I learned a lot.

I was also questioning some of the metaphors we use to understand and interact with the web. “Visiting” a website is exchanging a bunch of bytes with a faraway place and routed through other far away places. Rather than think about a webpage as a “page”, “site”, or “place” that you can “go” to, what if we think about it as a flow of information where you can configure connections in realtime? I like the browser as a place to share creative ideas – anyone can load it without having to go to a gallery or install something.

And I was interested in using the idea of a modular synthesizer as a way to understand the web. Each window can receive video streams from and send video to other windows, and you can configure them in real time using WebRTC (realtime web streaming).

Here’s one of the early tests I did:

https://vimeo.com/218574728

I really liked this philosophical idea you introduced of putting yourself in a feedback loop. What does that mean to you? Did you discover any new reflections of that during our hacklab, for that matter, or in other community environments?

It’s processes of creation, not having a specific idea of where it will end up – trying something, seeing what happens, and then trying something else.

Code tries to define the world using a specific set of rules, but at the end of the day ends up chaotic. Maybe the world is chaotic. It’s important to be self-reflective.

How did you come to developing Hydra itself? I love that it has this analog synth model – and these multiple frame buffers. What was some of the inspiration?

I had no intention of creating a “tool”… I gave a workshop at the International Conference on Live Coding in December 2017 about collaborative visuals on the web, and made an editor to make the workshop easier. Then afterwards people kept using it.

I didn’t think too much about the name but [had in mind] something about multiplicity. Hydra organisms have no central nervous system; their nervous system is distributed. There’s no hierarchy of one thing controlling everything else, but rather interconnections between pieces.

Ed.: Okay, Olivia asked me to look this up and – wow, check out nerve nets. There’s nothing like a head, let alone a central brain. Instead, the aquatic creatures in the genus Hydra have senses and neurons essentially as one interconnected network, with cells that detect light and touch forming a distributed sensory awareness.

Most graphics abstractions are based on the idea of a 2d canvas or 3d rendering, but the computer graphics card actually knows nothing about this; it’s just concerned with pixel colors. I wanted to make it easy to play with the idea of routing and transforming a signal rather than drawing on a canvas or creating a 3d scene.

This also contrasts with directly programming a shader (one of the other common ways that people make visuals using live coding), where you generally only have access to one frame buffer for rendering things to. In Hydra, you have multiple frame buffers that you can dynamically route and feed into each other.

MusicMakers Hacklab in Berlin. Photo: Malitzin Cortes.

Livecoding is of course what a lot of people focus on in your work. But what’s the significance of code as the interface here? How important is it that it’s functional coding?

It’s inspired by [Alex McLean’s sound/music pattern environment] TidalCycles — the idea of taking a simple concept and working from there. In Tidal, the base element is a pattern in time, and everything is a transformation of that pattern. In Hydra, the base element is a transformation from coordinates to color. All of the other functions either transform coordinates or transform colors. This directly corresponds to how fragment shaders and low-level graphics programming work — the GPU runs a program simultaneously on each pixel, and that receives the coordinates of that pixel and outputs a single color.

I think immutability in functional (and declarative) coding paradigms is helpful in live coding; you don’t have to worry about mentally keeping track of a variable and what its value is or the ways you’ve changed it leading up to this moment. Functional paradigms are really helpful in describing analog synthesis – each module is a function that always does the same thing when it receives the same input. (Parameters are like knobs.) I’m very inspired by the modular idea of defining the pieces to maximize the amount that they can be rearranged with each other. The code describes the composition of those functions with each other. The main logic is functional, but things like setting up external sources from a webcam or live stream are not at all; JavaScript allows mixing these things as needed. I’m not super opinionated about it, just interested in the ways that the code is legible and makes it easy to describe what is happening.
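Ed.: if you haven’t seen Hydra code before, here’s roughly what a tiny patch looks like – written from memory of the documented functions, so treat it as an illustration of the coordinates-in, colors-out idea (and of routing between output buffers) rather than gospel. You can paste something like this straight into the editor linked at the end of this piece.

```javascript
// o0: an oscillator source whose coordinates get warped by a noise source.
osc(10, 0.1, 0.8)          // source: frequency, sync, color offset
  .rotate(0.2)             // transform the coordinate space
  .modulate(noise(3), 0.3) // displace coordinates using another source
  .out(o0);                // render into frame buffer o0

// o1: a feedback loop – o1 reads its own previous frame, slightly zoomed,
// with a little of o0 mixed back in, which is what produces trailing textures.
src(o1)
  .scale(1.01)
  .blend(src(o0), 0.1)
  .out(o1);

render(o1);                // show buffer o1 on screen
```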

What’s the experience you have of the code being onscreen? Are some people actually reading it / learning from it? I mean, in your work it also seems like a texture.

I am interested in it being somewhat understandable even if you don’t know what it is doing or that much about coding.

Code is often a visual element in a live coding performance, but I am not always sure how to integrate it in a way that feels intentional. I like using my screen itself as a video texture within the visuals, because then everything I do — like highlighting, scrolling, moving the mouse, or changing the size of the text — becomes part of the performance. It is really fun! Recently I learned about prepared desktop performances; related to the live-coding mantra of “show your screens,” I like the idea that everything I’m doing is a part of the performance. And that’s also why I directly mirror the screen from my laptop to the projector. You can contrast that with just seeing the output of an AV set and having no idea how it was created or what the performer is doing. I don’t think it’s necessary all the time, but it feels like using the computer as an instrument and exploring different ways that it is an interface.

The algorave thing is now getting a lot of attention, but you’re taking this tool into other contexts. Can you talk about some of the other parties you’ve played in Colombia, or when you turned the live code display off?

Most of my inspiration and references for what I’ve been researching and creating have been outside of live coding — analog video synthesis, net art, graphics programming, peer-to-peer technology.

Having just said I like showing the screen, I think it can sometimes be distracting and isn’t always necessary. I did visuals for Putivuelta, a queer collective and party focused on diasporic Latin club music and wanted to just focus on the visuals. Also I am just getting started with this and I like to experiment each time; I usually develop a new function or try something new every time I do visuals.

Community is such an interesting element of this whole scene. So I know with Hydra so far there haven’t been a lot of outside contributions to the codebase – though this is a typical experience of open source projects. But how has it been significant to your work to both use this as an artist, and teach and spread the tool? And what does it mean to do that in this larger livecoding scene?

I’m interested in how technical details of Hydra foster community — as soon as you log in, you see something that someone has made. It’s easy to share via twitter bot, see and edit the code live of what someone has made, and make your own. It acts as a gallery of shareable things that people have made:

https://twitter.com/hydra_patterns

Although I’ve developed this tool, I’m still learning how to use it myself. Seeing how other people use it has also helped me learn how to use it.

I’m inspired by work that Alex McLean and Alexandra Cardenas and many others in live coding have done on this — just the idea that you’re showing your screen and sharing your code with other people to me opens a conversation about what is going on, that as a community we learn and share knowledge about what we are doing. Also I like online communities such as talk.lurk.org and streaming events where you can participate no matter where you are.

I’m also really amazed at how this is spreading through Latin America. Do you feel like there’s some reason the region has been so fertile with these tools?

It’s definitely influenced me rather than the other way around, getting to know Alexandra [Cardenas’] work, Esteban [Betancur, author of live coding visual environment Cine Vivo], rggtrn, and Mexican live coders.

Madrid performance. Photo: Tatiana Soshenina.

What has the scene been like there for you – especially now living in Bogota, having grown up in California?

I think people are more critical about technology and so that makes the art involving technology more interesting to me. (I grew up in San Francisco.) I’m impressed by the amount of interest in art and technology spaces such as Plataforma Bogota that provide funding and opportunities at the intersection of art, science, and technology.

The press lately has fixated on live coding or algorave but maybe not seen connections to other open source / DIY / shared music technologies. But – maybe now especially after the hacklab – do you see some potential there to make other connections?

To me it is all really related, about creating and hacking your own tools, learning, and sharing knowledge with other people.

Oh, and lastly – want to tell us a little about where Hydra itself is at now, and what comes next?

Right now, it’s improving documentation and making it easier for others to contribute.

Personally, I’m interested in performing more and developing my own performance process.

Thanks, Olivia!

Check out Hydra for yourself, right now:

https://hydra-editor.glitch.me/
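If you want a feel for what that shared code looks like before diving in, here's a minimal, unofficial sketch (mine, not Olivia's) using a handful of Hydra's documented sources and transforms – paste it into the editor and run it:

```javascript
// A minimal Hydra sketch: osc() is a video oscillator source, kaleid() mirrors it,
// modulate() warps it with a noise texture, and out() sends the chain to the screen.
osc(10, 0.1, 0.8)          // stripes: frequency, sync, color offset
  .kaleid(4)               // four-way kaleidoscope
  .rotate(0, 0.1)          // slow, continuous rotation
  .modulate(noise(3), 0.2) // displace the image with a noise source
  .out()                   // render to the default output buffer
```

Because everything is a chainable function, livecoding a set mostly means swapping sources and transforms while the output keeps running.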

Previously:

Inside the livecoding algorave movement, and what it says about music

Magical 3D visuals, patched together with wires in browser: Cables.gl

The post A free, shared visual playground in the browser: Olivia Jack talks Hydra appeared first on CDM Create Digital Music.

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, it lets you generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Electron (a popular cross-platform JavaScript tool), though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, even applying GPU acceleration without having to handle a bunch of complex, platform-specific libraries.
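To make that concrete, here's a rough sketch of the same open source models running in plain JavaScript via the @magenta/music library – this is the published magenta.js API, not the Magenta Studio plug-in source, and the checkpoint URL is one of the hosted ones:

```javascript
// Generating melodies with magenta.js; TensorFlow.js handles GPU
// acceleration (WebGL) under the hood when you run this in a browser.
import * as mm from '@magenta/music';

const CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small';

async function generateMelodies() {
  const vae = new mm.MusicVAE(CHECKPOINT); // a two-bar melody model
  await vae.initialize();                  // downloads weights, warms up the model

  // Sample four new two-bar melodies; the second argument is "temperature",
  // the same control Magenta Studio exposes as a slider.
  const melodies = await vae.sample(4, 1.1);

  // Audition the first result.
  const player = new mm.Player();
  player.start(melodies[0]);
}

generateMelodies();
```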

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really an engine for very quickly processing lots of tensors – multi-dimensional arrays of numbers that can be combined into, for example, artificial neural networks.
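In code, a tensor is nothing more exotic than an n-dimensional array you can do fast math on. A throwaway TensorFlow.js example, purely for illustration (this isn't Magenta code):

```javascript
import * as tf from '@tensorflow/tfjs';

// Two "chords" written as rows of MIDI pitch numbers = a 2x3 tensor.
const notes = tf.tensor2d([[60, 64, 67], [62, 65, 69]]);

// Element-wise math runs on the GPU where one is available.
const upAnOctave = notes.add(tf.scalar(12));
upAnOctave.print(); // [[72, 76, 79], [74, 77, 81]]
```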

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

That also has its own Ableton Live device, from a couple of years back.

https://magenta.tensorflow.org/nsynth-instrument

NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, since they have a character of their own – and you can again play around in Ableton Live.

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that loops back on itself as it processes a sequence. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly over a particular data set means it can predict sequences more and more effectively.
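As a hedged illustration of what "predicting a sequence" looks like in practice, here's a sketch using magenta.js's MusicRNN with one of the hosted melody checkpoints; the seed notes are just an example of mine, not anything shipped with Magenta Studio:

```javascript
import * as mm from '@magenta/music';

// basic_rnn is one of the hosted melody checkpoints; swapping the URL
// for a model trained on different music changes what gets predicted.
const rnn = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');

// A four-note seed (C, D, E, G) as a NoteSequence.
const seed = {
  notes: [
    { pitch: 60, startTime: 0.0, endTime: 0.5 },
    { pitch: 62, startTime: 0.5, endTime: 1.0 },
    { pitch: 64, startTime: 1.0, endTime: 1.5 },
    { pitch: 67, startTime: 1.5, endTime: 2.0 },
  ],
  tempos: [{ time: 0, qpm: 120 }],
  totalTime: 2.0,
};

async function continueMelody() {
  await rnn.initialize();
  // The model works on a quantized grid (4 steps per quarter note here).
  const quantized = mm.sequences.quantizeNoteSequence(seed, 4);
  // Predict 32 more steps; 1.0 is a middle-of-the-road temperature.
  const continuation = await rnn.continueSequence(quantized, 32, 1.0);
  new mm.Player().start(continuation);
}

continueMelody();
```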

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.

One reason it’s cool that Magenta and Magenta Studio are open source is that you’re totally free to dig in and train models on your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature,” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence the new name – but it gives you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations and the length in bars.

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is already set up with expectations about what a drum kit is, and with melodies built around a 12-tone, equal-tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity between that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options (a rough code sketch after the list shows how a couple of them map onto the underlying open source models):

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them.

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
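As promised above, here's a rough sense of how these map onto the open source models. This sketch uses the published magenta.js MusicVAE API rather than the actual Magenta Studio plug-in internals, and clipA/clipB are stand-ins for two quantized two-bar melodies you'd pull out of Live clips:

```javascript
import * as mm from '@magenta/music';

const vae = new mm.MusicVAE(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');

// "Interpolate", roughly: encode two clips and return a path of sequences
// between them (including reconstructions of the two endpoints).
async function interpolateClips(clipA, clipB) {
  await vae.initialize();
  return vae.interpolate([clipA, clipB], 5); // 5 sequences from A to B
}
```

Groove and Drumify work on the same principle, but with a drum-performance model (GrooVAE) in place of the melody model.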

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

The more pointed question with something like Magenta is: do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, among others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise, even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music deals with weight, expression, timbre. Yes, theoretically you can treat each of those elements as new dimensions and feed them into machine learning models, but – take chant music, for example. Composers were also working with less quantifiable elements, like the meaning and sound of the text, positions in the liturgy, and multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

Where this could go next

There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with pure technical inquiry of the past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.

As Jesse Engel tells CDM:

We’re a research group (not a Google product group), which means that Magenta Studio is not static, and a lot more interesting models are probably on the way.

Things like more impressive MIDI generation (https://magenta.tensorflow.org/music-transformer – check out “Score Conditioned”)

And state of the art transcription: (https://magenta.tensorflow.org/onsets-frames)

And new controller paradigms:
(https://magenta.tensorflow.org/pianogenie)

Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.

So okay, music makers – have at it:

g.co/magenta
g.co/magenta/studio

The post Magenta Studio lets you use AI tools for inspiration in Ableton Live appeared first on CDM Create Digital Music.