Now ‘AI’ takes on writing death metal, country music hits, more

Machine learning is synthesizing death metal. It might make your death metal radio DJ nervous – but it could also mean music software works with timbre and time in new ways. That news – plus some comical abuse of neural networks for writing genre-specific lyrics, country included – next.

Okay, first, whether this makes you urgently want to hear machine learning death metal or it drives you into a rage, either way you’ll want the death metal stream. And yes, it’s a totally live stream – you know, generative style. Tune in, bot out:

First, it’s important to say that the whole point of this is, you need data sets to train on. That is, machines aren’t composing music so much as creatively regurgitating existing samples based on fairly clever predictive mathematical models. In the case of the death metal example, this is SampleRNN – a recurrent neural network that works on raw sample material, repurposed from its originally intended application, speech. (Check the original project, though it’s been forked for the results here.)

This is a big, big point – if this sounds a lot like existing music, it’s partly because it is literally sampling that content. The particular death metal example is nice in that the creators have published an academic article – and they’re open about saying they actually intend “overfitting”: that is, little bits of the samples really are playing back. The machines aren’t learning to generate this content from scratch; they’re piecing those samples together in interesting ways.

That’s relevant on two levels. One, because once you understand that’s what’s happening, you’ll recognize that machines aren’t magically replacing humans. (This works well for death metal partly because, to non-connoisseurs of the genre, the way angry guitar riffs and indecipherable shouting are plugged together already sounds quite random.)

But two, the fact that sample content is being re-stitched in time like this could suggest a very different kind of future sampler. Instead of playing the same 3-second clip on repeat or loop, for instance, you might pour hours or days of singing bowls into your sampler and then adjust dials that recreate those sounds in more organic ways. It might make for new instruments and production software.

Here’s what the creators say:

Thus, we want the output to overfit short timescale patterns (timbres, instruments, singers, percussion) and underfit long timescale patterns (rhythms, riffs, sections, transitions, compositions) so that it sounds like a recording of the original musicians playing new musical compositions in their style.
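In code terms, the basic move is easier to sketch than you might think. Here’s a minimal, hypothetical illustration in PyTorch – not the DADABOTS implementation (their actual code is linked just below), just the core idea of a sample-level RNN: treat raw audio as a long sequence of quantized values, train a recurrent network to predict the next value from the previous ones, then generate by feeding its own predictions back in.

```python
# Minimal, hypothetical sketch of a sample-level RNN - an illustration of the idea,
# not the SampleRNN / DADABOTS code (see the repo linked below for that).
import numpy as np
import torch
import torch.nn as nn

QUANT = 256  # quantize audio into 256 classes, like 8-bit samples

def quantize(audio):
    # audio is a 1-D float array in [-1, 1]; map it to integer classes 0..255
    return np.clip(((audio + 1.0) / 2.0 * (QUANT - 1)).round().astype(np.int64), 0, QUANT - 1)

class SampleLevelRNN(nn.Module):
    def __init__(self, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(QUANT, 64)  # each sample value becomes a vector
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.out = nn.Linear(hidden, QUANT)   # logits over the next sample value

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.out(h), state

model = SampleLevelRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step: predict sample t+1 from samples up to t of a short clip.
audio = np.sin(np.linspace(0, 2000, 16000)).astype(np.float32)  # stand-in for real riffs
q = torch.tensor(quantize(audio)).unsqueeze(0)                  # shape (1, T)
logits, _ = model(q[:, :-1])
loss = loss_fn(logits.reshape(-1, QUANT), q[:, 1:].reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()

# Generation: feed the model's own sampled predictions back in, one sample at a time.
with torch.no_grad():
    x, state, out = q[:, :1], None, []
    for _ in range(8000):
        logits, state = model(x, state)
        probs = torch.softmax(logits[:, -1], dim=-1)
        x = torch.multinomial(probs, 1)  # sample rather than argmax, so it doesn't get stuck
        out.append(x.item())
```

Overfit a network like this on a small, homogeneous corpus and the short-timescale texture of the training material starts leaking straight into the output – which is exactly the effect the authors describe wanting.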

Sure enough, you can go check their code:

https://github.com/ZVK/sampleRNNICLR2017

Or read the full article:

Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands

The reason I’m belaboring this is simple. Big corporations like Spotify might use this sort of research to develop, well, crappy, mediocre channels of background music – vaguely coherent workout soundtracks, faux Brian Eno, or something that sounds like Erik Satie got caught in an opium den and re-composed his piano repertoire in a half daze. And that would sort of suck.

Alternatively, though, you could make something like a sampler or DAW more human and less conventionally predictable. You know, instead of applying a sample slice to a pad and then having the same snippet repeat every eighth note. (Guilty as charged, your honor.)

It should also be understood that, perversely, this may all be raising the value of music rather than lowering it. Given how much recorded music is already available, and given that it can often be licensed or played for mere cents, the machine learning re-generation of these same genres actually demands more machine computation and more human intervention – someone has to select the datasets, set the parameters, and curate the results.

DADABOTS, for their part, have made an entire channel of this stuff. The funny thing is, even when they’re training on The Beatles, what you get sounds like … well, some of the sort of experimental sound you might expect on your low-power college radio station. You know, in a good way – weird, digital drones, of exactly the sort we enjoy. I think there’s a layperson impression that these processes will magically improve. That may misunderstand the nature of the mathematics involved – on the contrary, it may be that these sorts of predictive models always produce these sorts of aesthetic results. (The same team use Markov Chains to generate track names for their Bandcamp label. Markov Chains work as well as they did a century ago; they didn’t just start working better.)
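As a point of reference for how low-tech that trick is, here’s a hypothetical word-level Markov chain for track names in Python – the training titles are invented for illustration, not DADABOTS’ actual corpus or script:

```python
# Hypothetical word-level Markov chain for track names. The titles below are
# made up for illustration; a real run would train on a label's own track list.
import random
from collections import defaultdict

titles = [
    "pyre of the vacant sun",
    "swarm of the vacant throne",
    "inorganic pyre of dissolution",
    "dissolution of the swarm",
]

chain = defaultdict(list)
for title in titles:
    words = title.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)  # record which words have followed which

def generate(start, max_words=6):
    word, out = start, [start]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)  # next word, weighted by observed frequency
        out.append(word)
    return " ".join(out)

print(generate("pyre"))
```

That’s the whole technique: count which words follow which, then walk the chain. It worked a century ago and it works the same way now.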

I enjoy listening to The Beatles as though an alien civilization had to digitally reconstruct their oeuvre, post-apocalypse, from some fallout-shrouded, nuclear-singed remains of the number-one hits box set. (“Help! I need somebody! Help! The human race is dead!” You know, like that.)

As it moves into black metal and death metal, their Bandcamp label progresses in surreal coherence:

This album gets especially interesting, as you get weird rhythmic patterns in the samples. And there’s nothing saying this couldn’t in turn inspire new human efforts. (I once met Stewart Copeland, who talked about how surreal it was hearing human drummers learn to play the rhythms, unplugged, that he could only achieve with The Police using delay pedals.)

I’m really digging this one:

So, for now, these sample-level RNN processes mostly generate angry, angular experimental sounds – in a good way. That’s certainly true today, and it may well stay true in the future.

What’s up in other genres?

SONGULARITY is making a pop album. They’re focusing on lyrics (and a very funny faux generated Coachella poster). In this case, though, the work is constrained to text – far easier to produce convincingly than sound. Even a Markov Chain can give you interesting or amusing results; with machine learning applied character-by-character to text, what you get is a hilarious sort of futuristic Mad Libs. (It’s also clear humans are cherry-picking the best results, so these are really humans working with the algorithms much as you might use chance operations in music or poetry.)
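If you want to feel that futuristic-Mad-Libs quality for yourself, even a toy character-level model will do. Here’s a hypothetical sketch – a character n-gram with temperature sampling, generating a batch of candidates for a human to cherry-pick – standing in for the fancier neural versions (the tiny corpus here just reuses lyric fragments quoted below):

```python
# Hypothetical character-level generator: count n-gram statistics over a lyric corpus,
# sample with a temperature control, and let a human cherry-pick the funny ones.
import random
from collections import defaultdict

# Tiny stand-in corpus; a real run would train on a large genre-specific lyric dump.
corpus = "you can't take my door / barbed whiskey good and whiskey straight / " * 50
N = 4  # characters of context

counts = defaultdict(lambda: defaultdict(int))
for i in range(len(corpus) - N):
    counts[corpus[i:i + N]][corpus[i + N]] += 1

def next_char(context, temperature=1.0):
    options = counts[context[-N:]]
    if not options:
        return " "
    chars = list(options)
    weights = [options[c] ** (1.0 / temperature) for c in chars]
    return random.choices(chars, weights=weights)[0]

def generate(seed="you ", length=60, temperature=1.0):
    text = seed
    for _ in range(length):
        text += next_char(text, temperature)
    return text

# Print a batch of candidates - the human selection step is doing a lot of the work.
for t in (0.7, 1.0, 1.5):
    print(f"[temp {t}]", generate(temperature=t))
```

Raise the temperature and the output gets weirder; lower it and you get near-verbatim quotes of the training text. The editorial eye picking the keepers is still entirely human.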

Whether or not this says anything about the future of machines, the dadaist results are genuinely funny parody.

And that gives us results like You Can’t Take My Door:

Barbed whiskey good and whiskey straight.

These projects work because lyrics are already slightly surreal and nonsensical. Machines chart a course directly into the uncanny valley instead of away from it, creating the element of surprise and exaggerated unrealness that is fundamental to why we laugh at a lot of humor in the first place.

The same approach also produced the Morrissey number “Bored With This Desire To Get Ripped” – thanks to the ingenious idea of training not just on Morrissey lyrics, but also on Amazon customer reviews of the P90X home workout DVD system. (Like I said – human genius wins, every time.)

Or there’s Dylan mixed with negative Yelp reviews from Manhattan:

And maybe in this limited sense, the machines are telling us something about how we learn. Part of the poetic flow is about drawing on all our wetware neural connections between everything we’ve heard before – as in the half-awake state of creative vibrations. That is, we follow our own predictive logic without doing the usual censoring that keeps our language rational. Thinking this way, it’s not that we would use machine learning to replace the lyricist. Rather, just as with chance operations in the past, we can use this surreal nonsense to free ourselves from the constraints that normal behavior requires.

We shouldn’t underestimate, though, the human intervention in using these lyrics. The neural nets are good at stringing together short bits of words, but the normal act of composition – deciding the larger-scale structure, choosing funnier bits over weaker ones, recognizing patterns – remains human.

Recurrent neural networks probably won’t be playing Coachella any time soon, but if you need a band name, they’re your go-to. More funny text mangling from the Botnik crew.

My guess is, once the hype dies down, these particular approaches will wind up joining the pantheon of drunken walks and Markov Chains and fractals and other pseudo-random or generative algorithmic techniques. I sincerely hope we don’t wait for that to happen, but instead use the hype to seize the opportunity to better educate ourselves about the math underneath (or collaborate with mathematicians), and to see these more hardware-intensive processes in the context of some of those older ideas.

If you want to know why there’s so much hype and popular interest, though, the human brain may itself hold the answer. We are all of us hard-wired to delight in patterns, which means arguably there’s nothing more human than being endlessly entertained by what these algorithms produce.

But you know, I’m a marathon runner in my sorry way.

The post Now ‘AI’ takes on writing death metal, country music hits, more appeared first on CDM Create Digital Music.

Get 50% off Hexachords Orb Composer Artist and Pro music software


Plugin Boutique is offering an exclusive 50% discount on the Orb Composer Artist and Pro artificial intelligence music composition software, designed for composers, bands, orchestrators, or simply anyone fond of music. Orb Composer is a creative tool which you can model very precisely to assist you during your music composition work sessions. Orb is the […]

The post Get 50% off Hexachords Orb Composer Artist and Pro music software appeared first on rekkerd.org.

Flock Audio launches PATCH System digitally-controlled patch bay routing system


Flock Audio has announced availability of its innovative PATCH System, an advanced digitally-controlled, 100% analog patch bay routing system. The system combines software (PATCH APP) with 64-point connection hardware (PATCH), allowing anyone to easily control analog audio routings without having to resort to manual patch cables. “I started working on a conceptual […]

The post Flock Audio launches PATCH System digitally-controlled patch bay routing system appeared first on rekkerd.org.

Free Downgrade turns Ableton Live into lo-fi wobbly vaporwave tape

Fidelity? High-quality sound? No – degradation! And if you don’t have a ragged VHS deck or cassette Walkman handy, these free effects racks in Ableton Live will sort you out.

Downgrade is the work of Tom Cosm, long-time Ableton guru. There are five effects:

Fluffer
Corrupt
Hiss
Morph
Flutter

— plus if you give him literally US$1 or more (you cheapskate), you get an additional Stutter rack.

Basically, you get loads of controls for manipulating downsampling, tape effects, saturation, distortion, modulation of various kinds, echo, vocoder, and more. It’s a sort of retro Vaporwave starter kit if you’d like to think of it that way – or an easy, dial-up greatest hits of everything Ableton Live can now do to make your sound worse. And by worse, I mean better, naturally.
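If you’re curious what’s actually happening behind those dials, the core moves are simple DSP. Here’s a hypothetical numpy sketch of sample-rate reduction, tape-style flutter, and soft saturation – not Tom’s racks, which are built entirely from stock Live devices, just the underlying ideas:

```python
# Hypothetical lo-fi sketch in numpy: sample-rate reduction, tape-style flutter,
# and soft saturation. Not the Downgrade racks - those are stock Ableton devices.
import numpy as np

def downsample_hold(signal, factor=8):
    # Hold every Nth sample - the classic cheap sample-rate-reduction sound.
    out = np.copy(signal)
    for i in range(0, len(signal), factor):
        out[i:i + factor] = signal[i]
    return out

def flutter(signal, sr, depth_ms=2.0, rate_hz=0.7):
    # Crude tape flutter: read the signal through a slowly wobbling delay line.
    n = np.arange(len(signal))
    wobble = depth_ms / 1000.0 * sr * np.sin(2 * np.pi * rate_hz * n / sr)
    return np.interp(n - wobble, n, signal)

def saturate(signal, drive=4.0):
    # Soft-clip with tanh for a rough "tape pushed too hard" flavor.
    return np.tanh(signal * drive) / np.tanh(drive)

sr = 44100
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in input signal
lofi = saturate(flutter(downsample_hold(clean, factor=12), sr))
```

The racks do the equivalent with Live’s own devices, which is exactly why they’re so easy to pick apart and tweak.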

Ableton have been gradually adding these capabilities to Live – digital downsampling early on, and simulated analog tape, saturation, and nonlinear modulation more recently. Tom has neatly packed them into one very useful set of Racks.

Notice I say “Racks,” not Max for Live devices. That means these will mostly run on different editions of Live, and they’re a bit easier to pick apart and adjust/modify – without requiring Max knowledge.

Go download them:

https://gumroad.com/l/wmIbJ

The post Free Downgrade turns Ableton Live into lo-fi wobbly vaporwave tape appeared first on CDM Create Digital Music.

Save 25% on AutoTheory 4 Music Composition Tool by Mozaic Beats


Plugin Boutique has announced an exclusive 25% discount on the AutoTheory 4 music composition tool by Mozaic Beats, which offers industry-standard MIDI effects in a synchronized environment. AutoTheory 4 takes industry-standard MIDI effects and synchronizes them into the most expansive and easy-to-use composition software available. Our patented improvements upon traditional Scale, […]

The post Save 25% on AutoTheory 4 Music Composition Tool by Mozaic Beats appeared first on rekkerd.org.

Magix releases Sound Forge Pro 13 audio editing software


Magix has announced the release of Sound Forge Pro 13, the latest version of its software for recording, audio editing and mastering. Version 13 offers an improved user experience, with more efficiency, stability and speed. For over 25 years, SOUND FORGE Pro has set the benchmark for recording, editing and processing audio. The latest version […]

The post Magix releases Sound Forge Pro 13 audio editing software appeared first on rekkerd.org.

Reason 10.3 delivers on VST performance promises

We’ve been waiting, but now the waiting is done. Propellerhead has added the VST performance boost it had promised to Reason users – meaning plug-ins now benefit from Reason 10’s patchable racks.

I actually flew to Stockholm, Sweden back in the dark days of December to talk to the engineers about this update (among some other topics). Short version of that story: yeah, it took them longer than they’d hoped to get VST plug-ins operating as efficiently as native devices in the rack.

If you want all those nitty-gritty details, here’s the full story:

Reason 10.3 will improve VST performance – here’s how

But now, suffice to say that the main reason for the hold-up – Reason’s patchable, modular virtual rack of gear – just became an asset rather than a liability. Now that VSTs in Reason perform roughly as they do in other DAWs, what Reason adds is the ability to route those plug-ins however you like, in Reason’s unique interface.

Combine that with Reason’s existing native effects and instruments and third-party Rack Extensions, and I think Reason becomes more interesting as both a live performance rig and a DAW for recording and arranging than before. It could also be interesting to stick a modular inside the modular – as with VCV Rack or this week’s Blocks Base and Blocks Prime from Native Instruments.

Anyway, that’s really all there is to say about 10.3 – it’s what Propellerhead call a “vitamin injection” (which, seeing those dark Swedish winters, I’m guessing all of them need about now).

This also means the engineers have gotten over a very serious and time-consuming hurdle and can presumably get on to other things. It also says something about the company that they’ve been upfront in talking about a flaw before, during, and at the end of development – and that’s welcome from any music software maker. So props to the Props – now go get some sunshine; you’ve earned it. (And the rest of us can tote these rigs out into the park, too.)

Reason: what’s new

The post Reason 10.3 delivers on VST performance promises appeared first on CDM Create Digital Music.

Sonarworks announces additional headphone profiles for Reference 4


Sonarworks has just announced the latest additions to its list of supported headphone models for Reference 4, its award-winning sound calibration software. The new models include offerings from major brands like AKG, Focusrite, Pioneer, and Sennheiser, many of which have been added due to user feedback and bring the total count of supported headphone models […]

The post Sonarworks announces additional headphone profiles for Reference 4 appeared first on rekkerd.org.

Max TV: go inside Max 8’s wonders with these videos

Max 8 – and by extension the latest Max for Live – offers some serious powers to build your own sonic and visual stuff. So let’s tune in some videos to learn more.

The major revolution in Max 8 – and a reason to look again at Max even if you’ve lapsed for some years – is really MC. It’s “multichannel,” so it has significance in things like multichannel speaker arrays and spatial audio. But even that doesn’t do it justice. By transforming the architecture of how Max treats multiple, well, things, you get a freedom in sketching new sonic and instrumental ideas that’s unprecedented in almost any environment. (SuperCollider’s bus and instance system is capable of some feats, for example, but it isn’t as broad or intuitive as this.)

The best way to have a look at that is via a video from Ableton Loop, where the creators of the tech talk through how it works and why it’s significant.

Description [via C74’s blog]:

In this presentation, Cycling ’74’s CEO and founder David Zicarelli and Content Specialist Tom Hall introduce us to MC – a new multi-channel audio programming system in Max 8.

MC unlocks immense sonic complexity with simple patching. David and Tom demonstrate techniques for generating rich and interesting soundscapes that they discovered during MC’s development. The video presentation touches on the psychoacoustics behind our recognition of multiple sources in an audio stream, and demonstrates how to use these insights in both musical and sound design work.

The patches aren’t all ready for download (hmm, some cleanup work being done?), but watch this space.

If that’s got you in the learning mood, there are now a number of great video tutorials up for Max 8 to get you started. (That said, I also recommend the newly expanded documentation in Max 8 for more at-your-own-pace learning, though this is nice for some feature highlights.)

dude837 has an aptly-titled “delicious” tutorial series covering both musical and visual techniques – and the dude abides, skipping directly to the coolest sound stuff and best eye candy.

Yes to all of these:

There’s a more step-by-step set of tutorials by dearjohnreed (including the basics of installation, so really hand-holding from step one):

For developers, the best thing about Max 8 is likely the new Node features. And this means the possibility of wiring musical inventions into the Internet as well as applying some JavaScript and Node.js chops to anything else you want to build. Our friends at C74 have the hook-up on that:

Suffice to say that also could mean some interesting creations running inside Ableton Live.

It’s not a tutorial, but on the visual side, Vizzie is also a major breakthrough in the software:

That’s a lot of looking at screens, so let’s close out with some musical inspiration – and a reminder of why doing this learning can pay off later. Here’s Second Woman, favorite of mine, at LA’s excellent Bl__K Noise series:

The post Max TV: go inside Max 8’s wonders with these videos appeared first on CDM Create Digital Music.

Arturia’s 3 Compressors get creative, producer-friendly

Arturia have followed up their hit “3 Filters…” and “3 Preamps You’ll Actually Use” with the inevitable trio of compressors – but as with the other bundles, there are some twists (and lower intro prices are on now).

Before they expanded into doing their own MIDI controllers and synth hardware, Arturia rose to prominence on their modeling chops. And they have tended to spin those modeling engines and that competency in recreating vintage gear into spin-off products. The trick with the “3 [things] You’ll Actually Use” series has been rising above the crowd of vintage remakes now available to music producers. So that’s been about doing two things: one, picking three blockbusters to reproduce, and two, adding some functionality extras that let producers get creative with the results.

And that’s to me what has made the series interesting – while lots of vendors will sell you reproductions of classic studio equipment, these have been ones you might well use in the production process. It’s not only about perfecting a recording or mix, but also about integrating into your creative process as you’re developing ideas.

The compressor trio goes that route, too – so in addition to using these routinely in mixing or mastering, you can also use them for some inventive sidechaining or saturation.

The three compressors getting the Arturia treatment – and the circuitry inside:

UREI 1176 [FET transistor]
DBX 165A [solid-state VCA]
Gates STA-Level [tube]

The 1176 is pretty ubiquitously desired at this point – and of course among other recreations you can keep it in the family of creator Bill Putnam Sr. and try Universal Audio’s own creation. It’s something you can use for subtle tonal shifts even at lower levels, in addition to cranking up compression if you want. So why add another reproduction to the pile? Arturia has added a “link” button for automatic volume leveling if you want – giving you the 1176 sound but more modern behavior on demand.

And you can drive the 1176 from a sidechain input. Oh – wait, that’s really huge. And there’s a creative “Time Warp” feature with pre-delay. So thanks to the fact that Arturia aren’t being quite as precious with the historical design as some of their rivals, you can choose either an “authentic” 1176 recreation, or something that’s 1176-ish but does things that were impossible on the original analog hardware.

It’s surprising enough for the 1176 to be new again, but the other models here have some similar ideas.

Next, you’ve got the “Over Easy” 165A, an essential compressor in a lot of studios, which has some nice dirty, gritty timbral character of its own and punchy compression, plus Mid/Side processing. For this model, Arturia have introduced a whole new panel of additional controls that folds out when you want it in the UI.

Don’t be fooled by the skeuomorphic knobs; the original DBX didn’t have these options. That panel also includes their “Time Warp” pre-delay, convenient sidechaining (here with an easy “manual mode” trigger so you can preview what it’ll sound like), and now an integrated EQ. That EQ is modeled on SSL-style channels, so it’s a bit like having a pre-configured mixer rig to use with your sidechaining.

The STA-Level is maybe the most interesting of the three as far as rarity goes. It also gets (optional) modernization, with an input-output link for automatic leveling, a parallel compression mode that’s integrated into the software (plus an easy “mix” knob for adjusting how much parallel compression you want to hear), and sidechaining.
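If the jargon is stacking up: sidechaining just means the gain reduction is driven by a signal other than the one being processed, and parallel compression blends the squashed copy back in with the dry signal via a mix control. Here’s a hypothetical numpy sketch of both ideas – nothing like Arturia’s circuit models, just the concepts:

```python
# Hypothetical feed-forward compressor with an external sidechain input and a
# parallel "mix" control. Plain numpy - nothing like Arturia's circuit models.
import numpy as np

def compress(signal, sidechain=None, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0, mix=1.0, sr=44100):
    key = signal if sidechain is None else sidechain  # level-detection source
    a_att = np.exp(-1.0 / (attack_ms / 1000.0 * sr))
    a_rel = np.exp(-1.0 / (release_ms / 1000.0 * sr))
    env, level = np.zeros_like(signal), 0.0
    for i, x in enumerate(np.abs(key)):               # simple peak follower
        coeff = a_att if x > level else a_rel
        level = coeff * level + (1 - coeff) * x
        env[i] = level
    level_db = 20 * np.log10(np.maximum(env, 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)             # reduce gain above threshold
    wet = signal * (10 ** (gain_db / 20.0))
    return (1.0 - mix) * signal + mix * wet           # mix < 1.0 is parallel compression

# Classic pumping move: duck a pad, with a kick pattern as the sidechain key.
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
pad = 0.4 * np.sin(2 * np.pi * 110 * t)
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 30)   # decaying thump
kick = np.tile(kick[: sr // 4], 4)                    # four to the floor
ducked = compress(pad, sidechain=kick, threshold_db=-30.0, ratio=8.0, mix=0.8)
```

That’s the appeal of having sidechain inputs and a mix knob right on these models: the pumping, ducking, and parallel tricks become one-plug-in jobs instead of routing gymnastics.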

All in all, it’s an intriguing approach. On one hand, you get panels that look, operate, and sound more like the originals than a lot of software models at the low end of the price range. (For instance, the compressors added recently to Logic Pro X, while free, are more loose impressions than authentic recreations.)

But on the other, and here’s where Arturia clearly has an edge, you get new sidechaining and auto-leveling and other features that make these more fun to use in modern contexts and easier to drop into your creative flow.

Sidechaining these kinds of compressor models is, I think, a win on its own; the convenience of the UIs and the fact that these run natively on any platform make them invaluable to me – maybe even compared to the existing filter and preamp offerings.

I’ve been playing around with them a bit already; I’m especially curious if I can run a couple in a live context – will report back on that. But I’m already impressed on sound and functionality.

Everything is on sale, so if you own existing Arturia stuff, you could get these for as little as $/EUR 49 (or half off if you’re new to the series), or in discounted bundles. You can also buy the compressors individually, if there’s one that really catches your fancy.

Plus, there are some new tutorials to get you started:

https://www.arturia.com/products/software-effects/comps-bundle/resources#tutorials

Honestly, just one wish – the nine effects Arturia has now built make such a useful bundle that I’d love to see it on Linux. It might be the only bundle you really need.

3 Compressors You’ll Actually Need [Arturia]

The post Arturia’s 3 Compressors get creative, producer-friendly appeared first on CDM Create Digital Music.