In many, many languages, the word for “playing” music is the same as “playing” a game. So it’s fitting KORG has invaded the Nintendo Switch console with music-making – and that you can share with friends.
The translation of KORG Gadget to Nintendo’s Switch handheld is mostly novelty and fun convenience – you’re probably still going to find the iPad version easier to use solo. But where the Switch stands out is some of its multiplayer, collaborative twists. Since one key feature of the Switch (though not the Switch Lite) is TV output, you can jam on a large screen or projected image. It’s the old gaming split-screen mode, like in Mario Kart and (back in the day) GoldenEye. Combine that with the “this is just for fun” feeling you get from holding a game console, and you get something you probably wouldn’t get quite so easily with other platforms. This is literally something you might bust out at a party.
The team at online tool Splice decided to give the mode a workout, and produced a video and short blog piece sharing their experiences:
Now, of course, instruments, bands, choirs – all of these provide the same social experience. And none of those things is going away, either, judging by the ongoing market for sheet music, acoustic instruments, accessories, education, and conferences in those fields (really, look it up). So maybe it’s not about production replacing traditional music. Maybe it’s more that we have this new form of musical activity – electronic production – and so far, we haven’t had a good way to share it.
Ever tried to work with a friend in something like Ableton Live? You can easily jam together by adding extra synth gear or drum machines. But using Live itself often means “fighting” over the controls, because both the mouse/keyboard interface and things like Ableton Push tend to assume a single user. (Push will even regularly override other controllers and inputs, but I digress – this isn’t just a Live problem, but a limitation of the computer/user metaphor generally.)
So it seems like a small thing, but even this crude setup shows how you might think about this differently.
More from KORG on Gadget as used for educational purposes, and demonstrating its multiplayer features. (By the way, I consulted, via New York’s Dubspot, with Rockstar Games on how to make a handheld gaming platform work in music education. The idea has been floating around – but today’s Switch is a far better choice than the then-current Sony PSP Rockstar was using – sorry, Sony.)
This one, subtitled in English, goes further into that classroom. I will just assume that in Japan it’s normal for all music teachers to wear lab coats.
Oh, and – another thing. Gaming in general offers an alternative paradigm for how we think about widespread access to music creation, and difficulty level. Not to harp endlessly on Amazon this week, but part of why I was triggered by their keynote was how tired the “everyone can make music without any skill or effort” refrain was.
Gaming has had to tackle this perception, too. But consistently, actual gamers ask for experiences that last. That might be a so-called “casual” game that still sucks up time and ramps up difficulty, or it might be punishing “hard-core” games. But one thing gamers have generally resisted is games that play themselves – which is why the “AI makes music for you” model is so screwed up. (The exception perhaps proves the rule – some mobile games now leverage the data on your usage to essentially squeeze money out of you, leaving the user doing little. Most everyone hates this, and even Apple and Google have had to intervene by changing the underlying business model.)
So back to multiplayer music – KORG Gadget doesn’t take away any of the fundamental work of music production you’d find in any other tool. What’s fun about it is making mistakes, screwing up together with other people. And even though theoretically someday this could work online, you can also see in the video that there’s something invaluable about being in the same room together with friends.
I personally think as music production reaches further and further around the world, it’s less and less likely you’ll need to connect online just to find someone else. But of course online multiplayer is there, too, when you want it – still with the large-scale visual feedback of splitscreen. It’s also not hard to imagine that the Twitch video streaming phenomenon will soon grow bigger in music, with some early indications of crossover already.
Just look at installed base. The iPad is the assumed go-to for this sort of idea, and has its own jam-friendly Ableton Link protocol for just this use case. But iOS has limitations of its own, and it’s clear there are some different ideas possible even where you wouldn’t expect them, on Nintendo Switch.
I think there’s a lesson here for being creative with computing platforms, or even offering devices with video out – people do still own TVs and projectors.
Alternatively, print out this story, stick it in a file folder with your taxes, and tell your accountant that yes, you do need to deduct the cost of a Nintendo Switch. You’ve got just a few shopping days left until the end of the year if you want that to get taken off your tax bill for 2018.
You’re welcome. (Oh, you might want to redact this last bit. Guten Morgen, Finanzamt!)
It’s got a $60 license for nearly everyone, you can evaluate it for free, and now Reaper – yet again – adds a ton of well-implemented power features. Reaper 6 is the newest edition of this exceptionally capable DAW.
New in this release:
Use effects plug-ins right from the tracks/mixer view. So, some DAWs already have something like a little EQ that you can see in the channel strip visually, or maybe a simple compressor. Reaper has gone further, with small versions of the UI for a bunch of popular plug-ins you can embed wherever you want. That means less jumping in and out of windows while you patch.
You get EQ, filtering, compressor, and more. (ReaEQ, ReaFIR, ReaXcomp, graphical JSFX, etc.)
Powerful routing/patching. The Routing Diagram feature gives you an overview of how the audio signal is routed throughout the environment, which makes sends and effects and busing and sidechaining and so on visual. It’s like having a graphical patchbay for audio right inside the DAW. (Or it’s like the ghost of the Logic Pro Environment came back and this time, average people actually wanted to use it.)
Auto-stretch audio. Now, various DAWs have attempted this – you want sound to automatically stretch and conform as you adjust tempo or make complex tempo changes. That’s useful for film scoring, for creative purposes, and just because, well, you want things to work that way. Now Reaper’s developers say they’ve made it easy to do this with tempo-mapped and live-recorded materials (Auto-stretch Timebase). This is one we’ll have to test.
Make real envelopes for MIDI. You can draw continuous shapes for your MIDI control adjustments, complete with curve adjustment. That’s a bit like what you get in Ableton Live’s clip envelopes, as well as other DAWs. But it’s a welcome addition to Reaper, which increasingly starts to share the depth of other older DAWs, without the same UI complexity (cough).
It works with high-density displays on Mac and PC. That’s Retina on Mac and the awkwardly-named HiDPI on PC. But the basic idea is, you can natively scale the default theme to 100%, 150%, and 250% on new high-def displays without squinting. Speaking of which…
There’s a new tweakable theme. The new theme is set up to be customizable with Tweaker script.
Big projects and displays work better. The developers say they’ve “vastly” optimized 200+ track-count projects. On the Mac, you also get faster screen drawing with support for Apple’s Metal API. (Yeah, everyone griped about that being Mac-only and proprietary, but it seems savvy developers are just writing for it and liking it. I’m honestly unsure what the exact performance implications are of doing the same thing on Windows, though on the other hand I’m happy with how Reaper performs everywhere.)
And more. “Dynamic Split improvements; import and render media with embedded transient information; per-track positive or negative playback offset; faster and higher quality samplerate conversion; and many other fixes and improvements.”
Honestly, I’m already won over by some of these changes, and I had been shifting conventional DAW editing work to Reaper as it was. (That is, sure, Ableton Live and Bitwig Studio and Reason and whatever else are fun for production, but sometimes you want a single DAW for editing and mixdown that is none of those others.)
Where Reaper stands out is its extraordinary budget price and its no-nonsense, dead-simple UI – when you really don’t want the DAW to be too creative, because you want to get to work. It does that, but still has the depth of functionality and customization that means you feel you’re unlikely to outgrow it. That’s not a knock on other excellent DAW choices, but those developers should seriously consider Reaper as real competition. Ask some users out there, and you’ll hear this name a lot.
Now if they just finish that “experimental” native Linux build, they’ll really win some nerd hearts.
AI can be cool. AI can be strange. AI can be promising, or frightening. Here’s AI being totally uncool and not frightening at all – bundled with a crappy MIDI keyboard, for … some … reason.
Okay, so TL;DR – Amazon published some kinda me-too algorithms for music generation of the sort we’ve seen for years from Google, Sony, Microsoft, and hundreds of data scientists, bundled a crap MIDI keyboard for $99, and it’s the future! AI! I mean, it definitely doesn’t just sound like a 90s General MIDI keyboard with some bad MIDI patterns. The machine has the power of literally all of music composition ever. Now anyone can make musiER:Jfds;kjsfj l; jks
Oops, sorry, I might have briefly started banging my head against my computer keyboard. I’m back.
This is worth talking about because machine learning does have potential – and this neither represents that potential nor accurately represents what machine learning even is.
If at this point you’re unsure what AI is, how you should feel about it, or even if you should care – don’t worry, you’re seriously not alone. “AI” is now largely shorthand for “machine learning.” And that, in turn, now most often refers to a very specific set of techniques currently in vogue that can analyze data and generate predictions by deriving patterns from that data, not by applying rules. That’s a big deal in music, because traditionally both computer models and even paper models of theory have relied on rules more than probability. You can think of AI in music as related to a dice roll – a very, very well-informed, data-driven, weighted dice roll – and less like a theory manual or a robotic composer or whatever people have in mind.
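To make that weighted-dice-roll idea concrete, here’s a minimal sketch – the notes and probabilities are invented for illustration, not taken from any real model. Instead of applying a theory rule like “resolve to the tonic,” a data-derived model just samples the next note from transition probabilities it pulled out of a corpus:

```python
import random

# Hypothetical transition probabilities "learned" from some corpus:
# given the current note, how likely is each possible next note?
transitions = {
    "C": {"C": 0.1, "E": 0.5, "G": 0.4},
    "E": {"C": 0.3, "E": 0.1, "G": 0.6},
    "G": {"C": 0.7, "E": 0.2, "G": 0.1},
}

def next_note(current, rng=random):
    """The weighted dice roll: sample the next note from the distribution."""
    choices = transitions[current]
    return rng.choices(list(choices), weights=choices.values())[0]

# Generate an eight-note phrase starting from C.
note = "C"
phrase = [note]
for _ in range(7):
    note = next_note(note)
    phrase.append(note)
print(phrase)
```

Every run produces a different phrase, and no rule of harmony is consulted anywhere – the output is only as musical as the statistics that went in. That is the whole trick, scaled up.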
Wait a minute – that doesn’t sound like AI at all. Ah, yes. About that.
So, what I’ve just described counts as AI to data scientists, even though it isn’t really related very much to AI in science fiction and popular understanding. The problem is, clarifying that distinction is hard, whereas exploiting that misunderstanding is lucrative. Misrepresenting it makes the tech sound more advanced than arguably it really is, which could be useful if you’re in the business of selling that tech. Ruh-roh.
With that in mind, what Amazon just did is either very dangerous or – weirdly, actually, very useful, because it’s such total, obvious bulls*** that it hopefully makes clear to even laypeople that what they claim they’re doing isn’t what they’re demonstrating. So we get post-curtain-reveal Oz – here, in the form of Amazon AI chief Dr. Matt Wood, pulling off a bad clone of Steve Jobs (even black-and-denim, of course).
Dr. Matt Wood does really have a doctorate in bioinformatics, says LinkedIn. He knows his stuff. That makes this even more maddening.
Let’s imagine his original research, which was predicting protein structures. You know what most of us wouldn’t do? We wouldn’t stand in front of a packed auditorium and pretend to understand protein structures if we weren’t microbiologists. And we certainly wouldn’t go on to claim that predicting protein structures meant we could create life – and also, that we’re God now.
But that is essentially what this is, with music – and it is exceedingly weird, from the moment Amazon’s VP of AI is introduced by… I want to say a voiceover by a cowboy?
Summary of his talk: AI can navigate moon rovers and fix teeth. So therefore, it should replace composers – right? (I can do long division in my head. Ergo, next I will try time travel.) We need a product, so give us a hundred bucks, and we’ll give you a developer kit that has a MIDI keyboard and that’s the future of music. We’ll also claim this is an industry first, because we bundled a MIDI keyboard.
At 7 minutes, 57 seconds, Dr. Wood murders Beethoven’s ghost, followed at 8:30 by a sort of bad machine learning example augmented with GarageBand visuals and some floating particles that I guess are the neural net “thinking”?
Then you get Jonathan Coulton (why, JoCo, why?) attempting to sing over something that sounds like a stuck-MIDI-note Band-in-a-Box that just crashed.
Even by AI tech demo standards, it’s this:
Deeper question: I’m not totally certain what we in music have done to earn the expectation from the rest of society that not only is what we do already not worth paying for, but everyone should be able to do it without expending any effort. I don’t have this expectation of neuroscience or basketball, for instance.
But this isn’t even about that. This doesn’t even hold up to student AI examples from three years ago.
It’s “the world’s first” because they give you a MIDI keyboard. But great news – we can beat them. The AWS DeepComposer isn’t shipping yet, so you can actually be the world’s first right now – just grab a USB cable, a MIDI keyboard, connect to one of a half-dozen tools that do the same thing, and you’re done. I’ll give you an extra five minutes to map the MIDI keys.
Or just skip the AI, plug in a MIDI keyboard, and let your cat walk over it.
Translating the specs, then:

1. A s***ty MIDI keyboard with some buttons on it, and no “AI.”

2. Some machine learning software, with pre-trained generative models for “rock, pop, jazz, and classical.” (aka, and saying this as a white person with a musicology background, “white, white, black-but-white people version, really old white.”)

3. “Share your creations by publishing your tracks to SoundCloud in just a few clicks from the AWS DeepComposer console.”

Technically #1 has been available in some form since the mid-80s and #3 is true of any music software connected to the Internet, but … #2, AI! (Please, please say I’m wrong and there’s custom silicon in there for training. Something. Anything to make this make any sense at all.)
I would love to hear I’m wrong and there’s some specialized machine learning silicon embedded in the keyboard but… uh, guessing that’s a no.
Watch the trainwreck now, soon to join the annals of “terrible ideas in tech” history with Microsoft Bob and Google Glass:
There are some powerful sound creation possibilities lurking beneath Live’s built-in devices. Finding inspiration from Live effects is the topic of my second collaboration with Riemann Kollektion.
In the first part of this tutorial series, I told you how to finish tracks faster using some of the latest shortcuts in Ableton Live 10.1. This time, I’ve done a round-up of some tricks and tips with Echo, Delay, Convolution, and other devices:
Listen to the Earth – no, literally. Australia has a world-first project to map the voice of its wildlife, which will make a “galaxy of sounds” that can be heard like the ecosystem’s heartbeat.
The galaxy metaphor isn’t just poetic. Scientists already observe the cosmos through radio signals, optical imaging, and other techniques. A network of hundreds of solar-powered audio recorders will do the same across Australia.
Those sensors themselves will be familiar to electronic composers and field recordists. They use microphones and standard SD cards, but operate entirely on solar power – a necessity for remote locations. The recordings are continuous, and spread through the vast country, across some one hundred sites and 400 sensors covering “desert, grassland, shrublands and temperate, subtropical and tropical forests.”
From a scientific standpoint, this is already a big deal, because it can look at geography and time comprehensively, instead of in little pieces that can rob recordings of context and meaning. That means seasonal change, or even changes over the five-year mission, will appear in the recordings.
It’s also significant that the project, and the vast amounts of sound data it produces, will be available to the public. That hints at both educational opportunities and crowd-sourced analysis – especially since everyday Australians may be on the ground near these hundred locations. The whole project will keep its data online and accessible. It’s a welcome change in how we talk about big data: from government surveillance and corporate control to actually harnessing that power to give the public access and better awareness of the ecosystem on which we depend.
Audio recording is less intrusive and far more comprehensive (in time and space) than conventional methods. And there are real, practical possibilities, as ABC News in Australia reports from Queensland:
[Biology Professor Lin Schwarzkopf, James Cook University] had also mapped the noise of the aggressive Indian myna birds and the rare black-throated finch, which environmentalists warned was a species under threat from the Adani coal mine in central Queensland.
“It’s useful because we can then look at things like species decline so we can understand when they are disappearing, like the black-throated finch. Over time we can listen for their calls and see if they are still there,” she said.
In the case of the Australian project, there are various partners, including universities, parks and government, land owners, indigenous partners, and advocacy groups and NGOs.
This strikes me as an opportunity for composers and musicians to partner with scientists, too. Whereas music once made primitive mimicry of bird songs, musicians now possess a technological and creative/cultural skillset both to capture more data and to express its significance to the broader public and society. So let’s get on it, shall we? (Europe, for instance, is a lot smaller and more manageable…)
Overwhelmed with music toys? We can get you a bass synth that sounds like no other – plus a way to connect all your gear for $10. Happy Black Friday to you.
geode: $129 synth hardware, ships free
First, there’s our very own MeeBlip geode hardware synth. It’s just US$129, and this week only, we’ll ship it to nearly the whole world for free. There’s not another bass synth that sounds like it, in 2019 or 1979 or any other year. Plug-and-play USB connects MIDI and power with one connection – even to a smartphone (with adapter). The sound itself features biting oscillators and a crunchy analog filter.
So if you’re looking for a little sonic inspiration and a unique bass sound for yourself – or as a gift – now’s the time to grab this. Sale lasts this week only.
It wouldn’t be a Black Friday deal without a ridiculous offer for a limited time, so we’ve got one more for you. Our thru5 MIDI splitter kit is easy to assemble, and splits one MIDI input to five outputs, so you can connect all your gear.
And for this week only, it’s yours for just US$9.99 – a perfect Secret Santa gift or stocking stuffer or Hanukkah present for someone you know who’s handy with a soldering iron or wants to learn. Or it’s a way to get your studio in order for a winter cleaning.
Shipping if you’re just buying thru5 is $3.95 to USA, $5.95 international, varies in Canada. (Free with orders above $99 – so, for instance, you could add in a geode!)
Behringer will remake the rare Wasp and now just announced a 4-voice paraphonic Moog clone. Having trouble keeping track? Here’s a recap.
With so many remakes now shipping or teased, the hard part may be just keeping track of what Behringer is doing, what’s available already, and what’s coming. Let’s step back and review which products are currently available or inbound. Competitors ignore this list at their peril.
This is what Tom Whitwell at Music Thing (then a blog, now a modular brand) called the Behringer “photocopier.” But while that was until recently a disparaging remark, fans of the brand now eagerly follow these cut-rate remakes. So while I say “clone” rather than the company’s preferred “authentic reproduction,” there’s no doubt that the intention is to do these as remakes.
What’s remarkable is how many of these synths came from 1979 alone, or within a year or two.
Oh yeah, also – despite the company’s claims, while this hardware is far cheaper than most of the used equivalent originals, there are often inexpensive alternatives of new or similar instruments. So let’s get into both what Behringer is offering, and whether you might consider other options before spending your hard-earned scratch:
Based on: Roland TR-808 (1980, and not, sadly, the Soviet rocket RD-8)
List/street: $524.99 / $400
New features: What Behringer added here mostly was in the sequencer, which like the other remakes has some more advanced features. There’s also a sort of envelope follower called the Wave Designer.
The competition: Arturia’s DrumBrute Impact is actually cheaper, at around $300. It’s got fewer outs, but an advanced sequencer and a distinctive sound. Roland’s TR-8S or even a used buy on a TR-8 give you faders and additional effects (plus on the TR-8S, the ability to load your own samples), and could still be a worthy upgrade – with effective TR modeled sounds. Roland’s Boutique TR-08 on the other hand looks comparatively lean versus the Behringer, and it’d be nice to see the company that made the original 808 respond with something more competitive at the entry level. You can have my TR-8S when you pry it out of my cold, d– actually, my hot, sweaty, fader tweaking fingers.
New features: The arpeggiator is the main addition here, plus a distortion switch, but basically this is a bare-bones 303 clone – at an insanely cheap price.
The competition: There are loads of 303 remakes out there, analog and digital. Roland for their part has the Boutique with a very useful delay and semi-useful distortion (theirs with an actual knob, not just a switch). But to my knowledge, the only real competition at this price is a software plug-in. Or get a KORG volca series synth for a different sound; even the volca bass is somewhat refreshingly not a 303. Oh yeah, or think about a two-oscillator bass synth, but I’m biased. Yes, of all of these – here is the one where Behringer can be expected to totally own a category, maybe to the point of us winding up with way too many acid tracks in about a year.
New features: As with the others, this is mostly about squeezing this in the Eurorack chassis, but there’s also a new overdrive circuit. Just remember, like the original, you have to give the analog circuits time to warm up.
The competition: You can get a surprising number of capable synths these days for $300 or even less, but a Minimoog remake from Moog will certainly be a luxury item. That said, if you’re willing to spend a little more, you can get something like the Moog Sub Phatty for around $500 on a Black Friday sale – with keyboard. It’s not a Minimoog model D, but it also moves into some new sonic territory, and you get the feeling of owning an actual Moog. That’s not to sneeze at the Model D – this thing has made a big impact, and maybe its biggest competition comes from Behringer itself, with the also inexpensive (and far more patchable/open-ended) Neutron.
New features: Digital multi-effects are the main addition here, plus the arp/sequencer, onboard storage, and digital I/O found on the others. The Odyssey is also a keyboard synth, unlike the other Eurorack things in this list —
Competition: – but it goes up against KORG’s Arp Odyssey reissue, made in collaboration with the synth’s original creators (if at a higher cost).
New features: These are the most sensible additions of the bunch. Behringer chose to add the things 101 owners modded in themselves; namely, FM and waveforms (plus MIDI, of course). Actually, that raises the question of why Behringer didn’t add the 303 sound mods to that remake, but – hey, maybe someone else also wants to remake the 303 now, too? I’m sure 303 remakes will never die.
The competition: Behringer on this one did what Roland didn’t do – make a remake of the 101 with full-sized keys and a standard handle option for keytar-style playing. Roland chose to go with the tiny Boutique format. On the other hand, the SH-01A is a four-voice instrument, which winds up being really useful in conjunction with its triggers and sequencer. Now if Roland would just offer that in a full-sized keytar option, it seems like they’d have a hit. I might still buy the SH-01A for something small and four voice, though. (Don’t send letters. I know I like small things and digital synths more than some of our … readers. Ahem.)
New features: Just the usual I/O additions – and they cut off an octave on the keyboard to save space.
The competition: For once, Behringer is the more expensive option, believe it or not (apart from an astronomically pricey original VP-330). The Roland Boutique VP-03 is a solid unit at a fraction of this price – it seems new hardware stock is mostly gone, but they fetch around $300 used (without the keyboard, so around $400 with). Sure, it’s digital, but the sound is good; mainly it’s down to whether you want to save some money. Roland’s JD-Xi is also a vocoder for $500 and is a far more flexible and powerful synthesizer, though the design has all the charm of something you’d find on sale in a Guitar Center on the Death Star. (Dunno, maybe Kylo Ren had this keyboard in the emo rock band he played on the side while studying the Dark Side.)
If all you want is a vocoder, even the Roland VT-4 box is an option for only a couple hundred bucks; its predecessor the VT-3 you might be able to get used, like, for free. Now, maybe an analog recreation is more serious but… well, I leave it to you to decide how serious you want to get about a vocoder/string synth unitasker. But yes, Behringer are the only ones with something really like the original.
For that reason, this is kind of the most rational of a lot of the choices here. But it’s a vocoder/string synth, so “rational” depends on whether that’s something you need.
New features: This is what happens if you take the Model D, put it in a new case with a keyboard, and make it 4-voice analog paraphonic (there’s just one filter) instead of monophonic. And it probably tips Behringer’s hand as far as what you should expect from the other models here – they’ll gradually translate the Eurorack-case monophonic models to 4-voice models with keyboards, since they’ve already plucked the low-hanging fruit of what people most recognize in vintage brands.
As with past Behringer outings, though, they’re teasing some time before they’re shipping. The risk: people might defer purchase of their products. The more likely outcome: people might defer purchase of competing products.
New features: Arp, sequencer, Eurorack chassis – you’re seeing the pattern. There’s also a “drone mode” switch.
The competition: It’s a bit painful when Behringer takes on small, independent makers like Dave Smith and Tom Oberheim. The new Sequential (formerly Dave Smith Instruments) has lots of beautiful new designs. But if you’re on a $350 budget, you can get the wonderful Evolver or Tetra, for example, unique and original Dave Smith-designed sound modules. Or save up your money for a more advanced polysynth based on the ideas behind the Prophet series. Or there’s the Pioneer AS-1, which is a single-voice version of the Prophet-6. I think these come closer to the 21st century vision Dave’s got, and they’re worth supporting for that reason. Oh yeah, and you can buy them now, rather than waiting on the PRO-1.
Behringer WASP DELUXE
Based on: Electronic Dream Plant (EDP) Wasp (1978)
Preorder price: $300
Shipping date: unknown
New features: Behringer went with a desktop sound module and didn’t reproduce the membrane keyboard edition of the first Wasp. This otherwise mostly looks like that hardware, though. It does MIDI and USB now, like the other stuff here.
The competition: There’s not really another Wasp remake that I know of, or anything that close. On the other hand, the Arturia MicroFreak is also a digital-analog hybrid, does way more in sounds, and comes with an innovative keyboard, so to me it’s a better (and more forward-thinking) use of your $300. There’s also the feature-packed Behringer Crave for $50 less than this, available now, so it’s hard to imagine preordering unless you’re really a die-hard fan of the original.
If you’re KORG, Roland, Polivoks, Black Corporation (maybe their CS-80-inspired Deckard’s Dream, maybe Kijimi) … it seems like Behringer is coming for you, or wants to. See previously:
Also, now they want to get into VST plug-ins?
(Comments on that last one are hilarious.)
Fear of clones?
It’s clear at this point that the Music Tribe (Behringer) is leveraging both analog circuitry and some chips that allow it to inexpensively reproduce popular models. And they’ve built a bigger team to do engineering alongside their own manufacturing operation in China. I think they’ve even taken to using the word “clone” in passing on social media.
Because these are now clones of fairly ancient products, this product strategy in itself wouldn’t make the company controversial. Rather, it’s Behringer’s aggressive strategy with regard to competitors, press, PR, and intellectual property that has made it divisive in the synth business. As I noted last week, that includes the recent move of registering trademarks actively owned by competitors. Not only does KORG definitely own and actively use Mono/Poly, but we’ve confirmed that even the Polivoks name is registered and used on an active product.
There’s also a deeper question here, and it’s not just about Behringer. As both analog and digital synthesis have become more affordable, will we use that inexpensive power to make new things, or recreate old ones? So far, Behringer has demonstrated that recognized products like the 303, 808, and Minimoog go more viral on social media than new synths. And so far, companies like Roland and other original brands who made these products haven’t succeeded in stopping Behringer from naming and dressing up its products to look like the vintage ones. That opens the door to other manufacturers easily undercutting historic brands and smaller boutique makers on price.
But it’s unclear, once synth fans have stocked up on well-known items like the 303, whether this cheap remake trend will be sustainable.
The real power of machine learning may have nothing to do with automating music making, and everything to do with making sound tools hear the way you do.
There’s a funny opening to the release for Deezer’s open source Spleeter tool:
While not a broadly known topic, the problem of source separation has interested a large community of music signal researchers for a couple of decades now.
Wait a second – sure, you may not call it “source separation,” but anyone who has tried to make remixes, or adapt a song for karaoke sing-alongs, or even just lost the separate tracks to a project has encountered and thought about this problem. You can hear the difference between the bassline and the singer – so why can’t your computer process the sound the way you hear? Splitting stems out of a stereo audio feed also demonstrates that tools like EQ, filters, and multiband compressors are woefully inadequate to the task.
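To see why those frequency-based tools fall short, here’s a tiny pure-Python sketch (the numbers are hypothetical, nothing from Spleeter itself). Once two sources overlap at the same frequency, any EQ or filter – which applies a single gain per frequency – must affect both of them identically:

```python
import math

# Two hypothetical sources that overlap at the same frequency:
# a "voice" partial and a "bass" partial, both at 440 Hz.
sr = 8000  # sample rate, Hz
voice = [0.5 * math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
bass = [0.5 * math.sin(2 * math.pi * 440 * n / sr + 1.0) for n in range(sr)]
mix = [v + b for v, b in zip(voice, bass)]

# An EQ notch at 440 Hz is just a per-frequency gain. Since this mix
# contains only that one frequency, applying the gain scales voice AND
# bass together -- there is no gain that returns just one of them.
g = 0.0  # full cut at 440 Hz
notched = [g * s for s in mix]

print(max(abs(s) for s in notched))  # the voice is gone, but so is the bass
```

That is the wall conventional DSP hits, and it’s exactly where a learned, context-aware separation model has something new to offer: it can use patterns in time and timbre, not just frequency, to decide what belongs to which source.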
Here’s where so-called “AI” is legitimately exciting from a sound perspective.
It’s unfortunate in a way that people imagine that machine learning’s main role should be getting rid of DJs, music selectors, and eventually composers. And that’s unfortunate not because the technology is good at those things, but precisely because so far it really isn’t – meaning people may decide the thing is overhyped and abandon it completely when it doesn’t live up to those expectations.
But when it comes to this particular technique, neural network machine learning is actually doing some stuff that other digital audio techniques haven’t. It’s boldly going where no DSP has gone before, that is. And it works – not perfectly, but well enough to be legitimately promising. (“It will just keep getting better” is a logical fallacy too stupid for me to argue with. But “we can map out ways in which this is working well now and make concrete plans to improve it with reason to believe those expectations can pan out” – yeah, that I’ll sign up for!)
Spleeter from music streaming service Deezer (remember them?) is a proof of concept – and one you can use right now, even if you’re not a coder. (You’ll just need some basic command line and GitHub proficiency and the like.)
It’s free and open source. You can mess around with this without paying a cent, and even incorporate it into your own work via a very permissive MIT license. (I like free stuff, in that it also encourages me to f*** with stuff in a way that I might not with things I paid for – for whatever reason. I’m not alone here, right?)
It’s fast. With GPU acceleration – even on my humble Razer PC laptop – you get somewhere on the order of 100x real-time processing. That really demonstrates the kind of computational performance we’d want in real products – and it’s fast enough to incorporate into your work without, like, cooking hot waffles and eggs on your computer.
It’s simple. Spleeter is built with Python and TensorFlow, a popular combination for AI research. But what you need to know if you don’t already use those tools is, you can use it from a command line. You can actually learn this faster than some commercial AI-powered plug-ins.
It splits things. I buried the lede – you can take a stereo stream and split it into separate stems: vocals, drums, bass, accompaniment, even piano, depending on the model. And –
It could make interesting results even when abused. Sure, this is trained on a particular rock-style instrumentation, meaning it’ll tend to fail when you toss in audio material that deviates too far from the training set. But it will fail in ways that produce strange new sound results, meaning it’s ripe for creative misuse.
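To show just how little command-line proficiency is actually required, here’s a hedged sketch of what the separation invocation looks like, built up in Python so the pieces are labeled (the exact flags can vary between Spleeter releases, and the file names are placeholders):

```python
import shlex

def spleeter_command(input_file, output_dir, stems=2):
    """Build a Spleeter 'separate' invocation for the chosen stem model.

    Pretrained models ship for 2, 4, or 5 stems; treat the flag layout
    as a sketch, since it has shifted slightly between releases.
    """
    return ["spleeter", "separate",
            "-p", f"spleeter:{stems}stems",
            "-o", output_dir,
            input_file]

cmd = spleeter_command("mix.wav", "stems", stems=4)
print(" ".join(shlex.quote(part) for part in cmd))
```

With the 4-stem model, the output folder ends up holding separate vocals, drums, bass, and “other” audio files – which is the whole trick in one command.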
Friend-of-the-site Rutger Muller made use of this in the AI music lab I participated in and co-facilitated in Tokyo, complete with a performance in Shibuya on Sunday night. (The project was hosted by music festival MUTEK.jp and curated by Maurice Jones and Natalia Fuchs aka United Curators.) He got some really interesting sonic results; you might, too.
Releasing Spleeter: Deezer Research source separation engine
Spleeter remains an experimental tool, most interesting for research. Commercial developers are building tools that use these techniques but wrap them in a more practical workflow for musicians. Check out, for instance, Accusonus – more on what their tools can do for you, and how they’re working with AI, very soon.
No new ideas in synthesizers? Not so, says independent developer Artemiy Pavlov. He was excited enough about KORG’s direction that he’s written about why he thinks it changes music tech for the better.
The Ukraine-based coder who releases under his Sinevibes brand is someone we’ve followed on CDM for some years, as a source of very elegant Mac-only plug-ins. Making those tools for one company’s piece of hardware (one that isn’t Apple) is a new direction. But that’s what he’s done with KORG’s ‘logue plug-in architecture, which now runs on the minilogue xd and prologue keyboards, as well as the $100 NTS-1 kit. As long as you’ve got the hardware, you can run oscillators, filters, and effects from third-party developers like Sinevibes – or even grab the SDK and make your own, if you’re a coder.
Now, of course Artemiy is biased – but that’s kind of the point. What biases his one-man dev operation is that he’s clearly had a really great experience developing for KORG’s synths, from coding and testing to turning it into a business.
This is not a KORG advertisement, even if it sounds like one. I actually didn’t even tell them it was coming, apart from mentioning something was inbound to KORG’s Etienne Noreau-Hebert, chief analog engineer. But because it impacts both interested musicians and developers, I thought it was worth getting Artemiy’s perspective directly.
So here’s Artemiy on that – and I think this does offer some hope to those wanting new directions for electronic musical instruments. This is labeled “Op Ed” for a reason – I don’t necessarily agree with all of it – but I think it’s a unique perspective, not only with regard to KORG hardware but also the potential of this sort of embedded development for the industry and musicians generally. -Ed.
In early 2018, for its 50th anniversary, Korg introduced the prologue. It wasn’t just a great-sounding synthesizer with shiny, polished-metal looks. It introduced a whole new technical paradigm that has brought a tectonic shift to the whole music hardware and software industry.
Korg has since taken the concept of “plug-ins in mainstream hardware synths” further, to the much more compact and affordable minilogue xd and Nu:Tekt NTS-1, proving that it’s more serious about this than even I myself thought.
If you thought the platform just lets you load custom wavetables and store effect presets, you have no idea how much you’ve been missing! This is also for those who have been waiting for something that really looks to the future – and for anyone wanting to scale down their rig while scaling up their sonic palette. As a control freak, I could imagine new features from the moment I touched the synth – even though it’s someone else’s product.
Here are five ways Korg’s plugin-capable synths completely change the game for all of us, each described as before and after:
Before. When you buy a synthesizer, all the features inside are what the manufacturer decided it should have. Each customer gets the exact same thing – same features, same sound.
After. With Korg’s hardware plugin architecture, the “custom” is finally back in “customer” – you can configure the oscillator and the effect engines to your liking, and make your instrument unique. Fill it with the exact plugins you want, tailored to your own style. You have 48 plugin slots available, and chances are nobody else on the planet configures them the way you do.
Before. While we do have digital and analog instruments with very capable synthesis and processing engines, to really get into more unusual or experimental sonic territory, you almost certainly need extra outboard gear – often a lot of it, which means more to transport and wire up.
After. The plug-ins now allow you to expand the stock generation and processing capabilities way beyond the “traditional” stuff, and have a whole powerhouse inside a single instrument. Just by switching from preset to preset, you can have the synthesizer dramatically shift its character, much as if you were switching from one hardware setup to another. Much less gear to carry, fewer things to go wrong, literally zero setup time.
Here’s what I mean, just with currently-available plug-ins. How about a sound-on-sound looper, or a self-randomizing audio repeater, right inside your synth?
And how about running unorthodox digital synthesis methods, in parallel with a purely analog subtractive one?
Before. With almost all gear, you are completely at the mercy of the manufacturer regarding what’s available for your instrument (aside from sound packs, which obviously can still only use the stock features).
After. Not only do you decide which engines your synth has – cherry-picking sound generation and processing plugins from independent developers – but you can also grab the SDK and build whatever you want yourself. [Ed. See below for some notes on just how easy that is.]
Before. While some manufacturers might update their instruments with some major features from time to time, to be brutally honest, most won’t. Typically, just a couple years after initial release, you can consider the feature set in your synthesizer frozen… forever.
After. At any time in the future, you can erase some or even all the plug-ins on your synth and install different ones. So it can stay fresh and interesting for years or even decades, without you having to buy new hardware to get a new sound. The scale of your capabilities will actually only keep increasing as the selection of third-party plugins continues growing.
For example, say you have two different live projects. A single instrument can now represent two entirely different sets of sounds, using plugins and presets. In just a couple of minutes you can fully clean your Korg and reload it with a whole new “sonic personality” – no installers to run, no activation hassle, just transfer and go.
Before. High-end features almost always command high-end prices – or demand a high level of coding experience to work with open-source firmware (in the rare cases where it’s actually available).
After. The ticket price for entry into this world of user-configurable synthesizers is Korg’s tiny and super-affordable monophonic Nu:Tekt NTS-1 (around $100), and it still has 48 plug-in slots just like its bigger brothers. Speaking of the bigger brothers, at the other end of the range we have the flagship 8- or 16-voice polyphonic prologue ($1500-2000), and 4-voice minilogue xd in both keyboard and desktop versions ($600-650). There’s now a plug-in-capable synth for everyone.
Which KORG do you want?
So, which one to choose? Each of the models has its unique advantages and unique ways it can integrate into your existing setup – or create a totally new one. [Ed. I’ve confirmed previously with KORG that all three of these models are equally capable of running this plug-in architecture. There’s also a fourth option, a developer board, that Artemiy doesn’t mention, though at this point you’re likely to get the NTS-1.]
NTS-1 is probably the most quirky of the lot, but is also surprisingly versatile for its tiny size. First, it can be easily powered off any portable battery, and second, it has a stereo input that lets you run any external audio through up to three different plugin effects, silently making it “the stompbox of your dreams.”
The mid-range minilogue xd doesn’t have an external input, but does have a very compact and portable body, and a note sequencer. The sequencer can be used together with the arpeggiator for extra-long evolving melodies, and also has 4 parameter automation tracks – with all this data stored per preset.
The key feature of the range-topping prologue, aside from its incredibly pleasant-to-play keybed and sleek all-metal controls, is the fact that each of its presets can be constructed out of two completely separate, split, or layered patches – meaning you can load two oscillator plugins at the same time.
Developers, developers, developers
How easy is it to develop your own Korg plugin?
First of all, I can tell you that running my own algorithms on a hardware synth was something I had dreamed of for years. Apart from a very unlikely collaboration with a manufacturer, or digging deep into someone’s rare open-source firmware, I figured the chances of actually doing that were zero.
Luckily, Korg has made it so much easier for me and for you that you should almost feel guilty for not giving coding your own little plug-in a go. Allow me to give you a first-person example of what it took to get started.
Korg’s logue SDK is a collection of source code files and a toolchain that runs via the command line in the terminal app. For each type of plug-in, Korg provides a sample – there’s a simple sine oscillator, a delay, a filter, and so on – and the best way to start is to modify one of them slightly.
You don’t need to do much. For example, make the sine oscillator produce a mix of two sines, one running an octave above the other. You’d simply multiply the second sin() function’s argument by 2 and add its output to the first – that’s it. That’s exactly what I did, and I was hooked instantly.
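The math really is that small. Here it is sketched in pure Python – just to show the idea, since the actual SDK samples are C, and the 0.5 gain is my own addition to keep the summed output in range:

```python
import math

def osc_sample(phase):
    """One output sample: a fundamental plus a sine one octave up.

    `phase` runs from 0.0 to 1.0 over a single cycle; the 0.5 gain
    keeps the summed output inside [-1.0, 1.0].
    """
    fundamental = math.sin(2 * math.pi * phase)
    octave_up = math.sin(2 * math.pi * phase * 2)  # argument * 2 = one octave up
    return 0.5 * (fundamental + octave_up)

# Render one cycle as a 64-sample wavetable.
cycle = [osc_sample(i / 64) for i in range(64)]
print(all(abs(s) <= 1.0 for s in cycle))
```

Doubling the argument doubles the frequency – that’s the whole “octave above” trick, and it’s the kind of one-line change the SDK samples invite.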
Now you build the plugin using the “make” command, and install the resulting file onto whichever of the synthesizers in the family you have. You do that via its “sound librarian” companion app, into which you simply drag and drop your plugin while the synth is connected via USB.
All this said, I hope this has changed how you look at Korg’s plugin-capable synthesizer architecture. Because, and I am really confident when I say this, Korg did go and change the whole industry with it.
The dream of a keyboard with expansive expression, not just organ-style key-plunking, now sees a new integrated instrument. And the maker of the Haken Continuum is involved.
Expressive E, who had previously made single three-dimensional touch controllers, partnered with Haken Audio to make this full keyboard. Each key has three-dimensional control, in a mechanical design they’re calling Augmented Keyboard Action. You can strum, add vibrato, play more detailed legato, or add layered notes – all this good stuff, minus having to just play keys and twist knobs or turn wheels.
We’ve seen three-dimensional keys before, at least as limited-run (or one-off) inventions. And we’ve seen various touch-style keyboards (like those from ROLI), and pads with multi-axis input (like those from Roger Linn and Polyend).
What we haven’t seen, though, is a mass-produced three-dimensional mechanical keyboard (that is, with individually articulated keys). And we haven’t seen very many instruments with integrated sound engines. The Osmose is both.
Haken’s role here was to contribute their powerful EaganMatrix Sound Engine, which is already designed to be integrated with hardware and is built around three-dimensional expressive control as input. This is the same engine previously found on the Continuum Fingerboard and ContinuuMini. You get physical modeling, additive, subtractive, FM, virtual analog, granular and spectral synthesis models, for various acoustic and electronic sounds, plus loads of presets. (There was a reason I was complaining lately that Roland needs to move its sound engine forward.)
The difference is, if you didn’t much like the undifferentiated ribbon of the Continuum, now you get something with keys on it. So for keyboardists and pianists, you don’t lose the investment of learning to play your instrument – or centuries of music composition.
This custom engineering costs money – the Osmose will be US$/€1799. (Funny, it was only a few years ago when that counted as “mid-range.”) But they’ve found a novel solution to ramping up production. Early bird buyers reserving before December 31, 2019 will get a massive 40% discount, so it costs only USD/EUR 1079. And you don’t have to put up all of that right away – they’re taking just $299 as a deposit. That’s more reasonable than the usual Kickstarter deal.
Specs on this instrument:
49 full-sized keys
MIDI controller, with MPE (MIDI Polyphonic Expression) and MPE+ support