Focusrite went public in 2014, but this week brings its first major acquisition – and it’s a big deal. Monitor maker ADAM from Berlin joins the UK’s Focusrite / Novation / Ampify.
Publicly traded companies and fast-growing business empires have a bit of a challenge in the music tech business – music creation is still specialized and places a high standard on quality. So ADAM is at least encouraging as a choice; the boutique maker is highly respected, and many studios swear by their monitors.
The ongoing question here is really growth, but of course revenue growth isn’t necessarily limited to downmarket tools with thin margins. ADAM’s strength is building an upmarket, boutique product for music makers. And while the studio in the traditional sense has been in decline, “studio” as in independent producers has potential. Just as Eurorack and boutique synths have proven in the electronic arena, that growing population does have a portion of the market who will pay a premium for perceived quality – just as every market has luxury.
That may seem obvious, but I’ve been surprised that so many conversations about growth in our industry are focused on the low end or beginners. The problem with that commodity end is that competition gets fierce. I find it especially strange, because by contrast, you wouldn’t expect the automotive industry to focus exclusively on cheap cars and first-time buyers. Auto is perfectly comfortable describing engines as things only engineers understand, and marketing that specialization. And they make products for specific, high-end customers (think Formula One racing). And that in turn drives interest across the market, because it strengthens the brand (think Mercedes-Benz Group). But I digress.
Maybe the greater ambition in this acquisition is the talk of the two companies working together. I think it’s fair to be skeptical any time there’s talk of that in acquisitions – the reality is often far tougher. It’s unclear for now what Focusrite Group imagine that collaboration to look like, on what products, or if they’ll be able to deliver. But here’s Focusrite’s CEO Tim Carroll on that topic:
“ADAM Audio is undeniably a leader in the field of electroacoustics. The A7Xs and S3s have become standards in recording spaces across the globe. Even so, I know the team have no interest in resting on their laurels. We need to ensure they receive all the support they require to continue raising the sonic bar. That our two companies are so aligned from a cultural perspective reassures me that, as we increasingly work together, great things will happen. With so much expertise between us in acoustics, sound reproduction, DSP, and control, the opportunities are abundant to refine recording and production workflows together.”
For the time being, ADAM Audio stay in Berlin and keep Christian Hellinger in charge. The 20-year-old company are known for their A7X and S3, and now cover a range of potential markets with the T, AX, and S Series.
I’d actually love to see the kind of collaboration described above – and I’m sure the Focusrite Group engineers would love a trip to Berlin. (Come visit, please!) But while it’s not emphasized in the press release, I imagine the immediate benefit to ADAM will be Focusrite’s international marketing operation, which looks increasingly global with LA and Hong Kong alongside the UK. ADAM Audio already spans the Asian manufacturing world (Dongguan), Berlin’s ongoing dominance in engineering, and then the ever-lucrative US market – Nashville.
And oh yeah – Focusrite is traded on the AIM market, London Stock Exchange. So I imagine some readers of this site just saw their stock go up. (No disclosure needed from me – I don’t own any shares; I’d say so if I did.)
Pioneer and Beatport this week announced new streaming offerings for DJs. And then lots of people kind of freaked out. Let’s see what’s actually going on, if any of it is useful to DJs and music lovers, and what we should or shouldn’t worry about.
Artists, labels, and DJs are understandably on edge about digital music subscriptions – and thoughtless DJing. Independent music makers tend not to see any useful revenue or fan acquisition from streaming. So the fear is that a move to the kinds of pricing on Spotify, Amazon, and Apple services would be devastating.
And, well – that’s totally right, you obviously should be afraid of those things if you’re making music. Forget even getting rich – if big services take over, just getting heard could become an expensive endeavor, a trend we’ve already begun to see.
So I talked to Beatport to get some clarity on what they’re doing. We’re fortunate now that the person doing artist and label relations for Beatport is Heiko Hoffmann, who has an enormous resume in the trenches of the German electronic underground, including some 17 years under his belt as editor of Groove, which has had about as much of a reputation as any German-language rag when it comes to credibility.
Beatport LINK: fifteen bucks a month, but aimed at beginners – 128k only. Use it for previews if you’re a serious Beatport user, recommend it to your friends bugging you about how they should start DJing, and otherwise don’t worry about it.
Beatport CLOUD: five bucks a month, gives you sync for your Beatport collection. Included in the other stuff here and – saves you losing your Beatport purchases and gives you previews. 128k only. Will work with Rekordbox in the fall, but you’ll want to pay extra for extra features (or stick with your existing download approach).
Beatport LINK PRO: the real news – but it’s not here yet. Works with Rekordbox, costs 40-60 bucks, but isn’t entirely unlimited. Won’t destroy music (uh, not saying something else won’t, but this won’t). The first sign of real streaming DJs – but the companies catering to serious DJs aren’t going to give away the farm the way Apple and Spotify have. In fact, if there’s any problem here, it’s that no one will buy this – but that’s Beatport’s problem, not yours (as it should be).
WeDJ streaming is for beginners, not Pioneer pros
This first point is probably the most important. Beatport (and SoundCloud) have each created a subscription offering that works exclusively with Pioneer’s WeDJ mobile DJ tool. That is, neither of these works with Rekordbox – not yet.
Just in case there’s any doubt, Pioneer has literally made the dominant product image a photo of some people DJing in their kitchen. So there you go: Rekordbox and CDJ and TORAIZ equals nightclub, WeDJ equals countertop next to a pan of fajitas.
So yeah, SoundCloud streaming is now in a DJ app. And Beatport is offering its catalog of tracks for US$14.99 a month for the beta, which is a pretty phenomenally low price – and one that would rightfully scare labels and artists.
But it’s important that this is in WeDJ as far as DJing. Pioneer aren’t planning on endangering their business ecosystem in Rekordbox, higher-end controllers, and standalone hardware like the CDJ. They’re trying to attract the beginners in the hopes that some of those people will expand the high-end market down the road.
By the same token, it’d be incredibly short-sighted if Beatport were to give up on customers paying a hundred bucks a month or so on downloads just to chase growth. Instead, Beatport will split its offerings into a consumer/beginner product (LINK for WeDJ) and two products for serious DJs (LINK Pro and Beatport CLOUD).
And there’s reason to believe that what disrupts the consumer/beginner side might not make ripples when it comes to pros – as we’ve been there already. Spotify is in Algoriddim’s djay. It’s actually a really solid product. But the djay user base doesn’t impact what people use in the clubs, where the CDJ (or sometimes Serato or TRAKTOR) reign supreme. So if streaming in DJ software were going to crash the download market, you could argue it would have happened already.
That’s still a precarious situation, so let’s break down the different Beatport options, both to see how they’ll impact music makers’ business – and whether they’re something you might want to use yourself.
Ce n’est pas un CDJ.
Beatport LINK – the beginner one
First, that consumer service – yeah, it’s fifteen bucks a month and includes the Beatport catalog. But it’s quality-limited and works only in the WeDJ app (and with the fairly toy-like new DDJ-200 controller, which I’ll look at separately).
Who’s it for? “The Beginner DJs that are just starting out will have millions of tracks to practice and play with,” says Heiko. “Previously, a lot of this market would have been lost to piracy. The bit rate is 128kbs AAC and is not meant for public performance.”
But those of us who are serious Beatport users might want to mess around with it, too – it’s a place you can audition new tracks for a fairly low monthly fee. “It’s like having a record shop in your home,” says Heiko.
Just don’t think Beatport are making this their new subscription offering. If you think fifteen bucks a month for everything Beatport is a terrible business idea, don’t worry – Beatport agree. “This is the first of our Beatport LINK products,” says Heiko. “This is not a ‘Spotify for dance music.’ It’s a streaming service for DJs and makes Beatport’s extensive electronic music catalog available to stream audio into the WeDJ app.” And yeah, Beatport want more money for that than Spotify does, which is good – because you want more money charged for that as a producer or label. But before we get to that, let’s talk about the locker, the other thing available now:
WeDJ – a mobile gateway drug for DJs, or so Pioneer hopes. (NI and Algoriddim did it first; let’s see who does it better.)
Beatport CLOUD – the locker/sync one
Okay, so streaming may be destroying music but … you’ve probably still sometimes wanted to have access to digital downloads you’ve bought without having to worry about hard drive management or drive and laptop failures. And there’s the “locker” concept.
Some folks will remember that Beatport bought the major “locker” service for digital music – when it acquired Pulselocker. [link to our friends at DJ TechTools]
Beatport CLOUD is the sync/locker making a comeback, for a €/$4.99 monthly fee with no obligation or contract. It’s also included free in LINK – so for me, for instance, since I hate promos and like to dig for my own music even as press and DJ, I’m seriously considering the fifteen bucks to get full streaming previews, mixing in WeDJ, and CLOUD.
There are some other features here, too:
Re-download anything, unlimited. I heard from a friend – let’s call him Pietro Kerning – that maybe a stupid amount of music he’d (uh, or “she’d”) bought on Beatport was now scattered across a random assortment of hard drives. I would never do such a thing, because I organize everything immaculately in all aspects of my life in a manner becoming a true professional, but now this “friend” will easily be able to grab music anywhere in the event of that last-minute DJ gig.
By the same token you can:
Filter all your existing music in a cloud library. Not that I need to, perfectly organized individual, but you slobs need this, of course.
Needle-drop full previews. Hear 120 seconds from anywhere in a track – for better informed purchases. (Frankly, this makes me calmer as a label owner, even – I would totally rather you hear more of our music.)
There’s some obvious bad news here – this only works with music purchased on Beatport. You can’t upload music the way some sync/locker services have worked in the past. But given the current legal landscape, I think if you want that, you should set up your own backup server.
What I like about this, at least, is that this store isn’t losing stuff you’ve bought from them. I think other download sites should consider something similar. (Bandcamp does a nice job in this respect – and of course it’s the store I use the most when not using Beatport.)
The new Beatport cloud.
Beatport LINK Pro – what’s coming
There are very few cases where someone says, “hey, good news – this will be expensive.” But music right now is a special case. And it’s good news that Beatport is launching a more expensive service.
For labels and artists, it means a serious chance to stay alive. (I mean, even for a label doing a tiny amount of download sales, this can mean that little bit of cash to pay the mastering engineer and the person who did the design for the cover, or to host a showcase in your local club.)
For serious users using that service, it means a higher quality way of getting music than other subscription services – and that you support the people who make the music you love, so they keep using it.
Or, at least, that’s the hope.
What Beatport is offering at the “pro” tiers does more and costs more. Just like Pioneer doesn’t want you to stop buying CDJs just because they have a cheap controller and app, Beatport doesn’t want you to stop spending money for music just because they have a subscription for that controller and app. Heiko explains:
With the upcoming Pioneer rekordbox integration, Beatport will roll out two new plans – Beatport LINK Pro and Beatport LINK Pro+ – with an offline locker and 256kbps AAC audio quality (which is equivalent to 320kbps MP3, but you’re the expert here). This will be club ready, but will be aimed at DJs who take their laptops to clubs, for now. They will cost €39,99/month and €59,99/month depending on how many tracks you can put in the offline locker (50 and 100 respectively).
You’ll get streaming inside Rekordbox with the basic LINK, too – but only at 128k. So it’ll work for previewing and trying out mixes, but the idea is you’ll still pay more for higher quality. (And of course that also still means paying more to work with CDJs, which is also a big deal.)
And yeah, Beatport agree with me. “We think streaming for professional DJ use should be priced higher,” says Heiko. “And we also need to be sure that this is not biting into the indie labels and artists (and therefore also Beatport’s own) revenues,” he says.
What Heiko doesn’t say is whether this could increase spending – but I think it actually could. Looking at my own purchase habits and talking to others, a lot of times you splurge $100 ahead of a big gig, but then lapse for a few months. A subscription fee might actually encourage you to spend more and keep your catalog up to date gig to gig.
It’s also fair to hope this could be good for under-the-radar labels and artists even relative to the status quo. If serious DJs are locked into subscription plans, they might well take a chance on lesser known labels and artists since they’re already paying. I don’t want to be overly optimistic, though – a lot of this will be down to how Beatport handles its editorial offerings and UX on the site as this subscription grows. That means it’s good someone like Heiko is handling relations, though, as I expect he’ll be hearing from us.
Really, one very plausible scenario is that streaming DJing doesn’t catch on initially because it’s more expensive – and people in the DJ world may stick to downloads. A lot of that in turn depends on things like how 5G rolls out worldwide (which right now involves a major battle between the US government and Chinese hardware vendor Huawei, among other things), plus how Pioneer deals with a “Streaming CDJ.”
The point is, you shouldn’t have to worry about any of that. And there’s no rush – smart companies like Beatport will charge sustainable amounts of money for subscriptions and move slowly. The thing to be afraid of is if Apple or Spotify rush out a DJ product and, like, destroy independent music. If they try it, we should fight back.
Will labels and artists benefit?
If it sounds like I’m trying to be a cheerleader for Beatport, I’m really not. If you look at the top charts in genres, a lot of Beatport is, frankly, dreck – even with great editorial teams trying to guide consumers to good stuff. And centralization in general has a poor track record when it comes to underground music.
No, what I am biased toward is products that are real, shipping, and based on serious economics. So much as I’m interested in radical ideas for decentralizing music distribution, I think those services have yet to prove their feasibility.
And I think it’s fair to give Beatport some credit for being a business that’s real, based on actual revenue that’s shared between labels and artists. It may mean little to your speedcore goth neo-Baroque label (BLACK HYPERACID LEIPZIG INDUSTRIES, obviously – please let’s make that). But Beatport really is a cornerstone for a lot of the people making dance music now, on a unique scale.
The vision for LINK seems to be solid when it comes to revenue. Heiko again:
LINK will provide an additional revenue source to the labels and artists. The people who are buying downloads on Beatport are doing so because they want to DJ/perform with them. LINK is not there to replace that.
But I think for the reason I’ve already repeated – that the “serious” and “amateur”/wedding/beginner DJ gulf is real and not just a thing snobs talk about – LINK and WeDJ probably won’t disrupt label business, even that much to the positive. Look ahead to Rekordbox integration and the higher tiers. And yeah, I’m happy to spend the money, because I never get tired of listening to music – really.
And what if you don’t like this? Talk to your label and distributor. And really, you should be doing that anyway. Heiko explains:
Unlike other DSP’s, Beatport LINK has been conceived and developed in close cooperation with the labels and distributors on Beatport. Over the past year, new contracts were signed and all music used for LINK has been licensed by the right holders. However, if labels whose distributors have signed the new contract don’t want their catalog to be available for LINK they can opt out. But again: LINK is meant to provide an additional revenue source to the labels and artists.
Have a good weekend, and let us know if you have questions or comments. I’ll be looking at this for sure, as I think there isn’t enough perspective coming from serious producers who care about the details of technology.
Once upon a time, Propellerhead ran an ad showing a bunch of hardware synths in a trash bin to make a point. This time, we get the opposite – a KORG Polysix for Reason running back in hardware.
By now, these arguments about analog versus digital, software versus hardware are all surely irrelevant to music making. But recent developments go one step further: they produce an environment in which inventors and developers no longer have to care. The vision is, make your cool effects and synthesis code, then freely choose to run them inside a host (like Reason), inside hardware, or even on the Web.
Propellerhead showed me some of these possibilities of their Rack Extension technology when I visited them this winter and talked to their developers.
You can actually try the Web side of this right now – Europa for Reason runs in a browser. It’s not just a simulation or a demo; it’s the complete Rack Extension. That clearly offers a new take on try-before-you-buy, and opens new possibilities in teaching and documentation – all without threatening the sales model for the software:
A browser may be a strange place to experience the possibility of Rack Extensions running in hardware, but it’s actually the same idea. ELK MusicOS from MINDMusicLabs allows the same tech to run on a Linux-based operating system, on any hardware you want. So if you want self-contained instruments with knobs and faders and buttons – and no distracting screens or awkward keyboards – you can do it.
It’s not so much post-PC as it is more PC than your PC. Computers should be capable of ultra-low latency, reliable operation, even running general purpose systems. The problem is, musicians aren’t the ones calling the shots.
MusicOS can cut latency below 1ms round-trip, runs on single Intel and ARM CPUs, and has official support for VST and Rack Extensions – plus full support for connectivity (USB, WiFi, Bluetooth, 4G mobile data, and MIDI).
What was cool at Superbooth was seeing some recognizable hardware prototypes using the technology. Just before the show, we saw a VST plug-in from Steinberg; for the Rack Extension side of things, you get a Eurorack module version of KORG’s Polysix, using their own Component Modeling Technology. (So it is a software model, but here with hands-on control and modular connectivity.)
For now, it’s just a prototype. But Rack Extension support, like VST, is officially part of MusicOS. Now it’s just up to hardware makers to take the plunge. Based on interest we saw from CDM readers and heard around the show, there is serious market potential here.
In other words, this could be the sign of things to come. ELK’s tech works in such a way that more or less the same code can target custom hardware as desktop software. And compatible systems on a chip start around ten bucks – meaning this could be an effective platform for a lot of people. (I’m not clear on how much licensing costs; ELK ask interested developers to ‘get in touch,’ so it may be negotiated case by case.)
YouTube is elevating new voices to prominence in music technology as in other fields. But the platform’s esoteric rules are also ripe for abuse – as one YouTube host claims.
The story begins with a product: the Unison MIDI Chord Pack. This US$67 pack is already, on its surface, a bit strange. Understandably, users without musical training may like the idea of drag-and-drop chords and harmony – nothing wrong with that. But the actual product appears to be just a set of folders full of MIDI files … of, like, chords. Not real presets, but just raw MIDI chords. They even demo the product in Ableton Live, which already contains built-in chord and arpeggiator tools.
You can watch the demo video on their product page – at first, I couldn’t quite believe my eyes. They claim that this will help you to create chords “with the right notes, in the right order” without theory background – except most of the drag-and-drop material is made up of root position triads, labeled via terminology you’d need some theory to even read.
It’d be a little bit like someone selling you a Build Your Own House Construction Set that was made up of a bag of nails… and the nails were just ones they’d found lying on the ground. Maybe I’m missing something, but I definitely can’t figure out this product from their documentation.
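To underline just how thin the product is: a root-position triad is nothing but a fixed interval pattern stacked on a root note, so “the right notes, in the right order” amounts to a few lines of arithmetic. Here’s a minimal sketch in Python – the interval patterns and the MIDI convention (note 60 = C4) are standard, but the function itself is purely illustrative, not anything from the pack:

```python
# Root-position triads as MIDI note numbers - a few lines of arithmetic.
# Interval patterns (in semitones above the root) for common triad qualities:
TRIADS = {
    "maj": (0, 4, 7),
    "min": (0, 3, 7),
    "dim": (0, 3, 6),
    "aug": (0, 4, 8),
}

# Chromatic scale, so a note name maps to an offset within the octave.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def triad(root_name, quality, octave=4):
    """Return the MIDI note numbers of a root-position triad.

    MIDI note 60 is C4, so the root in a given octave is
    12 * (octave + 1) plus the chromatic index of the root name.
    """
    root = 12 * (octave + 1) + NOTE_NAMES.index(root_name)
    return [root + interval for interval in TRIADS[quality]]

print(triad("C", "maj"))            # C major: [60, 64, 67]
print(triad("A", "min", octave=3))  # A minor: [57, 60, 64]
```

Write those three numbers into a MIDI file and you have, as far as I can tell, one item in a $67 folder.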
Ave Mcree aka Traptendo, a well-known YouTube host, decided to take on the developers. Calling the product a “scam,” he says he pointed to other, free sources for the same MIDI content – meaning that, as it wasn’t actually original, at best the Unison product amounts to plagiarism.
As if it weren’t already strange enough that these developers were selling MIDI files of chords, they then responded to Ave Mcree’s video by filing a copyright claim. At this point, our story is picked up by Tim Webb at the excellent Discchord blog, who chose a nice, succinct headline:
A video about Unison Audio copyright striking my “Unison Audio Chord MIDI Pack Scam” video! This is a channel strike which is affects my monetization rights and could get my channel deleted. I don’t care if that happens because I’m not going to stand for people hustling you. It’s sad that YouTube allows shills and dishonest companies to strike honest reviewers. It’s censorship at it finest! YouTube as a company has lost all of it’s charm when it stop caring about the community on here. Do I like doing videos like these? No, but it’s necessary when people are using their influence for the wrong things. I’m not knocking their hustle by NO MEANS, but offer a product that is 100% YOURS!!!!!
What makes this story so disturbing: not only is YouTube’s lax structure vulnerable to abuse, it seems to actively encourage scammers.
The copyright claim appears to be based on the pack content included for demonstration purposes in his video. While I’m not a lawyer, this should fall dead center under the doctrine of fair use – as well as the royalty-free license provided by the developers themselves.
Here’s where YouTube’s scale and automation, though, collide with the intricacies of copyright law requirements (mainly in the USA, but possibly soon impacted by changes in the European Union). It’s easy to file a copyright claim, but hard to get videos reinstated once that claim is filed.
As a result: there’s almost nothing stopping someone from filing a fraudulent copyright claim just because they don’t like your video. In this case, Unison can simply use a made-up copyright claim as a tool to kill a video they didn’t like.
You can read up on this world of hurt on Google’s own site:
After all the recent fears about the EU and filtering, note that automated filtering doesn’t result in a strike – strikes require an explicit request. The problem is, creators have little recourse once that strike is processed. They can contact whoever made the complaint and get them to reverse it – which doesn’t work here, if Unison’s whole goal was removing the video. They can wait 90 days – an eternity in Internet time. Or they can file a “counter notification” – but even this is slow:
After we process your counter notification by forwarding it to the claimant, the claimant has 10 business days to provide us with evidence that they have initiated a court action to keep the content down. This time period is a requirement of copyright law, so please be patient.
It was only a matter of time before music and music tech encountered the problems with this system, as YouTube grows. Other online media – including CDM – are subject to liability for copyright and libel, as we should be. But legal systems are also set up to prevent frivolous claims, or attempts to use these rules simply to gag your critics. That’s not the case with YouTube; Google has an incentive to protect itself more than its creators, and it’s clear the system they’ve set up has inadequate protections against abuse.
What kind of abuse?
Fuck Jerry, the Instagram “influencer” agency that ripped off memes and helped build the ill-fated Fyre Festival, used copyright strikes to remove a video critical of its operation.
And it gets worse from there: the system can result in outright extortion, with Google proving unresponsive to complaints. The Verge reported on this phenomenon earlier this year and, while Google claimed to be working on the problem, observed that even major channels needed their woes to go viral before getting a response from the company:
This isn’t the only problem on YouTube’s platform for music and music technology. While the service is promoting new personalities, disclosure around their relationships with sponsors is often opaque. Traptendo also observes that videos touting various tutorials on working with harmony may be sponsored by Unison Audio, with little or no acknowledgement of that relationship.
That same complaint has been leveled at CDM and me – not to mention, okay, all the print magazines I’ve ever written for. But we at least have to answer for our credibility, or lose you as readers. (And sometimes losing you as readers is exactly what happens.) YouTube’s automated algorithms, by contrast, mean videos that simply mention the right keywords or appeal to particular machine heuristics can be promoted without any of that human judgment.
YouTube has unquestionable value, and to pretend otherwise would be foolhardy. Traptendo’s videos are great; I hope this one that was removed gets reinstated.
At the same time, we need to be aware of some of the downsides of this platform. And I’m concerned that we’ve become dependent on a single platform from a single vendor – which also means if anything goes wrong, creators are just as quickly de-platformed.
And regardless of what’s going on with YouTube, it’s also important for humans to spread the word – at least to say, friends don’t let friends spend their money on … chords.
I don’t believe all music “needs to be free,” but I would at least say triads are. Actually, wait… I could use some spare spending money. Excuse me, I’m going to slip into the night to go sell some all-interval tetrachords on the black market.
Here’s Traptendo showing you how to mess with harmony in FL Studio – minus the overpriced “pack”:
Oh yeah and – check out Ave’s site on the open Web, complete with full blog:
Machine learning is synthesizing death metal. It might make your death metal radio DJ nervous – but it could also mean music software works with timbre and time in new ways. That news – plus some comical abuse of neural networks for writing genre-specific lyrics in genres like country – next.
Okay, first, whether this makes you urgently want to hear machine learning death metal or it drives you into a rage, either way you’ll want the death metal stream. And yes, it’s a totally live stream – you know, generative style. Tune in, bot out:
First, it’s important to say that the whole point of this is you need data sets to train on. That is, machines aren’t composing music so much as creatively regurgitating existing samples based on fairly clever predictive mathematical models. In the case of the death metal example, this is SampleRNN – a recurrent neural network that uses sample material, repurposed from its originally intended application working with speech. (Check the original project, though it’s been forked for the results here.)
This is a big, big point, actually – if this sounds a lot like existing music, it’s partly because it is actually sampling that content. The particular death metal example is nice in that the creators have published an academic article. But they’re open about saying they actually intend “overfitting” – that is, little bits of samples are actually playing back. Machines aren’t learning to generate this content from scratch; they’re actually piecing together those samples in interesting ways.
That’s relevant on two levels. One, because once you understand that’s what’s happening, you’ll recognize that machines aren’t magically replacing humans. (This works well for death metal partly because, to non-connoisseurs of the genre, the way angry guitar riffs and undecipherable shouting are plugged together already sounds quite random.)
But two, the fact that sample content is being re-stitched in time like this means this could suggest a very different kind of future sampler. Instead of playing the same 3-second audio on repeat or loop, for instance, you might pour hours or days of singing bowls into your sampler and then adjust dials that recreated those sounds in more organic ways. It might make for new instruments and production software.
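To make the “re-stitching samples” idea concrete, here’s a toy sketch in Python – nothing to do with the actual SampleRNN code, and using a list of small integers as stand-in quantized audio. It predicts each next sample by looking up where the recent context occurred in the training data and copying what followed, so the output is literally re-assembled bits of the source material:

```python
import random

def stitch(source, context_len=3, out_len=40, seed=1):
    """Toy 'overfit' generator: the next sample is chosen from places in
    the training audio where the recent context already occurred. The
    output is re-stitched fragments of the source - the overfitting idea,
    minus the neural network."""
    rng = random.Random(seed)
    # Index: context tuple -> list of samples that followed it in training.
    table = {}
    for i in range(len(source) - context_len):
        ctx = tuple(source[i:i + context_len])
        table.setdefault(ctx, []).append(source[i + context_len])
    out = list(source[:context_len])  # seed output with the opening context
    while len(out) < out_len:
        nxt = table.get(tuple(out[-context_len:]))
        if not nxt:  # dead end: restart from a random training context
            ctx = rng.choice(list(table))
            out.extend(ctx)
            nxt = table[ctx]
        out.append(rng.choice(nxt))
    return out

# A short periodic 'recording' as training data:
print(stitch([0, 1, 2, 3, 2, 1] * 5))
```

A real sample-level model predicts a probability distribution over the next sample instead of doing a literal table lookup, but the overfit regime behaves a lot like this: familiar chunks, recombined.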
Here’s what the creators say:
Thus, we want the output to overfit short timescale patterns (timbres, instruments, singers, percussion) and underfit long timescale patterns (rhythms, riffs, sections, transitions, compositions) so that it sounds like a recording of the original musicians playing new musical compositions in their style.
The reason I’m belaboring this is simple. Big corporations like Spotify might use this sort of research to develop, well, crappy mediocre channels of background music that make vaguely coherent workout soundtracks or faux Brian Eno or something that sounded like Erik Satie got caught in an opium den and re-composed his piano repertoire in a half daze. And that would, well, sort of suck.
Alternatively, though, you could make something like a sampler or DAW more human and less conventionally predictable. You know, instead of applying a sample slice to a pad and then having the same snippet repeat every eighth note. (Guilty as charged, your honor.)
It should also be understood that, perversely, this may all be raising the value of music rather than lowering it. Given the amount of recorded music currently available, and given that it can already often be licensed or played for mere cents, the machine learning re-generation of these same genres actually requires more machine computation and more human intervention – because of the amount of human work required to even select datasets and set parameters and choose results.
DADABOTS, for their part, have made an entire channel of this stuff. The funny thing is, even when they’re training on The Beatles, what you get sounds like … well, some of the sort of experimental sound you might expect on your low-power college radio station. You know, in a good way – weird, digital drones, of exactly the sort we enjoy. I think there’s a layperson impression that these processes will magically improve. That may misunderstand the nature of the mathematics involved – on the contrary, it may be that these sorts of predictive models always produce these sorts of aesthetic results. (The same team use Markov Chains to generate track names for their Bandcamp label. Markov Chains work as well as they did a century ago; they didn’t just start working better.)
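Those Markov Chain track names are a nice reminder of how simple the underlying idea is. Here’s a minimal sketch in Python – the seed titles are invented stand-ins, not DADABOTS’ actual corpus:

```python
import random
from collections import defaultdict

# Hypothetical seed titles -- stand-ins for whatever corpus the label
# actually trains on.
titles = [
    "inorganic pulse of the void",
    "vomiting the solar winds",
    "ritual static descent",
    "descent into static worship",
]

# First-order word-level Markov chain: each word maps to the words
# observed to follow it anywhere in the corpus.
chain = defaultdict(list)
for title in titles:
    words = title.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def generate(start, max_words=6):
    """Walk the chain from a start word until a dead end or length cap."""
    out = [start]
    while len(out) < max_words and chain[out[-1]]:
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

print(generate("descent"))
```

It’s exactly the century-old technique: no training in the deep-learning sense, just counted transitions – which is why it works as well now as it ever did.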
I enjoy listening to The Beatles as though an alien civilization has had to digitally reconstruct their oeuvre from some fallout-shrouded, nuclear-singed remains of the number-one hits box set post apocalypse. (“Help! I need somebody! Help! The human race is dead!” You know, like that.)
As it moves to black metal and death metal, their Bandcamp label progresses in surreal coherence:
This album gets especially interesting, as you get weird rhythmic patterns in the samples. And there’s nothing saying this couldn’t in turn inspire new human efforts. (I once met Stewart Copeland, who talked about how surreal it was hearing human drummers learn to play the rhythms, unplugged, that he could only achieve with The Police using delay pedals.)
I’m really digging this one:
So, digital sample RNN processes mostly generate angry and angular experimental sounds – in a good way. That’s certainly true now, and could be true in the future.
What’s up in other genres?
SONGULARITY is making a pop album. They’re focusing on lyrics (and a very funny faux generated Coachella poster). In this case, though, the work is constrained to text – far easier to produce convincingly than sound. Even a Markov Chain can give you interesting or amusing results; with machine learning applied character-by-character to text, what you get is a hilarious sort of futuristic Mad Libs. (It’s also clear humans are cherry-picking the best results, so these are really humans working with the algorithms much as you might use chance operations in music or poetry.)
Whether or not this says anything about the future of machines, the dadaist results make for genuinely funny parody.
And that gives us results like You Can’t Take My Door:
Barbed whiskey good and whiskey straight.
These projects work because lyrics are already slightly surreal and nonsensical. Machines head directly into the uncanny valley instead of steering away from it, creating the element of surprise and exaggerated unreality that is fundamental to why we laugh at a lot of humor in the first place.
This also produced this Morrissey “Bored With This Desire To Get Ripped” – thanks to the ingenious idea of training the dataset not just with Morrissey lyrics, but also Amazon customer reviews of the P90X home workout DVD system. (Like I said – human genius wins, every time.)
Or there’s Dylan mixed with negative Yelp reviews from Manhattan:
And maybe in this limited sense, the machines are telling us something about how we learn. Part of the poetic flow is about drawing on all our wetware neural connections between everything we’ve heard before – as in the half-awake state of creative vibrations. That is, we follow our own predictive logic without doing the usual censoring that keeps our language rational. Thinking this way, it’s not that we would use machine learning to replace the lyricist. Rather, just as with chance operations in the past, we can use this surreal nonsense to free ourselves from the constraints that normal behavior requires.
We shouldn’t underestimate, though, the human intervention in using these lyrics. The neural nets are good at stringing together short bits of words, but the normal act of composition – deciding the larger-scale structure, choosing funnier bits over weaker ones, recognizing patterns – remains human.
Recurrent neural networks probably won’t be playing Coachella any time soon, but if you need a band name, they’re your go-to. More funny text mangling from the Botnik crew.
My guess is, once the hype dies down, these particular approaches will wind up joining the pantheon of drunken walks and Markov Chains and fractals and other pseudo-random or generative algorithmic techniques. I sincerely hope that we don’t wait for that to happen, but use the hype to seize the opportunity to better educate ourselves about the math underneath (or collaborate with mathematicians), and see these more hardware-intensive processes in the context of some of these older ideas.
If you want to know why there’s so much hype and popular interest, though, the human brain may itself hold the answer. We are all of us hard-wired to delight in patterns, which means arguably there’s nothing more human than being endlessly entertained by what these algorithms produce.
But you know, I’m a marathon runner in my sorry way.
The Behringer effect: TC-Helicon, once known for its high-end gear for vocalists, is now a badge on a line of products copying the layout and design of existing IK Multimedia products.
Chinese news site Midifan has the news. (Content in Chinese language only, but the pictures tell the story reasonably well.) Midifan is careful to use the term “tribute” in describing these copies/clones or whatever you might call them, after having earned the ire of Music Group over past reporting (links below). Images here are from Midifan.
TC-Helicon and parent TC Electronic had been independent companies until acquired by MUSIC Group (aka MUSIC Tribe Global Brands, more commonly known by the name of their CEO, and their cornerstone music brand, Behringer). That news item from 2015:
The new TC-Helicon line of mobile audio / mobile guitar products uses form factors, near-identical case designs and control layouts, and in some cases even identical panel labels and symbols to products from Italian manufacturer IK Multimedia.
The products in question, and the existing products they closely resemble:
TC-Helicon GO VOCAL (IK Multimedia iRig PRE)
TC-Helicon GO TWIN (IK Multimedia iRig DUO)
TC-Helicon GO ACOUSTIC (IK Multimedia iRig Acoustic Stage)
TC-Helicon GO GUITAR PRO (IK Multimedia iRig HD 2)
The products will get IK’s attention, but the branding could also be taken as a shot at Japan: Roland uses the GO brand for its mobile products – GO:MIXER, GO:PIANO, and GO:KEYS – a line both intended for use with smartphones and targeted at beginners.
Indicating they intend to try to protect that trademark, Roland filed for protection for its GO line last fall. (There’s one refusal listed, but non-final.)
Unlike some recent Behringer “tributes,” you couldn’t argue the GO Series is bringing back a decades-old analog design or making the category more affordable. Street prices on the TC line look roughly in line with pricing on the IK products that preceded them, and those prices are in turn under a hundred bucks. Major US retailers like Sweetwater, Guitar Center, and MusiciansFriend are already selling the TC products.
In terms of listed specs, the GO Series also appear to correspond exactly to the associated IK products – “clones” would be the most appropriate word. (The iRig Pre and GO Vocal, for instance, share 9V battery, jack and I/O configuration, placement, and control layout. The TC unit is slightly larger.)
This sort of precedent could harm the music products industry. Cloned products from any manufacturer could easily let competitors establish which categories are lucrative, and then save money twice over – spared the expense of designing the product itself, as well as ramping up production and determining what sells in the marketplace. In the past, what has stopped a scenario like this has been brand – a no-name clone could come along, but musicians are more likely to trust a brand they know and can easily find. But now that Music Group owns respected brands like TC-Helicon, and has distribution in the same channels, that barrier could disappear.
There are already implications for IK should this product catch on; more so, if the pattern is repeated. New product designs could be endangered, if another company can clone the design work, avoid all the risk of introducing a new product to the market, and then even slightly undercut price.
This is not to say TC doesn’t continue to represent new design – a new GO-branded mixer for audio use by streamers appears to be original (unless someone wants to dispute that). The other way this could go would be for Music Tribe to dilute the value of their own TC-Helicon brand, much as Behringer has become associated with low-cost copies in the minds of some consumers.
For now, IK’s existing market position means their products will show up first when you search, and have some customer reviews on the US sites I checked.
We’ll watch to see if there’s any legal action taken against Music Tribe over these products.
In other Behringer/Music Tribe legal news
Disputes between Music Group and other players in the industry continue to spill over into the courts.
Family-owned US maker Auratone is locked in trademark litigation with Music Tribe over the Auratone name, following the death of Auratone’s founder Jack Wilson. You can follow that case online, though there’s not yet a decision.
The Superior Court of the State of California in San Francisco County did rule that Music Group was obligated to pay over $100,000 in combined costs and legal fees to Dave Smith Instruments and DSI employee Anthony Karavidas. (California has robust anti-SLAPP protections, which are intended to stop litigation from gagging public speech.)
Some musicians see Islamic mysticism; some the metaphysics of Einstein. But whether spiritual, theoretical, or both, even one John Coltrane pitch wheel is full of musical inspiration.
One thing’s certain – if you want your approach to pitch to be as high-tech and experimental as your instruments, Coltrane’s sketchbook could easily keep you busy for a lifetime.
Unpacking the entire music-theoretical achievements of John Coltrane could fill tomes – even this one picture has inspired a wide range of different interpretations. But let’s boil it down just to have a place to start. At its core, the Coltrane diagram is a circle of fifths – a way of representing the twelve tones of equal temperament in a continuous circle, commonly used in Western music theory (jazz, popular, and classical alike). And any jazz player has some basic grasp of this and uses it in everything from soloing to practice scales and progressions.
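The circle structure itself is trivial to compute, which makes it a fun starting point for your own visualizations. A quick Python sketch – stepping seven semitones per move gives you the circle of fifths, while stepping two gives the whole tone rings that appear in Coltrane’s drawing:

```python
# The twelve equal-tempered pitch classes (flat spellings chosen
# arbitrarily; enharmonic equivalents are interchangeable here).
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

# Circle of fifths: repeatedly step up a perfect fifth (7 semitones, mod 12).
circle = [NOTES[(i * 7) % 12] for i in range(12)]
print(circle)  # C, G, D, A, E, B, Gb, Db, Ab, Eb, Bb, F

# Whole tone scale on C: step by 2 semitones; six notes before it repeats.
whole_tone_c = [NOTES[(i * 2) % 12] for i in range(6)]
# Starting a semitone higher gives the complementary whole tone scale on Db/B.
```

Because 7 and 12 are coprime, the fifths visit all twelve pitch classes before repeating – which is exactly why the circle closes.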
What makes Coltrane’s version interesting is the additional layers of annotation – both for what’s immediately revealing, and what’s potentially mysterious.
Sax player and blogger Roel Hollander pulled together a friendly analysis of what’s going on here. And while he himself is quick to point out he’s not an expert Coltrane scholar, he’s done a nice job of compiling some different interpretations.
Take it with some grains of salt, since there doesn’t seem to be a clear story as to why Coltrane even drew this, but there are some compelling details in this picture. The two-ring arrangement gives you two whole tone scales – one on C, and one on B – in such a way that you get intervals of fourths and fifths if you move diagonally between them.
Scanned image of the mystical Coltrane tone doodle.
Corey Mwamba’s simplified diagram.
That’s already a useful way of visualizing relations of keys in a whole tone arrangement, which could have various applications for soloing or harmonies. Where this gets more esoteric is the circled bits, which highlight some particular chromaticism – connected further by a pentagram highlighting common tones.
Even reading that crudely, this can be a way of imagining diminished/double diminished melodic possibilities. Maybe the most suggestive take, though, is deriving North Indian-style modes from the circled pitches. Whether that was Coltrane’s intention or not, this isn’t a bad way of seeing those modal relationships.
You can also see some tritone substitutions and plenty of chromaticism and the all-interval tetrachord if you like. Really, what makes this fun is that like any such visualization, you can warp it to whatever you find useful – despite all the references to the nature of the universe, the essence of music is that you’re really free to make these decisions as the mood strikes you.
I’m not sure this will help you listen to Giant Steps or A Love Supreme with any new ears, but I am sure there are some ideas about music visualization or circular pitch layouts to try out. (Yeah, I might have to go sketch that on an iPad this week.)
(Can’t find a credit for this music video, but it’s an official one – more loosely interpretive and aesthetic than functional, released for Untitled Original 11383. Maybe someone knows more… UMG’s Verve imprint put out the previously unreleased Both Directions At Once: The Lost Album last year.)
How might you extend what’s essentially a (very pretty) theory doodle to connecting Coltrane to General Relativity? Maybe it’s fairer to say that Coltrane’s approach to mentally freeing himself to find the inner meaning of the cosmos is connected, spiritually and creatively. Sax player and astrophysicist professor (nice combo) Stephon Alexander makes that cultural connection. I think it could be a template for imagining connections between music culture and physics, math, and cosmology.
Gamers’ interest may come and go, but artists are always exploring the potential of computer vision for expression. Microsoft this month has resurrected the Kinect, albeit in pricey, limited form. Let’s fit it to the family tree.
Time flies: musicians and electronic artists have now had access to readily available computer vision since the turn of this century. That initially looked like webcams, paired with libraries like the free OpenCV (still a viable option), and later repurposed gaming devices from Sony and Microsoft platforms.
And then came Kinect. Kinect was a darling of live visual projects and art installations, because of its relatively sophisticated skeletal tracking and various artist-friendly developer tools.
A full ten years ago, I was writing about the Microsoft project and its interactions, in its first iteration as the pre-release Project Natal. Xbox 360 support followed in 2010, Windows support in 2012 – while digital artists quickly hacked in Mac (and rudimentary Linux) support. Artists in music and digital media quickly followed.
For those of you just joining us, Kinect shines infrared light at a scene, and takes an infrared image (so it can work irrespective of other lighting) which it converts into a 3D depth map of the scene. From that depth image, Microsoft’s software can also track the skeleton image of one or two people, which lets you respond to the movement of bodies. Microsoft and partner PrimeSense weren’t the only to try this scheme, but they were the ones to ship the most units and attract the most developers.
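The depth-map-to-3D step is just pinhole-camera math. Here’s a rough numpy sketch – the intrinsics and the frame are made-up illustration values, not actual Kinect calibration:

```python
import numpy as np

# Back-project a depth image into 3D points with a pinhole camera model.
# fx, fy, cx, cy are invented intrinsics for illustration -- real hardware
# supplies its own calibration.
fx, fy = 365.0, 365.0
cx, cy = 256.0, 212.0

# Fake depth frame in meters, at a Kinect-One-like resolution.
depth = np.random.uniform(0.5, 4.0, size=(424, 512))

v, u = np.indices(depth.shape)             # per-pixel row/column coordinates
x = (u - cx) * depth / fx                  # back-project horizontally
y = (v - cy) * depth / fy                  # back-project vertically
points = np.stack([x, y, depth], axis=-1)  # (424, 512, 3) point cloud
```

Skeletal tracking then works on top of a cloud like this – the hard part is the model that decides which points belong to a body, not the geometry.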
We’re now on the third major revision of the camera hardware.
2010: Original Kinect for Xbox 360. The original. Proprietary connector with breakout to USB and power. These devices are far more common, as they were cheaper and shipped more widely. Despite the name, they do work with desktop systems, via open drivers.
2012: Kinect for Windows. Looks and works almost identically to Kinect for 360, with some minor differences (near mode).
Raw use of depth maps and the like for the above yielded countless music videos, and the skeletal tracking even more numerous and typically awkward “wave your hands around to play the music” examples.
Here’s me with a quick demo for the TED organization, preceded by some discussion of why I think gesture matters. It’s… slightly embarrassing, only in that it was produced on an extremely tight schedule, and I think the creative exploration of what I was saying about gesture just wasn’t ready yet. (Not only had I not quite caught up, but camera tech like what Microsoft is shipping this year is far better suited to the task than the original Kinect camera was.) But the points I’m making here have some fresh meaning for me now.
2013: Kinect for Xbox One. Here’s where things got more interesting – because of a major hardware upgrade, these cameras are far more effective at tracking and yield greater performance.
Active IR tracking in the dark
Wider field of vision
6 skeletons (people) instead of two
More tracking features, with additional joints and creepier features like heart rate and facial expression
1080p color camera
Faster performance/throughput (which was key to more expressive results)
Kinect One, the second camera (confusing!), definitely allowed more expressive applications. One high point for me was the simple but utterly effective work of Chris Milk and team, “The Treachery of Sanctuary.”
And then it ended. Microsoft unbundled the camera from Xbox One, meaning developers couldn’t count on gamers owning the hardware, and quietly discontinued the last camera at the end of October 2017.
Everything old is new again
I have mixed feelings – as I’m sure you do – about these cameras, even with the later results on Kinect One. For gaming, the devices were abandoned – by gamers, by developers, and by Microsoft as the company ditched the Xbox strategy. (Parallel work at Sony didn’t fare much better.)
It’s hard to keep up with consumer expectations. By implying “computer vision,” any such technology has to compete with your own brain – and your own brain is really, really, really good. “Sensors” and “computation” are all merged in organic harmony, allowing you to rapidly detect the tiniest nuance. You can read a poker player’s tell in an instant, while Kinect will lose the ability to recognize that your leg is attached to your body. Microsoft launched Project Natal talking about seeing a ball and kicking a ball, but… you can do that with a real ball, and you really can’t do that with a camera, so they quite literally got off on the wrong foot.
It’s not just gaming, either. On the art side, the very potential of these cameras to make the same demos over and over again – yet another magic mirror – might well be their downfall.
So why am I even bothering to write this?
Simple: the existing, state-of-the-art Kinect One camera is now available on the used market for well under a hundred bucks – for less than the cost of a mid-range computer mouse. Microsoft’s gaming business whims are your budget buy. The computers to process that data are faster and cheaper. And the software is more mature.
So while digital art has long been driven by novelty … who cares? Actual music and art making requires practice and maturity of both tools and artist. It takes time. So oddly while creative specialists were ahead of the curve on these sorts of devices, the same communities might well innovate in the lagging cycle of the same technology.
And oh yeah – the next generation looks very powerful.
Kinect: The Next Generation
Let’s get the bad news out of the way first: the new Kinect is both more expensive ($400) and less available (launching only in the US and China… in June). Ugh. And that continues Microsoft’s trend here of starting with general purpose hardware for mass audiences and working up to … wait, working up to increasingly expensive hardware for smaller and smaller groups of developers.
That is definitely backwards from how this is normally meant to work.
But the good news here is unexpected. Kinect was lost, and now is found.
The safe bet was that Microsoft would just abandon Kinect after the gaming failure. But to the company’s credit, they’ve pressed on, with some clear interest in letting developers, researchers, and artists decide what this thing is really for. Smart move: those folks often come up with inspiration that doesn’t fit the demands of the gaming industry.
So now Kinect is back, dubbed Azure Kinect – Microsoft is also hell-bent on turning Azure “cloud services” into a catch-all solution for all things, everywhere.
And the hardware looks … well, kind of amazing. It might be described as a first post-smartphone device. Say what? Well, now that smartphones have largely finalized their sensing capabilities, they’ve oddly left the arena open to other tech defining new areas.
For a really good write-up, you’ll want to read this great run-down:
Here are the highlights, though. Azure Kinect is the child of Kinect and HoloLens. It’s a VR-era sensor, but standalone – which is perfect for performance and art.
Fundamentally, the formula is the same – depth camera, conventional RGB camera, some microphones, additional sensors. But now you get more sensing capabilities and substantially beefed-up image processing.
1MP depth camera (not 640×480) – straight off of HoloLens 2, Microsoft’s augmented reality platform
Two modes: wide and narrow field of view
4K RGB camera (with standard USB camera operation)
7-microphone array
Gyroscope + accelerometer
And it connects both by USB-C (which can also be used for power) or as a standalone camera with “cloud connection.” (You know, I’m pretty sure that means it has a wifi radio, but oddly all the tech reporters who talked to Microsoft bought the “cloud” buzzword and no one says outright. I’ll double-check.)
Also, now Microsoft supports both Windows and Linux. (Ubuntu 18.04 + OpenGL v 4.4).
Downers: 30 fps operation, limited range.
Something something, hospitals or assembly lines Azure services, something that looks like an IBM / Cisco ad:
That in itself is interesting. Artists using the same thing as gamers sort of … didn’t work well. But artists using the same tool as an assembly line is something new.
And here’s the best part for live performance and interaction design – you can freely combine as many cameras as you want, and sync them without any weird tricks.
All in all, this looks like it might be the best networked camera, full stop, let alone best for tracking, depth sensing, and other applications. And Microsoft are planning special SDKs for the sensor, body tracking, vision, and speech.
Also, the fact that it doesn’t plug into an Xbox is a feature, not a bug to me – it means Microsoft are finally focusing on the more innovative, experimental uses of these cameras.
So don’t write off Kinect now. In fact, with Kinect One so cheap, it might be worth picking one up and trying Microsoft’s own SDK just for practice.
Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.
Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.
I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.
Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:
Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.
Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.
Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.
You may know Magenta from its involvement in the NSynth synthesizer —
NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, as they’re something a bit unique – and you can again play around in Ableton Live.
But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.
Recurrent Neural Networks are a kind of mathematical model that algorithmically loops over and over. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly means that you can predict sequences more and more effectively given a particular data set.
Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.
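To see how much the training set shapes the output, here’s a vastly simplified stand-in for those neural models – a note-level bigram predictor over two invented toy corpora. This is nothing like Magenta’s actual architecture, but the corpus-dependence works the same way:

```python
from collections import Counter, defaultdict

def train(melodies):
    """Count note-to-note transitions in a corpus of MIDI note lists."""
    model = defaultdict(Counter)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            model[a][b] += 1
    return model

def continue_melody(model, seed, length=8):
    """Extend a seed by repeatedly predicting the next note."""
    out = list(seed)
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        # Greedy: take the most frequent continuation. Real models
        # sample probabilistically instead.
        out.append(nxt.most_common(1)[0][0])
    return out

# Two toy "training sets" (invented): the same seed continues differently.
pentatonic = [[60, 62, 64, 67, 69, 72, 69, 67, 64, 62, 60]]
chromatic = [[60, 61, 62, 63, 64, 65, 64, 63, 62, 61, 60]]

print(continue_melody(train(pentatonic), [60]))
print(continue_melody(train(chromatic), [60]))
```

Same seed note, different corpus, different music – swap in bluegrass or gamelan data and the “same” model behaves like a different instrument.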
One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)
What’s in Magenta Studio
Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.
Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.
Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence they chose this new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and length in bars.
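Temperature in this sense is a standard sampling idea: divide the model’s scores by a temperature before turning them into probabilities. A numpy sketch, with made-up scores (this is not Magenta’s internal code):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from unnormalized scores, scaled by temperature.

    Low temperature sharpens the distribution (more predictable);
    high temperature flattens it (more surprising)."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]  # invented model scores for three candidate notes

cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(20)]
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(20)]
# Near zero, the top-scoring note dominates; cranked up, choices spread out.
```

That also explains why the relationship isn’t linear: the scores pass through an exponential, so the slider’s effect depends on how far apart the model’s scores were to begin with.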
The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is already set up with expectations about what a drum kit is, and with melodies around a 12-tone equal tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)
Here are your options:
Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.
Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.
Interpolate: Instead of one clip, use two clips and merge/morph between them.
Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.
Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
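For contrast with Groove’s learned model, here’s roughly what the crude percentage-based humanize of older software amounts to – uniform random jitter on a grid, with no notion of style or meter. Parameter names are invented for illustration:

```python
import random

def crude_humanize(events, timing_pct=0.1, velocity_pct=0.2, seed=None):
    """Old-school humanize: random jitter on time and velocity.

    events: list of (beat_time, velocity) tuples on a rigid grid.
    Unlike a learned groove model, the jitter is plain uniform noise --
    it has no idea which hits a real drummer would push or accent."""
    rnd = random.Random(seed)
    out = []
    for time, vel in events:
        t = time + rnd.uniform(-timing_pct, timing_pct)
        v = vel * (1 + rnd.uniform(-velocity_pct, velocity_pct))
        out.append((max(0.0, t), max(1, min(127, round(v)))))
    return out

grid = [(i * 0.5, 100) for i in range(8)]  # rigid eighth notes
print(crude_humanize(grid, seed=1))
```

Every hit gets the same statistical treatment, which is exactly why the results sound wobbly rather than played – and why a model trained on 15 hours of real drummers can do better.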
So, is it useful?
It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.
More to the point with something like Magenta is, do you really get musically useful results?
Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.
Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.
One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.
I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.
The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.
And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.
And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.
Where this could go next
There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with pure technical inquiry of the past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.
As Jesse Engel tells CDM:
We’re a research group (not a Google product group), which means that Magenta Studio is not static – a lot more interesting models are probably on the way.
Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.
Akai tipped their hand late last year that they were moving more toward live performance. With APC Live hardware leaked and in the wild, maybe it’s time to take another look. MPC software improvements might interest you with or without new hardware.
MPC 2.3 software dropped mid-November, and we missed talking about it at the time. But now that we know – unofficially, but with reasonable certainty – that Akai is releasing new hardware, this update appears in a new light. Background on that:
Whether or not the leaked APC Live hardware appeals to you, Akai are clearly moving their software in some new directions – which is relevant whatever hardware you choose. We don’t yet know if the MPC Live hardware will get access to the APC Live’s Matrix Mode, but it seems a reasonable bet some if not all of the APC Live features are bound for MPC Live, too.
And MPC 2.3 added major new live performance features, as well as significant internal synths, to that standalone package. Having that built in means you get it even without a computer.
New in 2.3:
A vintage-style, modeled analog polysynth
A bass synth
A tweakable, physically modeled electric piano
As with NI’s Maschine, each of those can be played from chords and scales with the pads mode. But Maschine requires a laptop, of course – MPC Live doesn’t.
A new arpeggiator, with four modes of operation, ranging from traditional vintage-style arp to more modern, advanced pattern playback
And there’s an “auto-sampler.”
That auto-sampler looks even more relevant when you see the APC Live. On MPC Live (and by extension APC Live), you can sample external synths, sample VST plug-ins, and even capture outboard CV patches.
Of course, this is a big deal for live performance. Plug-ins won’t work in standalone mode – and can be CPU hogs, anyway – so you can conveniently capture what you’re doing. Got some big, valuable vintage gear or a modular setup you don’t want to take to the gig? Same deal. And then this box gives you the thing modular instruments don’t do terribly well – saving and recalling settings – since you can record and restore those via the control voltage I/O (also found on that new APC Live). The auto-sampler is an all-in-one solution for making your performances more portable.
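Conceptually, an auto-sampler is just a loop: play a note (or set a CV value), record the result, move to the next. Here's a hypothetical sketch of planning that capture grid – the MIDI and audio I/O are left out, and the function name and defaults are made up, not Akai's:

```python
# Hypothetical sketch of the capture plan an auto-sampler walks through.
# Real implementations (presumably including Akai's) also handle release
# tails, loop points, and round-robins; this only builds the note grid.

def capture_plan(low_note=36, high_note=96, note_step=3, velocities=(64, 127)):
    """Return (midi_note, velocity) pairs, one recorded sample per zone.
    note_step=3 means one sample every minor third - a common trade-off
    between memory use and audible pitch-shifting artifacts."""
    return [(note, vel)
            for note in range(low_note, high_note + 1, note_step)
            for vel in velocities]

plan = capture_plan()
print(len(plan), "samples to record, starting with", plan[:2])
```

Multiply that by a few velocity layers per note and you can see why automating it beats doing it by hand – and why it makes a plug-in or a modular patch portable as a self-contained sample set.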
Full details of the 2.3 update – though I expect we’ve got even more new stuff around the corner:
With or without the APC Live, you get the picture. While Ableton and Native Instruments focus on studio production and leave you dependent on the computer, Akai’s angle is creating an integrated package you can play live with – like, onstage.
Sure enough, Akai have been picking up large acts for their MPC Live solution, too – John Mayer, Metallica, and Chvrches all got name-dropped. Of those, let’s check out Chvrches – 18 minutes in, the MPC Live gets showcased nicely:
It makes sense Akai would come to rely on its own software. When Akai and Novation released their first controllers for Ableton Live, Ableton had no hardware of their own, which changed with Push. But of course even the first APC invoked the legendary MPC legacy – and Akai has for years been working on bringing desktop software functionality to the MPC name. So, while some of us (me included) first suspected a standalone APC Live might mean a collaboration with Ableton, it does make more sense that it’s a fully independent Akai-made, MPC-style tool.
It also makes sense that this means, for now, more internal functionality. (The reference to “plugins” in the leaked APC Live manual probably means those internal instruments and effects.) That makes resource consumption more predictable, and avoids the licensing issues and the like necessary to run plug-ins on embedded Linux. This could change, by the way – Propellerhead’s Rack Extensions format is now easily portable to ARM processors, for example – but that’s another story. As for VST, AU, and AAX, portability to embedded hardware remains problematic.
The upshot, though, is that InMusic at least has a strategy for hardware that functions on its own – not just a couple of one-off MPC pieces, but integrated hardware/software development across a full product line. Native Instruments, Ableton, and others may be working on something similar that lets you untether from the computer – but InMusic is shipping now, and they aren’t.
Now the question is whether InMusic can capitalize on its MPC legacy and the affection for the MPC and APC brands and workflows – and get people to switch from other solutions.