Adobe drops QuickTime support, as visual artists look for a solution

The story: Apple leaves QuickTime security vulnerabilities unpatched on Windows; Adobe drops support in their product line. But that leaves creative people stuck – including live visual artists. And now they’re looking for solutions.

First, here’s the sequence of events – and if you’ve been watching the general mayhem in the US government, you’d be forgiven for missing what was happening with, like, QuickTime for Windows security.

First, from the US Department of Homeland Security (really, even if the headline looks more like Macworld):

Apple Ends Support for QuickTime for Windows; New Vulnerabilities Announced [US-CERT Alert (TA16-105A)]

And from a private security firm:

Urgent Call to Action: Uninstall QuickTime for Windows Today [TrendMicro]

To follow that advice, here’s how to perform the uninstallation on Windows (macOS users aren’t impacted):

Uninstall QuickTime 7 for Windows

That is, Apple had already dropped QuickTime for Windows development, including fixing security vulnerabilities – and this known one is bad enough to finally uninstall the software. It’s a Web-based vulnerability, so not particularly relevant to us making visuals, but significant nonetheless.

Developers should already have begun removing dependencies on QuickTime some time ago. But because of the variety of formats artists support, this starts to break some specific workflows. So here’s Adobe:

QuickTime on Windows [Adobe blog]

And before you get too smug, Mac users, you can expect some bumps in the road as cross-platform software generally tries to get out of QuickTime as a dependency. That could get messy, again, with so many formats out there. But let’s deal with Windows and Adobe software.

What works: uncompressed, DV, IMX, MPEG2, XDCAM, h264, JPEG, DNxHD, DNxHR, AVCI, and Cineform, plus “DV and Cineform in .mov wrappers.”

What breaks: Among others, Apple ProRes (the big one), plus “Animation (import and export), DNxHD/HR (export) as would workflows where growing QuickTime files are being used (although we strongly advise using MXF for this wherever possible).”

Moreover, Adobe is dropping QuickTime 7 codec support on all April releases of their full CC product line:

Dropped support for Quicktime 7 era formats and codecs [Adobe support]

Adobe advises customers to move to newer codecs, but that isn’t always an option. PC World have a tough appraisal of the situation (one I’m sure Adobe could live without):

Adobe on QuickTime: You’re up the creek without a paddle [PC World]

That’s by Gordon Mah Ung, the editor who has been around this business long enough not to mince words.

David Lublin of Vidvox writes CDM to let us know that in the short term, this also impacts Adobe software support for their high-performance, open Hap format (plus DXV and many other legacy codecs VJs tend to use). I also spoke with Mark Coniglio of Isadora, who said he was sad to see QuickTime support go, and that losing it complicates cross-platform file support; Isadora 3 will remove QuickTime dependencies and work with native file formats on the respective platforms.

Hey, Adobe: Get Hap!

A silver lining: this may be a chance to “shake the tree” and convince Adobe to add native support for Hap, a high-performance format that leverages your GPU to deliver snappy playback – ideal for live and interactive visual applications. And given that it’s an open source format unlike anything else available, that’d be great. There’s already a proposal online to make that (hap)pen:

https://adobe-video.uservoice.com/forums/911311-after-effects/suggestions/33853372-support-the-hap-codec

Hap was built in collaboration with talented developer Tom Butterworth. And Adobe has incorporated his code before: in 2016, Character Animator added support for Syphon, the inter-app visual texture pipeline on Mac:
https://www.adobe.com/products/character-animator/features.html

Work with Hap right now

For Hap support – and you really should be working with it – here are some immediate solutions.

Encoding to Hap from the command line using FFmpeg

Converting movies to the Hap video codec
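
If you want a quick taste of the FFmpeg route, here’s a minimal sketch that drives FFmpeg from Python – assuming an FFmpeg build that includes the Hap encoder (one compiled with Snappy support), with the filenames as placeholders:

    import subprocess

    def encode_to_hap(src, dst, variant="hap"):
        # variant: "hap" (standard), "hap_q" (higher quality),
        # or "hap_alpha" (includes an alpha channel)
        subprocess.run(
            ["ffmpeg", "-i", src,   # source movie
             "-c:v", "hap",         # FFmpeg's Hap encoder
             "-format", variant,    # which Hap flavor to write
             dst],                  # Hap typically lives in a .mov wrapper
            check=True)             # raise if ffmpeg exits with an error

    encode_to_hap("input.mov", "output-hap.mov")

The guides above cover the finer points, like chunk settings for multi-threaded playback.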

But I’d love to see Adobe support the format. It’s just a codec; there’s no real UX requirement, and the code is there and flexibly licensed.

Meanwhile, perhaps this is a nice illustration of how important it is that live visual art move to open, cross-platform de facto standards. That makes work and art future-proof and portable, and removes some overhead for developers making both free and commercial tools. And given that computers are based on many of the same architectures, it makes sense for the ways we store video and express graphical information to be portable and standardized.

For Vidvox’s part, there’s a nice summary on their page of what they support – and a lot of the formats they’re championing can be used by developers on Windows and Linux, not just macOS:

Open Source At VIDVOX

SoundCloud adds scheduled releases, in creator and podcaster focus

SoundCloud last summer rescued its future financially and promised a renewed focus on creators. The first small steps of that creator focus are starting to appear – including one small but useful addition.

Last week, SoundCloud rolled out scheduled releases. From the ‘edit’ page on a track, you can choose a date and time to publish content publicly. They also have suggested some possible use cases – no doubt heard from their wealth of users:

  • Podcasts
  • Mixes that release at the end of a live set
  • Scheduling across time zones on tour

You need an Unlimited Pro account to use the feature (but this is probably only of use to heavy users, anyway).

For most producers and labels, I suspect the major uses here will be timing publication of full streams with the release of an album, and scheduling those around exclusive streams for the press. (I’ll definitely be using that functionality myself.)

Functionality like this really matters. Much of the (understandable) frustration with SoundCloud came from users who were closely tied to specific features in apps and the Web platform, then had those features suddenly taken away. That might cut it for general consumer Web services, but serious users of creative products build elaborate workflows over time, and they’re going to remain loyal to providers who are mindful of their needs. In their push to growth, it’s clear SoundCloud didn’t entirely maintain that relationship. Now we’ll see if some attention to functionality can rebuild it.

In the meantime, users and content largely remain on SoundCloud, whatever hysterical headlines you’ve read. That is, for all the complaining, a lot of us rely on SoundCloud in the absence of any real competition. That’s not just because SoundCloud has a big user base of potential listeners – it’s also because for many users, SoundCloud’s feature set is more convenient. (If you’re focused narrowly on just embedding players, that may not be true, but if you want a player with a network behind it, it’s another story.)

In other words, whatever the service, features make a difference. I’d also like to see this kind of functionality on Bandcamp. Label management and navigation there to me is simply failing to improve, with some major oversights that can make operating it a pain. (In fact, you can’t schedule releases on Bandcamp. You can add a release date, but you have to manually publish. Maybe there’s a rationale for that particular instance, but… that’s just one example.)

SoundCloud first?

The other ploy from SoundCloud in March to win back serious creator users is the “#scfirst” campaign.

Users who tag their music with #SCFIRST have a shot at being included on a “First on SoundCloud” playlist, plus SoundCloud Twitter, Facebook, and newsletter promotion. SoundCloud also says they’ll “fast track” consideration for opening up Premier monetization, which shares revenue from streams.

They also launched the service with a marketing campaign built to showcase a selection of artists using the program. Those artists in turn credited SoundCloud – and interaction with fans there – as part of how they built up their reputations. (Most were selected from the USA, with a couple from western Europe, which tells you something about SoundCloud’s present market focus – or who’s likely paying for their subscriptions.)

On this one, it seems a bit too early to judge. Spotify and iTunes each do editorial promotion of some music, and exclusives are often part of the deal – but as with many other aspects of those services, individual producers have a very low probability of being picked up. (Bigger labels and distributors tend to fight hard to get those spots for themselves.) So here, there seems to be a chance – however narrow – for the independent musician and label to even out the odds. Then there are stores like Beatport that will also likely tie exclusives to particular promotion.

Juggling exclusives is itself a bit frustrating, though. Synchronizing availability is generally what artists and labels want to do to maximize exposure; getting tied down to exclusives essentially gambles away that control and wide audience in the hope that the exclusive partner will make up the difference. And in the case of “#scfirst,” it’s really a shot in the dark, since by definition you don’t know in advance whether you’ll be picked up.

Then again, “shot in the dark” pretty much sums up the process of releasing music in general.

As with the new feature focus, it’s really up to SoundCloud to demonstrate that this stuff will pay off for users. What I will say about SoundCloud the service is, they have the potential to be an ally of the people making the music and releasing it independently. That’s just not true of a lot of what’s in the online music space.

An online and mobile DAW called BandLab just acquired Cakewalk’s IP

Cakewalk may not be entirely dead. A developer of online and mobile music creation tools has snapped up the former PC DAW maker’s complete intellectual property.

As I wrote earlier this week, Gibson Brands, the guitar maker-turned-wannabe consumer electronics giant, is hard up for cash. So, while they discontinued operation of their Cakewalk division, apparently they had not found a buyer for one of pro audio’s biggest names.

That changes today. Singapore-based BandLab announced they’ve acquired the “complete” intellectual property and “certain assets” in a deal with Gibson. There’s no word on exactly what those assets are, and BandLab say they’re not making any additional announcements about the specifics – so we also don’t know how much cash Gibson got. If the Nashville Post numbers are correct, it seems this will make little difference to Gibson’s debts, but that’s another story.

So Cakewalk’s codebase, product line, trademarks – everything – goes to BandLab. BandLab has also confirmed to CDM that some former Cakewalk team members will join the new company. (That itself is big news.)

And there’s some relief here: all those thirty years of accumulated expertise in making music software may not go entirely to waste.

BandLab is a familiar idea. There’s a mobile app with multiple tracks, automatic pitch correction, guitar/bass/vocal effects, and cloud sync, plus a grid-style riff interface and a more traditional track layout. And there’s a free online tool with DAW features you can use to collaborate with other people over the Internet.

BandLab’s browser-based DAW.

Of the two, it’s the online DAW that looks most interesting, at least in that it’s more ambitious about incorporating desktop tools than some rivals. There’s built-in time stretching, automation, a guitar amp, and virtual instruments, for instance. I’m impressed on paper at least – I hadn’t heard of BandLab before today, to be honest, though it’s easy to lose track of various competing online solutions out there, since they tend to be somewhat similar.

And that raises the question – what’s the Cakewalk angle for BandLab?

At first blush, I presumed this would be limited to assets relevant to their existing mobile products, but it seems it’s more than that. It sounds as though you’ll see Cakewalk’s line of software – possibly including the flagship DAW SONAR, virtual instruments, and other tools – continue under the BandLab name. That’s been the case with other acquisitions of media creation software, if with mixed results in terms of development pace. From the press statement:

The teams at both Gibson and BandLab felt that Cakewalk’s products deserved a new home where development could continue. We are pleased to be supporting Cakewalk’s passionate community of creators to ensure they have access to the best possible features and music products under the BandLab Technologies banner.

[emphasis mine]

Then there’s the product that was just seeing the light of day right when Gibson shuttered Cakewalk operations, the one with the unintentionally ironic name:

https://momentum.cakewalk.com/

Momentum even looks quite a bit like BandLab’s mobile app. The mobile app and cloud sync solution runs on iOS and Android, with four-track recording, editing, looping and effects. And it cleverly captures ideas as recordings (via something with the dreadful name “Ideaspace”), then makes them available everywhere.

Momentum also has something that BandLab lacks – a VST/AU/AAX plug-in for Mac and Windows. Here’s the thing: it’s all fine and well to start talking about making music making easier, and reaching people with phone and browser apps. But even though big desktop DAWs don’t look terribly friendly, they’re still reasonably popular. Ableton Live alone has a user base the size of most major cities. Adding that plug-in could bridge Cakewalk’s product line and other desktop products with BandLab’s own mobile solutions.

And it’s not just the plug-in – Momentum also had an integrated cloud sync service and server-side infrastructure. (Plus don’t forget the ScratchPad iOS app. Well… maybe.)

BandLab’s mobile apps might be complemented either by Cakewalk’s mobile/cloud offerings or desktop products – or both.

So, we’ll see what BandLab are planning. Of course, the nostalgic part of me wants to see some of the soul of Cakewalk in what they do.

It seems from the way BandLab are handling the announcement that they share some of the same emotional attachment to Cakewalk that a lot of us do. For evidence, see what they’ve done to Cakewalk’s website, where there’s a headline reading:

“The news you’ve all been hoping for…”

Follow through to their own http://cakewalk.bandlab.com landing page for the acquisition, and there’s a charming ASCII art reading Cakewalk and a line reading “Cakewalk is dead. Long live Cakewalk!”

As noted above, some of the former Cakewalk team are joining the new effort. That inspires more confidence than just selling these DAWs as-is with minimal updates. BandLab for their part promise a product roadmap and other details soon.

http://cakewalk.bandlab.com

So yeah, Cakewalk? Dead?

MIDI evolves, adding more expressiveness and easier configuration

It’s been a long time coming, but MIDI now officially has added MPE and “capability inquiry,” opening up new expression and automatic configuration.

MIDI, of course, is the lingua franca of music gear. AKA “Musical Instrument Digital Interface,” the protocol was first developed in the early 80s and has been a common feature of computers, gear, and quite a few oddball applications ever since. And it’s a bit of a myth that MIDI itself hasn’t changed since its 80s iteration. Part of that impression is because MIDI has remained backwards compatible, meaning changes haven’t been disruptive. But admittedly, the other reason musicians think about MIDI this way is that the stuff they most use has indeed remained fairly unchanged.

Engineers and musicians alike have clamored for expanded resolution and functionality ever since MIDI’s adoption. The announcements made by the MIDI Manufacturers Association aren’t what has commonly been called “HD MIDI” – that is, you don’t get any big changes to the way data is transmitted. But the announcements are significant nonetheless, because they make official capabilities you can use in real musical applications, and they demonstrate the MMA can ratify changes (with big hardware maker partners onboard). Oh, and they’re really cool.

Standardizing on new expressive ways of playing

First, there’s MIDI Polyphonic Expression, aka MPE. The name says it all: it allows you to add additional expression to more than one note at a time. So, you’ve always been able to layer expression on a single note – via aftertouch, for instance – but now instead of just one note and one finger, an instrument can respond to multiple notes and multiple fingers independently. That means every fingertip on an instrument like the ROLI Seaboard can squish and bend, and a connected sound instrument can respond or a DAW can record the results.
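
Mechanically, MPE builds this from existing MIDI messages: each sounding note gets its own MIDI channel, so channel-wide messages like pitch bend and channel pressure become per-note. Here’s a rough sketch of the idea in Python with the mido library – the naive channel rotation is my simplification, not the full zone management the spec defines:

    import mido

    out = mido.open_output()  # default port; point it at an MPE-capable synth

    # In an MPE "lower zone," channel 1 is the master channel and notes
    # rotate across member channels 2-16 (zero-indexed here as 1-15).
    # (A real sender would first set up the zone with the MPE
    # Configuration Message; that's omitted for brevity.)
    member_channels = list(range(1, 16))

    for i, note in enumerate([60, 64, 67]):  # a C major chord
        ch = member_channels[i % len(member_channels)]  # one channel per note
        out.send(mido.Message('note_on', channel=ch, note=note, velocity=100))
        # Each note sits alone on its channel, so this bend moves only
        # that one note - the "polyphonic" part of MPE.
        out.send(mido.Message('pitchwheel', channel=ch, pitch=1024 * i))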

Hardware has found ways of hacking in this support, and plug-ins that require complex per-note information (think orchestral sound libraries and the like) have had their own mechanisms. But now there’s a single standard, and it’s part of MIDI.

MPE is exciting because it’s really playable, and it’s already got some forward momentum. Major DAWs like Logic and Cubase support it, as do synths like Native Instruments’ Reaktor and Moog’s Animoog. Hardware like the ROLI gear and Roger Linn’s Linnstrument send MPE, but there’s now even hardware receiving it, too, and translating to sound – even without a computer. (That’s not just weird keyboards, either – Madrona Labs’ Soundplane showed this could work with new instrument interfaces, too.)

Making MPE official should improve implementations already out there, and standardize inter-operability. And it means no more excuses for software that hasn’t picked it up – yeah, I’m looking at you, Ableton. Those developers could (reasonably) say they didn’t want to move forward until everyone agreed on a standard, to avoid implementing the thing twice. Well, now, it’s time.

More demos and product compatibility information are in the announcement below, though of course this also means we should soon do a fresh check-in on what MPE is and how to use it, especially with a lot of ROLI hardware out there these days.

MIDI Polyphonic Expression (MPE) Specification Adopted!

Making instruments self-configure and work together

MPE you might have heard of, but there’s a good chance you haven’t heard about the second announcement, “Capability Inquiry” or MIDI-CI. In some ways, though, MIDI-CI is the really important news here – both in that it’s the first time the MIDI protocol would work in a new way, and because it involves the Japanese manufacturers.

MIDI-CI does three things. Here are the official names, plus what each bit means:

1. Profile configuration – “Hey, here’s what I am!”. Profiles define in advance what a particular instrument does. Early demos included an “Analog Synth” and a “Drawbar Organ” draft. You already know channel 10 will give you drum sounds, and General MIDI drum maps will put a kick and a snare in a particular place, but you haven’t been able to easily control particular parameters without going through your rig and setting it up yourself.

2. Property exchange – save and recall. If configuration tells you what a device is and what it does, the “exchange” bit lets you store and recall settings. Last week, manufacturers showed gear from Yamaha, Roland, and Korg having their instrument settings saved and recalled from a DAW.

MMA say the manufacturers demonstrated “total recall.” Awesome.

3. Protocol negotiation – the future is coming. Actually, this is probably the most important bit. Profile configuration and property exchange we’ll need to see in action before we can judge their utility. But protocol negotiation is what will allow gear built now to negotiate next-generation protocols coming soon. That’s what has commonly been called “HD MIDI,” and what hopefully will bring greater data resolution and, ideally, time stamps. Those are features some have found in alternative protocols like Open Sound Control or in proprietary implementations, but which aren’t available in standard MIDI 1.0.

And this “negotiation” part is really important. A future protocol won’t break MIDI 1.0 compatibility. Gear built now with protocol negotiation in mind may be able to support the future protocol when it arrives.

As musicians, as hackers, as developers, we’re always focused on the here and now. But the protocol negotiation addition to MIDI 1.0 is an essential step between what we have now and what’s coming.
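
For the protocol-curious: MIDI-CI rides on ordinary Universal System Exclusive messages – sub-ID 0x0D is assigned to Capability Inquiry – which is exactly how it stays inside MIDI 1.0. Purely as an illustration, here’s a sketch of a CI Discovery message in Python with mido; the byte layout is paraphrased from memory of the published spec, so treat midi.org as the authority, not this post:

    import mido

    UNIVERSAL_NONREALTIME = 0x7E
    MIDI_CI = 0x0D    # Universal SysEx sub-ID#1 reserved for Capability Inquiry
    DISCOVERY = 0x70  # sub-ID#2: Discovery, i.e. "who's out there?"

    def ci_discovery(source_muid):
        # MUIDs are 28-bit IDs sent as four 7-bit bytes, least significant first.
        muid = [(source_muid >> (7 * n)) & 0x7F for n in range(4)]
        broadcast = [0x7F, 0x7F, 0x7F, 0x7F]  # "to whom it may concern"
        data = ([UNIVERSAL_NONREALTIME,
                 0x7F,                # device ID: the whole MIDI port
                 MIDI_CI, DISCOVERY,
                 0x01]                # CI message version/format - check the spec
                + muid + broadcast)
        # The real message continues with manufacturer ID, device family,
        # supported CI categories and maximum SysEx size - omitted here.
        return mido.Message('sysex', data=data)

    print(ci_discovery(0x0123456).hex())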

No gear left behind

For all the conservatism of musical instruments, it’s worth noting how different this is from the rest of electronics. Backwards compatibility is important for musical instruments, because a musical instrument never really becomes outmoded. (Hey, I spent long, happy evenings singing with some violas da gamba. Trust me on this.)

The MIDI-CI adoption process here, while it’s not the most exciting thing ever, also indicates more buy-in to the future of MIDI by the big Japanese manufacturers. And that finally means the AMEI is backing the MMA.

Say what?

While even many music nerds know only the MIDI Manufacturers Association, significant changes to MIDI require another organization called the Association of Musical Electronics Industries – AMEI. The latter is the trade group for Japan, and … well, those Japanese manufacturers make gear on a scale that a lot of the rest of the industry can’t even imagine. Keep in mind, while music nerds drool over the Eurorack modular explosion, a whole lot of the world is buying home pianos and metronomes and has no idea about the rest. Plus, you have to factor in not only a different scale and a more corporate culture, but the fact that a Japanese organization involves Japanese culture and language. Yes, there will be a gap between their interests and someone making clever Max/MSP patches back in the States and dreaming of MIDI working differently.

So MIDI-CI is exciting both because it suggests that music hardware will communicate better and interoperate more effectively, and because it promises to help the humans who make music do the same.

But here again is where the craft of music technology is really different from industries like digital graphics and video, or consumer electronics, or automobiles, or many other technologies. Decisions are made by a handful of people, very slowly, and then result in mass usage across a myriad of diverse cultural use cases around the world.

The good news is, it seems those decision makers are listening – and the language that underlies digital music is evolving in a way that could impact that daily musical usage.

And it’ll do so without breaking the MIDI we’ve been using since the early 80s.

Watch this space.

http://midi.org/

Accusonus explain how they’re using AI to make tools for musicians

First, there was DSP (digital signal processing). Now, there’s AI. But what does that mean? Let’s find out from the people developing it.

We spoke to Accusonus, the developers of loop unmixer/remixer Regroover, to try to better understand what artificial intelligence will do for music making – beyond just the buzzwords. It’s a topic they presented recently at the Audio Engineering Society conference, alongside some other developers exploring machine learning.

At a time when a lot of music software retreads existing ground, machine learning is a relatively fresh frontier. One important distinction to make: machine learning involves training the software in advance, then applying those algorithms on your computer. But that already opens up some new sound capabilities, as I wrote about in our preview of Regroover, and can change how you work as a producer.

And the timing is great, too, as we take on the topic of AI and art with CTM Festival and the 2018 edition of our MusicMakers Hacklab. (That call is still open!)

CDM spoke with Accusonus’ co-founders, Alex Tsilfidis (CEO) and Elias Kokkinis (CTO). Elias explains the story from a behind-the-scenes perspective – but in a way that I think remains accessible to us non-mathematicians!

Elias (left) and Alex (right). As Elias is the CTO, he filled us in on the technical inside track.

How do you wind up getting into machine learning in the first place? What led this team to that place; what research background do they have?

Elias: Alex and I started out our academic work with audio enhancement, combining DSP with the study of human hearing. Toward the end of our studies, we realized that the convergence of machine learning and signal processing was the way to actually solve problems in real life. After the release of drumatom, the team started growing, and we brought people on board who had diverse backgrounds, from audio effect design to image processing. For me, audio is hard because it’s one of the most interdisciplinary fields out there, and we believe a successful team must reflect that.

It seems like there’s been movement in audio software from what had been pure electrical engineering or signal processing to, additionally, understanding how machines learn. Has that shifted somehow?

I think of this more as a convergence than a “shift.” Electrical engineering (EE) and signal processing (SP) are always at the heart of what we do, but when combined with machine learning (ML), it can lead to powerful solutions. We are far from understanding how machines learn. What we can actually do today, is “teach” machines to perform specific tasks with very good accuracy and performance. In the case of audio, these tasks are always related to some underlying electrical engineering or signal processing concept. The convergence of these principles (EE, SP and ML) is what allows us to develop products that help people make music in new or better ways.

What does it mean when you can approach software with that background in machine learning? Does it change how you solve problems?

Machine learning is just another tool in our toolbox. It’s easy to get carried away, especially with all the hype surrounding it now, and use ML to solve any kind of problem, but sometimes it’s like using a bazooka to kill a mosquito. We approach our software products from various perspectives and use the best tools for the job.

What do we mean when we talk about machine learning? What is it, for someone who isn’t a researcher/developer?

The term “machine learning” describes a set of methods and principles engineers and scientists use to teach a computer to perform a specific task. An example would be identifying the music genre of a given song. Let’s say we’d like to know if a song we’re currently listening to is an EDM song or not. The “traditional” approach would be to create a set of rules that say EDM songs are in this BPM range and have that tonal balance, etc. Then we’d have to implement specific algorithms that detect a song’s BPM value, a song’s tonal balance, etc. Then we’d have to analyze the results according to the rules we specified and decide if the song is EDM or not. You can see how this gets time-consuming and complicated, even for relatively simple tasks. The machine learning approach is to show the computer thousands of EDM songs and thousands of songs from other genres, and train the computer to distinguish between EDM and other genres.
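
To make that concrete, here’s a toy version of the learned approach in Python with scikit-learn – not Accusonus code, and the two “features” are stand-ins for whatever you’d actually extract from the audio:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Pretend each song boils down to two features: [BPM, "tonal balance"].
    songs = np.array([[128, 0.8],    # EDM
                      [126, 0.9],    # EDM
                      [ 95, 0.3],    # other
                      [ 70, 0.2]])   # other
    labels = np.array([1, 1, 0, 0])  # 1 = EDM, 0 = other

    # Instead of hand-writing BPM/tonal-balance rules, let the model infer
    # the boundary from labeled examples (thousands of them, in practice).
    model = RandomForestClassifier(n_estimators=100).fit(songs, labels)
    print(model.predict([[130, 0.85]]))  # -> [1], i.e. "sounds like EDM"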

Computers can get very good at this sort of very specific task. But they don’t learn like humans do. Humans also learn by example, but don’t need thousands of examples. Sometimes a few or just one example can be enough. This is because humans can truly learn, reason and abstract information and create knowledge that helps them perform the same task in the future and also get better. If a computer could do this, it would be truly intelligent, and it would make sense to talk about Artificial Intelligence (A.I.), but we’re still far away from that. Ed.: lest the use of that term seem disingenuous, machine learning is still seen as a subset of AI. -PK

If a reader would like to read more on the subject, a great blog post by NVIDIA and a slightly more technical blog post by F. Chollet will shed more light on what machine learning actually is.

We talked a little bit on background about the math behind this. But in terms of what the effect of doing that number crunching is, how would you describe how the machine hears? What is it actually analyzing, in terms of rhythm, timbre?

I don’t think machines “hear,” at least not now, and not as we might think. I understand the need we all have to explain what’s going on and find some reference that makes sense, but what actually goes behind the scenes is more mundane. For now, there’s no way for a machine to understand what it’s listening to, and hence start hearing in the sense a human does.

Inside Accusonus products, we have to choose what part of the audio file/data to “feed” the machine. We might send an audio track’s rhythm or pitch, along with instructions on what to look for in that data. The data we send are “representations” and are limited by our understanding of, for instance, rhythm or pitch. For example, Regroover analyses the energy of the audio loop across time and frequency. It then tries to identify patterns that are musically meaningful and extract them as individual layers.
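
Accusonus haven’t published how Regroover does this. But the classic, openly documented technique in this family is non-negative matrix factorization (NMF) over a spectrogram – so purely as an illustration of “energy across time and frequency, split into layers,” and not a claim about Regroover’s internals, here’s a sketch with librosa:

    import numpy as np
    import librosa

    y, sr = librosa.load("loop.wav")  # any drum loop
    S = np.abs(librosa.stft(y))       # magnitude spectrogram: frequency x time

    # Factor the spectrogram into 4 spectral templates (W) and their
    # activations over time (H); each template/activation pair is one
    # candidate "layer" (kick-ish, snare-ish, hat-ish...).
    W, H = librosa.decompose.decompose(S, n_components=4)

    layer0 = np.outer(W[:, 0], H[0])  # energy of the first layer on its own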

Is all that analysis done in advance, or does it also learn as I use it?

Most of the time, the analysis is done in advance, or just when the audio files are loaded. But it is possible to have products that get better with time – i.e., “learn” as you use them. There are several technical challenges for our products to learn by using, including significant processing load and having to run inside old-school DAW and plug-in platforms that were primarily developed for more “traditional” applications. As plug-in creators, we are forced to constantly fight our way around obstacles, and this comes at a cost for the user.

What’s different about this versus another approach – what does this let me do that maybe I wasn’t able to do before?

Sampled loops and beats have been around for many years, and people have many ways to edit, slice, and repurpose them. Before Regroover, everything happened in one dimension: time. Now people can edit and reshape loops and beats in both time and frequency. They can also go beyond the traditional multi-band approach by using our tech to extract musical layers and original sounds. The possibilities for unique beat production and sound design are practically endless. A simple loop can be a starting point for many musical ideas.

How would you compare this to other tools on the market – those performing these kind of analyses or solving these problems? (How particular is what you’re doing?)

The most important thing to keep in mind when developing products that rely on advanced technologies and machine learning is what the user wants to achieve. We try to “hide” as much of the complexity as possible from the user and provide a familiar and intuitive user interface that allows them to focus on the music and not the science. Our single-knob noise and reverb removal plug-ins are very good examples of this. The number of parameters and options in the algorithms would be too confusing to expose to the end user, so we created a simple UI to deliver a quick result.

If you take something as simple as being able to re-pitch samples, each time there’s some new audio process, various uses and abuses follow. Is there a chance to make new kinds of sounds here? Do you expect people to also abuse this to come up with creative uses? (Or has that happened already?)

Users are always the best “hackers” of our products. They come up with really interesting applications that push the boundaries of what we originally had in mind. And that’s the beauty of developing products that expand the sound processing horizons for music. Regroover is the best example of this. Stavros Gasparatos has used Regroover in an installation where he split industrial recordings, routing the layers to six speakers inside a big venue. He tried to push the algorithm to create all kinds of crazy splits and extract inspiring layers. The effect was that in the middle of the room you could hear the whole sound, and when you approached one of the speakers, crazy things happened. We even had some users extract inspiring layers from washing machine recordings! I’m sure the CDM audience can think of even more uses and abuses!

Regroover gets used in Gasparatos’ expanded piano project:

Looking at the larger scene, do you think machine learning techniques and other analyses will expand what digital software can do in music? Does it mean we get away from just modeling analog components and things like that?

I believe machine learning can be the driving force for a much-needed paradigm shift in our industry. The computational resources available today, not only on our desktop computers but also in the cloud, are tremendous, and machine learning is a great way to utilize them to expand what software can do in music and audio. Essentially, the only limit is our imagination. And if we keep being haunted by the analog sounds of the past, we can never imagine the sound of the future. We hope Accusonus can play its part and change this.

Where do you fit into that larger scene? Obviously, your particular work here is proprietary – but then, what’s shared? Is there larger AI and machine learning knowledge (inside or outside music) that’s advancing? Do you see other music developers going this direction? (Well, starting with those you shared an AES panel on?)

I think we fit among the forward-thinking companies that try to bring this paradigm shift by actually solving problems and providing new ways of processing audio and creating music. Think of iZotope with their newest Neutron release, Adobe Audition’s Sound Remover, and Apple Logic’s Drummer. What we need to share between us (and we already do with some of those companies) is the vision of moving things forward, beyond the analog world, and our experiences on designing great products using machine learning (here’s our CEO’s keynote in a recent workshop for this).

Can you talk a little bit about your respective backgrounds in music – not just in software, but your experiences as a musician?

Elias: I started out as a drummer in my teens. I played with several bands during high school and as a university student. At the same time, I started getting into sound engineering, where my studies really helped. I ended up working a lot of gigs, from small venues to stadiums, doing everything from cabling and PA setup to mixing the show and monitors. During this time I got interested in signal processing and acoustics, and I focused my studies on those fields. Towards the end of university I spent a couple of years in a small recording studio, where I did some acoustic design for the control room and recorded and mixed local bands. After graduating I started working on my PhD thesis on microphone bleed reduction and general audio enhancement. Funnily enough, Alex was the one who built the first version of the studio, he was the supervisor of my undergraduate thesis, and we spent most of our PhDs working together in the same research group. It was almost meant to be that we would start Accusonus together!

Alex: I studied classical piano and music composition as a kid, and turned to synthesizers and electronic music later. As many students do, I formed a band with some friends, and that band happened to be one of the few abstract electronic/trip hop bands in Greece. We started making music around an old Atari computer and an early, MIDI-only version of Cubase that triggered some cheap synthesizers, and we recorded our first demo on a crappy 4-channel tape recorder in a friend’s bedroom. Fun days!

We then bought a PC and fancier equipment and started making our living writing soundtracks for theater and dance shows. During that period I practically lived as a professional musician/producer and had quit my studies. But after a couple of years, I realized that I was more and more fascinated by the technology side of music, so I returned to the university and focused on audio signal processing. After graduating from the Electrical and Computer Engineering Department, I studied acoustics in France and then started my PhD on de-reverberation and room acoustics in the same lab as Elias. We became friends, worked together as researchers for many years, and realized that we share the same vision of how we want to create innovative products to help everyone make great music! That’s why we founded Accusonus!

So much of software development is just modeling what analog circuits or acoustic instruments do. Is there a chance for software based on machine learning to sound different, to go in different directions?

Yes, I think machine learning can help us create new inspiring sounds and lead us to different directions. Google Magenta’s NSynth is a great example of this, I think. While still mostly a research prototype, it shows the new directions that can be opened by these new techniques.

Can you recommend some resources showing the larger picture with machine learning? Where might people find more on this larger topic?

https://openai.com/

Siraj Raval’s YouTube channel:

Google Magenta’s blog for audio/music applications https://magenta.tensorflow.org/blog/

Machine learning for artists https://ml4a.github.io/

Thanks, Accusonus! Readers, if you have more questions for the developers – or the machine learning field in general, in music industry developments and in art – do sound out. For more:

Regroover is the AI-powered loop unmixer, now with drag-and-drop clips

http://accusonus.com

Native Instruments got a huge chunk of investment to grow

Big industry news last week: Native Instruments, purveyors of Traktor, Maschine, Reaktor, and Komplete, got 50 million Euros. Let’s make sense of that.

NI apparently wanted a reveal here. With Amsterdam Dance Event looking more like Pioneer turf these days – that company is dominant with CDJs and mixers and now even turntables, and had its own sampler on hand – NI got the attention of DJs at the keynote.

But what does it mean that the Berlin-based company got 50 million Euros? Well, some points to consider:

50 million is a lot. This is a lot for a company in the musical instruments sector of the business. Our quiet little corner of stuff for electronic musicians has begun to see some action, it’s true. For instance, Focusrite PLC (parent of Novation) made an initial public offering in 2014, and ROLI saw an unprecedented $27 million Series B funding back in 2016.

But 50 million euros opens up the possibility of significant investment. (Despite all that cash, NI retains its private ownership.)

The money is coming from a firm linked to music and pop stars. Billboard wrote the best piece I’ve seen on this yet:

Native Instruments Raises $59 Million From EMH Partners

So who are EMH? Well, they’re led by a bunch of white German guys in matching blue suits, who look like the people at the front of the queue for first class on your Lufthansa flight, or an a cappella vocal quartet, or both.

But, apart from that, EMH are fairly interesting. You won’t gather much from their Website. (Example: they say they have “a special focus on consumer, retail, software, technology, financial services, business services, and health care.” That… doesn’t narrow it down much.)

Blue [Suit] Man Group. Why are these men smiling? Well, apparently their funds help digital services to grow – now including whatever NI are planning next.

I can translate, though. They help companies offering digital services grow. And they’ve got money to do it. The clients may shift – one of their previous big investments was in a tire e-service company (those round rubber things on cars), called Tirendo. And there was a search engine for vacation rentals. Plus a company with really futuristic lights.

NI were ahead of the curve in figuring out software would help musicians. They started simple – with things like a Hammond organ emulation and guitar effects. So now, it seems, the gamble is on which services would extend to larger groups of musicians.

NI will probably hire people. The one concrete piece of information: expect NI to hire new people to support new growth. So this is really about human investment.

NI already are established and successful. It’s also worth saying, NI aren’t a startup. They have not one, but multiple successful product lines. They’re established around the globe, in both software and hardware. They’re not getting investment because they’re burning cash and need to keep the lights on (cough, SoundCloud). This is money that can go directly into growth – without threatening the existing business.

So, about that growth —

What are they going to spend this on? This part is unclear, but you can bet on “services” for musicians, with musicians defined more broadly than the audience NI reaches now. The most important parts of the press release NI sent last week deal with that – and mention “breaking down the barriers to music creation.”

Over the past 12 months the company has made key hires in Berlin and Los Angeles, including the former CEO of Beatport, Matthew Adell. These specialized teams have commenced development of new digital services designed to redefine the landscape of music creation and the surrounding industry over the next year.

Here was my commentary on Adell at the time:

What does it mean that NI bought a startup that monetizes remixes?

Service – for what? Here’s the mystery: what will these services actually do?

It seems that the means of breaking down barriers – and playing on relationships with the likes of “Alicia Keys, Skrillex and Carl Cox” (mentioned in the press release) – is all about letting people remix music.

Of course, this makes yesterday’s news from ROLI seem a little desperate, as their initial remix offering just covers that earworm you finally got out of your head about a year ago, Pharrell Williams’ “Happy.” NI have a significant headstart.

But it should also raise some red flags: that is, NI have the contacts, the brains, and the money, but what problem will they solve for music lovers, exactly? Dreams of growth do often hit up against simple realities of what consumers actually do turn out to want and what they want to pay for.

There’s not much in the Magic Eight Ball here now, though, so – let’s see what the actual plan is. (It could also be that this has nothing to do with remixes at all, and the value of Adell is unrelated to his previous gig in remix monetization.)

NI aren’t alone in services, either. Apart from Roland’s somewhat strange Cloud offering (which is mainly a subscription plug-in offering with some twists), Cakewalk now have something called Momentum – a subscription-based service and mobile/desktop combination that promises to take ideas captured on your phone and easily load them into your DAW.

What are these NI executives actually saying with these words?

Daniel Haver, CEO, isn’t helping here – he says the new target is “increasingly diverse market segments.”

Or, to translate, “like, a bunch more different people.” (Fair. There is demand from a bunch more of y’all – and I’m not even kidding.)

Mate Galic, the CTO/founder – and someone whose past life as an experimental electronic artist will be familiar to CDM readers – has also learned to speak corporate.

“We believe music creation products and services should be integrated in a more appealing, intuitive and cohesive way,” Mate Galic, CTO and President of Native Instruments, said in a statement. “We foresee an easily accessible music creation ecosystem that connects user centric design, with powerful technology and data, to further enable the music creators of today, and welcome the new creators of tomorrow.”

(Don’t worry, Mate and Daniel do talk like normal human beings outside of company press releases!)

Translation: they want to make stuff that works together, and it’ll use data. Also fair, though I have some concerns, Mate: part of what makes music technology beautiful is that the “ecosystem” doesn’t come from just one vendor, and some of it is intentionally left unintuitive and non-cohesive because people who make music find its anarchy appealing. You could also take the words above and wind up with a remix app that uploads to the cloud, a combination of Facebook and a DAW, or… well, almost anything.

So, they’ll be spending 50 million on a service that does something for people. Music people. Guess we have to wait and see. (Probably the one thing you can say is, “service” implies “subscription.” Everything is Netflix and Amazon Prime now, huh?)

The big challenge for the whole industry right now is: how do we reach more people without screwing up what we’ve already got? With new and emerging audiences, how do you work out what people want? How do you bridge what beginners want and need with what an existing (somewhat weird) specialized audience wants and needs?

For NI, of course, I’m sure all of us will watch to make sure that this supports, rather than distracts from, the tools we use regularly. (It’d certainly be nice to finally see a TRAKTOR overhaul, though I don’t know if there’s any connection between its fate and what we’re seeing here – very possibly not.)

I’ll be sure to share if I learn more, when the time is right. I am this company’s literal next-door neighbor.

Pioneer made a CDJ-shaped sampler – what does that mean for DJs?

Japanese giant Pioneer continue their march to expand from decks and mixers into live tools for DJs. The latest: a sampler in the form of the ubiquitous CDJ.

This isn’t Pioneer’s first sampler/production workstation. The TORAIZ SP-16 already staked out Pioneer’s territory there, with the same 4×4 grid and sampling functions. And the SP-16 is really great. I had one to test for a few weeks, and while these things are pricey and more limited in functionality than some of the competition from Elektron and Akai, they’re also terrifically simple to use, have great build quality, feature those lovely Dave Smith analog filters, and of course effortlessly sync to other Pioneer gear. So it’s easy for loyal owners of other gear to laugh off the pricey Pioneer entry. But its simplicity for certain markets is really an edge, not a demerit.

The DJS-1000 appears to pack essentially those same features into the form factor of a CDJ. And it lowers the price relative to the SP-16. (suggested retail is €1299 including VAT, so somewhere in the US$1200 range or less)

Features, in a nutshell. Most are copied directly from the SP-16:

  • 16 color-coded step input keys and step sequencer
  • Touch strip
  • 7-inch full-color touchscreen, with three screens – Home, Sequence, Mixer
  • Live sampling from inputs
  • FX: echo, reverb, filter, etc. (a digital filter – not, I think, the Dave Smith analog filter found on the SP-16)
  • MIDI clock sync, Beat Sync (with PRO DJ LINK for the latest CDJ/XDJ)

And it even loads project files right off the SP-16 – so you can make projects at home, then tote them to the club on a USB stick. But there are a couple of additions:

  • Tempo slider, nudge for turntable-style sync by hand
  • Form factor that sits alongside the CDJ-2000NXS2 and DJM-900NXS2
  • Support for the DJS-TSP Project Creator – “easily create projects and SCENE files on a PC/Mac”

But… wait, would you actually want a sampler shaped like a CDJ?

There are a few benefits to borrowing the CDJ’s form factor. Of course, you elevate the controls to the same height as turntables and other CDJs, and tilt up the screen. (Those viewing angles were pretty good, but this is still easier to see in a booth. Oh, yeah – Pioneer’s slogan for this thing is even “elevate the standard,” which you can take two ways!)

That to me actually isn’t the most interesting feature, though. Adding a big tempo control means you can actually ignore that sync selling point and manually nudge drum patterns in tempo with other music. Now, that’s not to say that’s something people will do, but I’d love to see more manual controls for feel and tempo on machines. (Imagine tactile control over the components of a rhythm pattern, for instance. We’ve mostly envisioned a world in which our rhythmic machines are locked to one groove and left there; it doesn’t have to be that way.)

Giving DJs a familiar control layout (well, bits of it, anyway) is also a plus in certain markets.

But all of this is aimed as much at the club as at the DJ. Looking at this thing, it’s apparent what Pioneer are hoping: that clubs begin to buy samplers alongside decks. That enclosure, apart from saving some costs for Pioneer through standardization, is a big advert that screams “you bought CDJs, now buy this.”

That leap isn’t inevitable, though. The form factor that makes the DJS-1000 smart for a club doesn’t necessarily make sense in the studio. I might buy a square SP-16 for less money, but … not a DJS-1000, because it’s ungainly and big and completely absurd to travel with. A lot of DJs would buy a CDJ for their studio to practice on – but Pioneer doesn’t really make one that fits those DJs’ budgets. (Ironically, the DJs who could afford to buy their own CDJs – the ones gigging all the time – have enough hours on the CDJ that I don’t know even one who has bought decks for themselves. I think they’re glad to have a vacation from the damned things.)

The fundamental question remains: will DJs actually start playing live or hybrid sets? The gamble Pioneer is making is “build it and they will come,” effectively.

I’ve tried to find out if the Toraiz range is having that impact. Certainly, some DJs are buying them. A lot of producers are, too – particularly the lovely AS-1 synth, which holds its own against competing synths so well that you can easily forget the Pioneer logo is even there.

But there’s still a big, big divide between live acts and DJs. Most producers playing live will want to arrange their own gear. And once you’re playing live, even if you decide to play a hybrid set, you’re more likely to want to augment the live set with CDJs than to switch to Pioneer for samplers. You wouldn’t buy a DJS-1000, probably, given the whole Elektron range is as affordable or cheaper – the Digitakt is half the price of this, does more, and is more portable.

But if Pioneer isn’t selling to you, but to clubs, then you can figure the strategy is this:

1. Get SP-16 owners to bring a USB stick and plug into the DJS-1000 they find in clubs – that’s cool.

2. Get DJs preparing sampled sets on computers, then bringing them on USB sticks. That’s huge.

3. Get some DJs who haven’t worked much with samplers to toy around with the ones they find appearing in booths – the gateway drug effect.

#3 is more unpredictable; #1 and #2 aren’t. And don’t underestimate the power of Pioneer’s massive sales and marketing operation, which does extensive outreach to clubs and artists. That “industry standard” thing didn’t just happen accidentally.

Pioneer hopes clubs will invest in a setup like the one in this press photo, of course.

I don’t think this means Pioneer will become an industry standard in live gear. But it does help them expand beyond just decks, and ironically could help vendors like Elektron who are more live-focused.

The real question isn’t for Pioneer, then: it’s for Native Instruments and Ableton. Home and studio use still seem to benefit from computer-software combinations. But the competition in live use is increasingly standalone hardware. We’ll see if the two Berlin software giants’ bet that people will still want to work with computers was a smart one — or if it means missed opportunities for Maschine and Live/Push. (TRAKTOR, for its part, has clearly lost ground. I’d love to see a TRAKTOR 3 that worked as portable standalone hardware, a deck combo you could take anywhere, but I’m not so optimistic.)

But I fully expect some of these DJS-1000s to start showing up in the nicer booths around.

The post Pioneer made a CDJ-shaped sampler – what does that mean for DJs? appeared first on CDM Create Digital Music.

SoundCloud, now Vimeo of Sound, instead of YouTube of Sound?

SoundCloud’s do-or-die moment came Friday – and it seems it’s do, not die. The company now takes on new executives, and a new direction.

First, it’s important to understand just what happened yesterday. Despite some unhinged and misleading blog reports, the situation didn’t involve the site suddenly switching off – following the layoffs, the company said it had enough cash to survive through the end of the fourth quarter. That said, the concern was, without reassurances the company could last past that, SoundCloud could easily have slipped into a death spiral, with its value dropping and top talent fleeing a sinking ship.

What happened: New investment stepped in, with a whopping US$169.5 million, for SoundCloud’s biggest round ever (series F). That follows big past investments from Twitter, early venture funding, and debt financing last year.

This gives the company a new direction, new and experienced leadership, and the stability to keep current talent in the building.

Under new management

What changes: Plenty. When you invest that much money, you can demand changes from the company to make it more likely you’ll get your investment back.

  • New CEO: Kerry Trainor (formerly CEO of Vimeo)
  • New COO: Mike Weissman (formerly COO of Vimeo)
  • New board members: Trainor joins the board, alongside Fred Davis (a star investor and music attorney), and Joe Puthenveetil (also music-focused), each coming from Raine (the firm that did the deal).
  • A much lower valuation: In order to secure funding, SoundCloud adjusted what had been at one point a $700 million valuation to a pre-investment $150 million. That’s not much above its annual run rate, and it indicates how far they’ve fallen.
  • …but maybe we don’t do this runway thing any more. The good news – TechCrunch reports the company says it has a $100 million annual run-rate. This investment means they’re not in urgent need of cash. They’ve bought themselves time to genuinely become a money making business, instead of constantly needing to go back to investors for money. (“Dad??? Can I borrow $70 million?”)

What stays the same:

  • SoundCloud as you know it keeps running. (Meaning, if you aren’t terribly interested in the business story here, carry on uploading and forget about it!)
  • Eric Wahlforss stays on. The co-founder’s title is adjusted to “Chief Product Officer” instead of CTO, but it appears he’ll retain a hands-on role. That’s important, too, because no one knows the product – or how it’s used by musicians – better than Eric does. It’s easy to criticize the executive team, but if you’re a current user, this is good news. (Just bringing in some Vimeo people and dumping the people running the product would almost certainly have been bad for the service you use.)

Now, most headlines are focusing on the cash lifeline, and that’s absolutely vital. But this is a major talent injection, too. Fred Davis is one of the key figures in New York around music and tech, from his work as an attorney to his investing. (He was known to float around hackdays, too.) Oh, yeah – he’s also the son of Clive Davis, who started NYU’s music business school. Puthenveetil brings significant expertise in the area, too.

Kerry Trainor is about the single most experienced person you could find to lead SoundCloud – more experienced, in fact, than the executives who have steered the company before. His streaming experience, as SoundCloud points out in their press release, goes back 20 years. (They leave out the names, because kids don’t like AOL, Yahoo Music, or Launch Media any more, but experience matters.) And he is largely credited with making Vimeo a profitable company.

What’s the future of SoundCloud now?

For all the skepticism, Alex seems to have delivered on exactly the promises he’s been making over the past few weeks, vague as they may have seemed. SoundCloud does appear ready to re-focus on creators, and the financing means ongoing independence is a real possibility.

Whether it works or not, it’s tough to overstate what a significant shift in direction this represents. For years, people have casually referred to SoundCloud as the “YouTube of audio.” (Oddly, the phrase I first wrote when they started was a “Flickr of audio,” which, uh, dates that story. But it does also indicate creators, not consumers, were initially the focus, so I at least got that bit right.)

It seems SoundCloud aren’t just bringing on former Vimeo executives. They seem poised to follow Vimeo’s example.

We already know that endlessly expanding scale and more streaming is a disastrous business model. The issue is, if listeners aren’t paying while royalties accrue on every play, then the more people listen, the more money you lose. Spotify is facing that now and may need a similar change in direction, and the entire music industry is caught up in this black hole. Companies like Google and Apple can absorb the losses if they choose; an independent company can’t.

So scale alone isn’t the answer. And just having more listeners doesn’t necessarily mean the kind of attention that gets you caring fans or lands you gigs.

Vimeo faced a similar squeeze, up against YouTube and Facebook’s own video push – each backed by a big company and revenue streams that the creator-focused, smaller outfit lacked.

What’s unique about Vimeo, under Kerry Trainor in particular, is that they found a way to compete by focusing on the creators uploading to the service rather than just the viewers watching it. While YouTube always tried to encourage uploads, its focus was on scale – and ultimately, the toolset was geared more for advertisers and watchers, and casual content creators, than for serious content makers.

Vimeo offers an alternative that serious uploaders like. Actual streaming quality is higher. The presentation is more focused on your content. There are powerful tools for controlling that presentation and collecting stats – if you’re willing to pay. And there’s not only greater intangible value to those serious uploaders, but greater tangible returns, too. It’s easier to sell your content – and, because there’s a collected community of pro users, easier to get audiences that support paying gigs.

Now, to do that in the face of YouTube’s scale, Vimeo had to make money. And that’s what Trainor did, by encouraging more of its creators to pay.

We already know SoundCloud’s plans to make listeners pay have fallen flat. So, as users have been clamoring for years, now is a chance to refocus on the creators.

I think anyone who knew Vimeo figured this was the best guess at the company’s new strategy the moment they saw Trainor and Weissman rumored for the executive roles. And sure enough, in an exclusive talk with Billboard, Trainor says point blank that’s his strategy:

SoundCloud’s Pro and Pro Unlimited subscription services provide insights into which tracks are most popular and where. The Pro service, which costs $7 a month, provides basic stats such as play counts and likes, and lets creators see plays by country, turn public comments on or off, and upload up to six hours of audio. The Unlimited offering, for a $15 monthly fee, lifts the cap on the amount of music that can be uploaded and provides more specific analytics.

Trainor hopes to increase the number of creators who pay to use SoundCloud Unlimited’s service by adding an even more robust creative toolkit.

Emphasis mine. And from the user reaction I’ve seen, even a lot of die-hard SoundCloud enthusiasts in my early adopter social feed found reason to pay for Pro, but not Unlimited. Poor differentiation and stagnant offerings just gave little motivation.

That’s not to knock SoundCloud’s rocket growth, either. On the contrary, it’s pretty tough to argue against sharing your sound on one of the Internet’s biggest sites, with one of the world’s most popular mobile apps alongside. But having now grown a huge audience, SoundCloud needs to refresh its tools for creators.

Translating from video to audio isn’t going to be easy. Part of the reason SoundCloud presumably didn’t push as hard on creator subscriptions is, there’s no clear indication what would make musicians pay for them. Audio is simpler than video – easier to encode, easier to share. Serving video on your own server is a nightmare, but serving audio isn’t. And, sorry to be blunt, but then there’s the issue of whether music producers really earn enough to want to blow cash on expensive subscriptions. Compare a motion graphics firm or design agency using Vimeo, who could make back a couple hundred bucks in subscription fees in, literally, an hour of work.

Even beyond that, I’m not clear what SoundCloud creators want from the service that they aren’t already getting. (Okay, Groups – but those probably aren’t coming back, and I don’t know that people would pay a subscription for them.) The toolchain outside the browser is already powerful and sophisticated, which has always made Web tools a bit less appealing – why use a browser-based mastering tool like Landr when you already have powerful mastering tools in your DAW, for instance? If you’ve invested enough money in gear and software to want to share a track in the first place, what will make you spend a few dollars a month for more?

That said, there’s clearly a passionate and motivated community of people making music. And note that the new talent at SoundCloud has music experience and interest as well as video. Trainor is evidently an avid guitarist (what, you’re not a fan of “Etro Anime,” his band?). He cut his teeth in tech on the music side. (LAUNCH Media went from a CD-ROM taped to a print magazine to Internet radio offerings that look a lot like how we listen to music now.) And he’s currently on the board of guitar maker Fender.

Vimeo also had a long-standing interest in music and the music community in the company’s native New York City.

These are tough problems to solve. But I can think of few better people to tackle them. Basically, Alex and Eric not only saved their company for now, but seem to have gotten what they wanted in the process.

Also, it’s worth pointing out – the music business wants SoundCloud to live, not die. I think its death would be unequivocally bad for musicians and labels, in fact, with independent and international artists feeling the worst impact. And it’s worth noting what Fred Davis tells Billboard: “If I could show to you the number of people who have been calling us, expressing fear about it going away, you would be shocked.”

It’s still possible investors will look to sell, but I suspect with the valuation at its low point and the tech world in general losing interest in music’s money-losing propositions and legal mess, independence is probably the safe bet.

If SoundCloud can turn this around, it’ll be a great example of a tech company humbling itself and successfully changing course.

We’ll be watching – and once this team settles in, hopefully we’ll get to talk to them.

Background:
SoundCloud saved by emergency funding as CEO steps aside [TechCrunch]

SoundCloud Secures Significant Investment Led by The Raine Group and Temasek [SoundCloud press release]

Exciting news and the future of SoundCloud [Alex on the SoundCloud blog]

The post SoundCloud, now Vimeo of Sound, instead of YouTube of Sound? appeared first on CDM Create Digital Music.

Export to hardware, virtual pedals – this could be the future of effects

If your computer and a stompbox had a love child, MOD Duo would be it – a virtual effects environment that can load anything. And now, it does Max/MSP, too.

MOD Devices’ MOD Duo began its life as a Kickstarter campaign. The idea – turn computer software into a robust piece of hardware – wasn’t itself so new. Past dedicated audio computer efforts have come and gone. But it is genuinely possible in this industry to succeed where others have failed, by getting your timing right, and executing better. And the MOD Duo is starting to look like it does just that.

What the MOD Duo gives you is essentially a virtualized pedalboard where you can add effects at will. Set up the effects you want on your computer screen (in a Web browser), and even add new ones by shopping for sounds in a store. But then, get the reliability and physical form factor of hardware, by uploading them to the MOD Duo hardware. You can add additional footswitches and pedals if you want additional control.

Watch how that works:

For end users, it can stop there. But DIYers can go deeper with this as an open box. Under the hood, it’s running LV2 plug-ins, an open, Linux-centered plug-in format. If you’re a developer, you can create your own effects. If you like tinkering with hardware, you can build your own controllers, using an Arduino shield they made especially for the job.
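
Since the MOD’s plug-in format is LV2, it’s worth seeing just how small an LV2 effect actually is. Here’s a bare-bones gain plugin sketch in C++, modeled on the format’s classic “amp” example – the URI and port numbering are hypothetical, and a real plugin also needs Turtle metadata (manifest.ttl and a plugin .ttl) declaring those ports before a host like the MOD will load it.

```cpp
// Minimal LV2 gain plugin sketch (C++). Hypothetical URI and port layout;
// the matching .ttl metadata that declares these ports is not shown.
#include <cstdlib>
#include <lv2/core/lv2.h> // on older LV2 installs: <lv2/lv2plug.in/ns/lv2core/lv2.h>

#define GAIN_URI "http://example.org/hypothetical-gain"

enum PortIndex { GAIN = 0, AUDIO_IN = 1, AUDIO_OUT = 2 };

struct Gain {
    const float* gain; // control port, set by the host
    const float* in;   // audio input buffer
    float*       out;  // audio output buffer
};

static LV2_Handle instantiate(const LV2_Descriptor*, double /*rate*/,
                              const char* /*bundle*/, const LV2_Feature* const*) {
    return static_cast<LV2_Handle>(std::calloc(1, sizeof(Gain)));
}

static void connect_port(LV2_Handle instance, uint32_t port, void* data) {
    Gain* self = static_cast<Gain*>(instance);
    switch (port) {
        case GAIN:      self->gain = static_cast<const float*>(data); break;
        case AUDIO_IN:  self->in   = static_cast<const float*>(data); break;
        case AUDIO_OUT: self->out  = static_cast<float*>(data);       break;
    }
}

static void run(LV2_Handle instance, uint32_t n_samples) {
    Gain* self = static_cast<Gain*>(instance);
    for (uint32_t i = 0; i < n_samples; ++i)
        self->out[i] = self->in[i] * *(self->gain); // per-sample scaling
}

static void cleanup(LV2_Handle instance) { std::free(instance); }

static const LV2_Descriptor descriptor = {
    GAIN_URI, instantiate, connect_port,
    nullptr /*activate*/, run, nullptr /*deactivate*/, cleanup,
    nullptr /*extension_data*/
};

// Hosts discover plugins through this entry point.
extern "C" LV2_SYMBOL_EXPORT const LV2_Descriptor* lv2_descriptor(uint32_t index) {
    return index == 0 ? &descriptor : nullptr;
}
```

Compile that into a shared library, add the metadata, and any LV2 host – the MOD included – should be able to load it. That’s the whole contract.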

And then, this week, the folks at Cycling ’74 take us on a special tour of integration with Max/MSP. It represents something many software patchers have dreamed of for a long time. In short, you can “export” your patches to the hardware, and run them standalone without your computer.

This says a lot about the future, beyond just the MOD Duo. The technology that allows Max/MSP to support the MOD Duo is gen~ code, a more platform-agnostic, portable core inside Max. This hints at a future when Max runs in all sorts of places – not just mobile, but other hardware, too. And that future was of interest both to Cycling ’74 and the CEO of Ableton, as revealed in our interview with the two of them.
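
If you’re curious what that “export” looks like from the consuming side, here’s a minimal sketch of running gen~-exported code standalone. It assumes the interface gen~’s code export writes into gen_exported.h – the namespace name, CommonState, and the create/perform/destroy signatures below are recalled from that generated header rather than guaranteed, so verify them against the files your own export produces.

```cpp
// Sketch of driving gen~-exported DSP code directly, assuming the interface
// in the generated gen_exported.h (signatures may differ between Max versions).
#include "gen_exported.h" // produced by gen~'s code export, built with the genlib sources

int main() {
    const long   kBlock      = 64;      // one signal vector
    const double kSamplerate = 48000.0;

    // Allocate the DSP state for the exported patcher.
    CommonState* state = gen_exported::create(kSamplerate, kBlock);

    // Stereo input/output buffers for a single vector.
    t_sample inL[64] = {0}, inR[64] = {0};
    t_sample outL[64] = {0}, outR[64] = {0};
    t_sample* ins[2]  = { inL, inR };
    t_sample* outs[2] = { outL, outR };

    // Process one vector of audio; a real host calls this from its audio callback.
    gen_exported::perform(state, ins, 2, outs, 2, kBlock);

    gen_exported::destroy(state);
    return 0;
}
```

Presumably the MOD integration wraps exactly this kind of generated code in an LV2 shell, so the hardware can load your patch like any other plugin.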

Even broader than that, though, this could be a way of looking at what electronic music looks like after the computer. A lot of people assume that ditching laptops means going backwards. And sure enough, there has been a renewed interest in instruments and interfaces that recall tech from the 70s and 80s. That’s great, but – it doesn’t have to stop there.

The truth is, the form factors and physical interactions that worked well on dedicated hardware may start to take on more of the openness, flexibility, intelligence, and broad sonic canvas that computers offer. It means, basically, that you’re not ditching your computer for a modular, a stompbox, or a keyboard. It’s that those things start to act more like your computer.

Anyway, why wait for that to happen? Here’s one way it can happen now.

Darwin Grosse has a great walk-through of the MOD Duo and how it works, followed by a guide to getting started with Max:

The MOD Duo Ecosystem (an introduction to the MOD Duo)

Content You Need: The MOD Duo Package (an intro to how to work with Max)

The post Export to hardware, virtual pedals – this could be the future of effects appeared first on CDM Create Digital Music.