
Accusonus explain how they’re using AI to make tools for musicians

First, there was DSP (digital signal processing). Now, there’s AI. But what does that mean? Let’s find out from the people developing it.

We spoke to Accusonus, the developers of loop unmixer/remixer Regroover, to try to better understand what artificial intelligence will do for music making – beyond just the buzzwords. It’s a topic they presented recently at the Audio Engineering Society conference, alongside some other developers exploring machine learning.

At a time when a lot of music software retreads existing ground, machine learning is a relatively fresh frontier. One important distinction to make: machine learning involves training the software in advance, then applying those algorithms on your computer. But that already opens up some new sound capabilities, as I wrote about in our preview of Regroover, and can change how you work as a producer.

And the timing is great, too, as we take on the topic of AI and art with CTM Festival and our 2018 edition of our MusicMakers Hacklab. (That call is still open!)

CDM spoke with Accusonus’ co-founders, Alex Tsilfidis (CEO) and Elias Kokkinis (CTO). Elias explains the story from a behind-the-scenes perspective – but in a way that I think remains accessible to us non-mathematicians!

Elias (left) and Alex (right). As Elias is the CTO, he filled us in on the technical inside track.

How do you wind up getting into machine learning in the first place? What led this team to that place; what research background do they have?

Elias: Alex and I started out our academic work with audio enhancement, combining DSP with the study of human hearing. Toward the end of our studies, we realized that the convergence of machine learning and signal processing was the way to actually solve problems in real life. After the release of drumatom, the team started growing, and we brought people on board who had diverse backgrounds, from audio effect design to image processing. For me, audio is hard because it’s one of the most interdisciplinary fields out there, and we believe a successful team must reflect that.

It seems like there’s been movement in audio software from what had been pure electrical engineering or signal processing to, additionally, understanding how machines learn. Has that shifted somehow?

I think of this more as a convergence than a “shift.” Electrical engineering (EE) and signal processing (SP) are always at the heart of what we do, but when combined with machine learning (ML), they can lead to powerful solutions. We are far from understanding how machines learn. What we can actually do today is “teach” machines to perform specific tasks with very good accuracy and performance. In the case of audio, these tasks are always related to some underlying electrical engineering or signal processing concept. The convergence of these principles (EE, SP and ML) is what allows us to develop products that help people make music in new or better ways.

What does it mean when you can approach software with that background in machine learning? Does it change how you solve problems?

Machine learning is just another tool in our toolbox. It’s easy to get carried away, especially with all the hype surrounding it now, and use ML to solve any kind of problem, but sometimes it’s like using a bazooka to kill a mosquito. We approach our software products from various perspectives and use the best tools for the job.

What do we mean when we talk about machine learning? What is it, for someone who isn’t a researcher/developer?

The term “machine learning” describes a set of methods and principles engineers and scientists use to teach a computer to perform a specific task. An example would be identifying the music genre of a given song. Let’s say we’d like to know whether a song we’re currently listening to is an EDM song or not. The “traditional” approach would be to create a set of rules that say EDM songs are in this BPM range and have that tonal balance, etc. Then we’d have to implement specific algorithms that detect a song’s BPM value, a song’s tonal balance, etc. Then we’d have to analyze the results according to the rules we specified and decide if the song is EDM or not. You can see how this gets time-consuming and complicated, even for relatively simple tasks. The machine learning approach is to show the computer thousands of EDM songs and thousands of songs from other genres and train the computer to distinguish between EDM and other genres.
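The contrast Elias describes can be sketched in a few lines of code. This is a toy illustration only, not Accusonus’ actual code: the features (BPM, a 0–1 “tonal balance” value) and thresholds are hypothetical, and real systems use far richer representations and classifiers. But it shows the shift from hand-written rules to parameters learned from labeled examples – here, a minimal nearest-centroid classifier:

```python
def rule_based_is_edm(bpm, tonal_balance):
    """Hand-written rules: brittle, and every new genre needs new rules."""
    return 124 <= bpm <= 140 and tonal_balance > 0.6

def train_centroids(examples):
    """'Learn' by averaging the feature vectors of the examples, per genre."""
    sums, counts = {}, {}
    for features, genre in examples:
        acc = sums.setdefault(genre, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[genre] = counts.get(genre, 0) + 1
    return {g: [x / counts[g] for x in acc] for g, acc in sums.items()}

def classify(features, centroids):
    """Assign the genre whose learned centroid is nearest (squared distance)."""
    def dist(genre):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[genre]))
    return min(centroids, key=dist)

# Training data: (features, label) pairs -- thousands in practice, four here.
examples = [
    ([128, 0.8], "edm"), ([132, 0.7], "edm"),
    ([90, 0.3], "other"), ([75, 0.4], "other"),
]
centroids = train_centroids(examples)
print(classify([130, 0.75], centroids))  # nearest centroid: "edm"
```

The point of the example is the workflow, not the algorithm: adding a genre means adding labeled examples and retraining, rather than writing and tuning a new rule.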

Computers can get very good at this sort of very specific task. But they don’t learn like humans do. Humans also learn by example, but don’t need thousands of examples. Sometimes a few or just one example can be enough. This is because humans can truly learn, reason and abstract information and create knowledge that helps them perform the same task in the future and also get better. If a computer could do this, it would be truly intelligent, and it would make sense to talk about Artificial Intelligence (A.I.), but we’re still far away from that. Ed.: lest the use of that term seem disingenuous, machine learning is still seen as a subset of AI. -PK

If you’d like to read more on the subject, a great blog post by NVIDIA and a slightly more technical blog post by F. Chollet will shed more light on what machine learning actually is.

We talked a little bit on background about the math behind this. But in terms of what the effect of doing that number crunching is, how would you describe how the machine hears? What is it actually analyzing, in terms of rhythm, timbre?

I don’t think machines “hear,” at least not now, and not as we might think. I understand the need we all have to explain what’s going on and find some reference that makes sense, but what actually goes on behind the scenes is more mundane. For now, there’s no way for a machine to understand what it’s listening to, and hence start hearing in the sense a human does.

Inside Accusonus products, we have to choose what part of the audio file/data to “feed” the machine. We might send an audio track’s rhythm or pitch, along with instructions on what to look for in that data. The data we send are “representations” and are limited by our understanding of, for instance, rhythm or pitch. For example, Regroover analyses the energy of the audio loop across time and frequency. It then tries to identify patterns that are musically meaningful and extract them as individual layers.
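Regroover’s actual decomposition is proprietary, but non-negative matrix factorization (NMF) is one standard, published technique for exactly this kind of task: splitting a time-frequency energy matrix into “layers,” each a spectral template plus an activation pattern over time. A minimal sketch, assuming a toy 4-band × 8-step energy matrix mixing two hypothetical sources:

```python
import numpy as np

def nmf(V, n_layers, n_iter=200, eps=1e-9):
    """Factor a non-negative (freq x time) energy matrix V into
    W (freq x layers: spectral templates) and H (layers x time:
    activations), using Lee & Seung multiplicative updates."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, n_layers)) + eps
    H = rng.random((n_layers, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two overlapping "sources": a low-frequency kick pattern and a
# high-frequency hat pattern, summed into one energy matrix.
kick = np.outer([1.0, 0.8, 0.1, 0.0], [1, 0, 0, 0, 1, 0, 0, 0])
hat  = np.outer([0.0, 0.1, 0.9, 1.0], [0, 0, 1, 0, 0, 0, 1, 0])
V = kick + hat
W, H = nmf(V, n_layers=2)
err = np.abs(W @ H - V).max()  # the two layers approximately reconstruct V
```

The factorization only “sees” energy patterns; whether a layer is musically meaningful is exactly the kind of extra constraint a product like Regroover has to add on top.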

Is all that analysis done in advance, or does it also learn as I use it?

Most of the time, the analysis is done in advance, or just when the audio files are loaded. But it is possible to have products that get better with time – i.e., that “learn” as you use them. There are several technical challenges to having our products learn through use, including significant processing load and having to run inside old-school DAW and plug-in platforms that were primarily developed for more “traditional” applications. As plug-in creators, we are forced to constantly fight our way around obstacles, and this comes at a cost for the user.


What’s different about this versus another approach – what does this let me do that maybe I wasn’t able to do before?

Sampled loops and beats have been around for many years, and people have many ways to edit, slice and repurpose them. Before Regroover, everything happened in one dimension: time. Now people can edit and reshape loops and beats in both time and frequency. They can also go beyond the traditional multi-band approach by using our tech to extract musical layers and original sounds. The possibilities for unique beat production and sound design are practically endless. A simple loop can be a starting point for many musical ideas.

How would you compare this to other tools on the market – those performing these kind of analyses or solving these problems? (How particular is what you’re doing?)

The most important thing to keep in mind when developing products that rely on advanced technologies and machine learning is what the user wants to achieve. We try to “hide” as much of the complexity as possible and provide a familiar, intuitive user interface that lets users focus on the music and not the science. Our single-knob noise and reverb removal plug-ins are very good examples of this. The number of parameters and options in the algorithms would be too confusing to expose to the end user, so we created a simple UI to deliver a quick result.

If you take something as simple as being able to re-pitch samples, each time there’s some new audio process, various uses and abuses follow. Is there a chance to make new kinds of sounds here? Do you expect people to also abuse this to come up with creative uses? (Or has that happened already?)

Users are always the best “hackers” of our products. They come up with really interesting applications that push the boundaries of what we originally had in mind. And that’s the beauty of developing products that expand the sound processing horizons for music. Regroover is the best example of this. Stavros Gasparatos has used Regroover in an installation where he split industrial recordings, routing the layers to six speakers inside a big venue. He tried to push the algorithm to create all kinds of crazy splits and extract inspiring layers. The effect was that in the middle of the room you could hear the whole sound, and when you approached one of the speakers crazy things happened. We even had some users who extracted inspiring layers from washing machine recordings! I’m sure the CDM audience can think of even more uses and abuses!

Regroover gets used in Gasparatos’ expanded piano project:

Looking at the larger scene, do you think machine learning techniques and other analyses will expand what digital software can do in music? Does it mean we get away from just modeling analog components and things like that?

I believe machine learning can be the driving force for a much-needed paradigm shift in our industry. The computational resources available today, not only on our desktop computers but also in the cloud, are tremendous, and machine learning is a great way to utilize them to expand what software can do in music and audio. Essentially, the only limit is our imagination. And if we keep being haunted by the analog sounds of the past, we can never imagine the sound of the future. We hope Accusonus can play its part and change this.

Where do you fit into that larger scene? Obviously, your particular work here is proprietary – but then, what’s shared? Is there larger AI and machine learning knowledge (inside or outside music) that’s advancing? Do you see other music developers going this direction? (Well, starting with those you shared an AES panel with?)

I think we fit among the forward-thinking companies that try to bring about this paradigm shift by actually solving problems and providing new ways of processing audio and creating music. Think of iZotope with their newest Neutron release, Adobe Audition’s Sound Remover, and Apple Logic’s Drummer. What we need to share between us (and we already do with some of those companies) is the vision of moving things forward, beyond the analog world, and our experiences designing great products using machine learning (here’s our CEO’s keynote at a recent workshop on this).

Can you talk a little bit about your respective backgrounds in music – not just in software, but your experiences as a musician?

Elias: I started out as a drummer in my teens. I played with several bands during high school and as a university student. At the same time, I started getting into sound engineering, where my studies really helped. I ended up working a lot of gigs, from small venues to stadiums, doing everything from cabling and PA setup to mixing the show and monitors. During this time I got interested in signal processing and acoustics, and I focused my studies on those fields. Towards the end of university I spent a couple of years in a small recording studio, where I did some acoustic design for the control room and recorded and mixed local bands. After graduating I started working on my PhD thesis on microphone bleed reduction and general audio enhancement. Funnily enough, Alex was the one who built the first version of that studio, he was the supervisor of my undergraduate thesis, and we spent most of our PhDs working together in the same research group. It was almost meant to be that we would start Accusonus together!

Alex: I studied classical piano and music composition as a kid, and turned to synthesizers and electronic music later. As many students do, I formed a band with some friends, and that band happened to be one of the few abstract electronic/trip hop bands in Greece. We started making music around an old Atari computer, an early MIDI-only version of Cubase that triggered some cheap synthesizers and recorded our first demo in a crappy 4-channel tape recorder in a friend’s bedroom. Fun days!

We then bought a PC and fancier equipment and started making our living writing soundtracks for theater and dance shows. During that period I was practically living as a professional musician/producer and had quit my studies. But after a couple of years, I realized I was more and more fascinated by the technology side of music, so I returned to the university and focused on audio signal processing. After graduating from the Electrical and Computer Engineering Department, I studied acoustics in France and then started my PhD in de-reverberation and room acoustics at the same lab as Elias. We became friends, worked together as researchers for many years, and realized that we shared the same vision of creating innovative products to help everyone make great music. That’s why we founded Accusonus!

So much of software development is just modeling what analog circuits or acoustic instruments do. Is there a chance for software based on machine learning to sound different, to go in different directions?

Yes, I think machine learning can help us create new, inspiring sounds and lead us in different directions. Google Magenta’s NSynth is a great example of this, I think. While still mostly a research prototype, it shows the new directions that can be opened up by these techniques.

Can you recommend some resources showing the larger picture with machine learning? Where might people find more on this larger topic?

https://openai.com/

Siraj Raval’s YouTube channel

Google Magenta’s blog for audio/music applications https://magenta.tensorflow.org/blog/

Machine learning for artists https://ml4a.github.io/

Thanks, Accusonus! Readers, if you have more questions for the developers – or about the machine learning field in general, in music industry developments and in art – do sound off. For more:

Regroover is the AI-powered loop unmixer, now with drag-and-drop clips

http://accusonus.com

The post Accusonus explain how they’re using AI to make tools for musicians appeared first on CDM Create Digital Music.

Native Instruments got a huge chunk of investment to grow

Big industry news last week: Native Instruments, purveyors of Traktor, Maschine, Reaktor, and Komplete, got 50 million Euros. Let’s make sense of that.

NI apparently wanted a reveal here. With Amsterdam Dance Event looking more like Pioneer turf these days – that company is dominant with CDJs and mixers and now even turntables, and had its own sampler on hand – NI got the attention of DJs at the keynote.

But what does it mean that the Berlin-based company got 50 million Euros? Well, some points to consider:

50 million is a lot. This is a lot for a company in the musical instruments sector of the business. Our quiet little corner of stuff for electronic musicians has begun to see some action, it’s true. For instance, Focusrite PLC (parent of Novation) made an initial public offering in 2014, and ROLI saw an unprecedented $27 million Series B funding back in 2016.

But 50 million euros opens up the possibility of significant investment. (Despite all that cash, NI retains its private ownership.)

The money is coming from a firm linked to music and pop stars. Billboard wrote the best piece I’ve seen on this yet:

Native Instruments Raises $59 Million From EMH Partners

So who are EMH? Well, they’re led by a bunch of white German guys in matching blue suits, who look like the people at the front of the queue for first class on your Lufthansa flight, or an a cappella vocal quartet, or both.

But, apart from that, EMH are fairly interesting. You won’t gather much from their website. (Example: they say they have “a special focus on consumer, retail, software, technology, financial services, business services, and health care.” That… doesn’t narrow it down much.)

Blue [Suit] Man Group. Why are these men smiling? Well, apparently their funds help digital services to grow – now including whatever NI are planning next.

I can translate, though. They help companies offering digital services grow. And they’ve got money to do it. The clients may shift – one of their previous big investments was in a tire e-service company (those round rubber things on cars), called Tirendo. And there was a search engine for vacation rentals. Plus a company with really futuristic lights.

NI were ahead of the curve on figuring out software would help musicians. They started simple – with things like a Hammond organ emulation and guitar effects. So now, it seems the gamble is what services would extend to larger groups of musicians.

NI will probably hire people. The one concrete piece of information: expect NI to hire new people to support new growth. So this is really about human investment.

NI already are established and successful. It’s also worth saying, NI aren’t a startup. They have not one, but multiple successful product lines. They’re established around the globe, in both software and hardware. They’re not getting investment because they’re burning cash and need to keep the lights on (cough, SoundCloud). This is money that can go directly into growth – without threatening the existing business.

So, about that growth —

What are they going to spend this on? This part is unclear, but you can bet on “services” for musicians, with musicians defined more broadly than the audience NI reaches now. The most important parts of the press release NI sent last week deal with that – and mention “breaking down the barriers to music creation.”

Over the past 12 months the company has made key hires in Berlin and Los Angeles, including the former CEO of Beatport, Matthew Adell. These specialized teams have commenced development of new digital services designed to redefine the landscape of music creation and the surrounding industry over the next year.

Here was my commentary on Adell at the time:

What does it mean that NI bought a startup that monetizes remixes?

Service – for what? Here’s the mystery: what will these services actually do?

It seems that the means of breaking down barriers – and playing on relationships with the likes of “Alicia Keys, Skrillex and Carl Cox” (mentioned in the press release) – is all about letting people remix music.

Of course, this makes yesterday’s news from ROLI seem a little desperate, as their initial remix offering just covers that earworm you finally got out of your head about a year ago, Pharrell Williams’ “Happy.” NI have a significant headstart.

But it should also raise some red flags: that is, NI have the contacts, the brains, and the money, but what problem will they solve for music lovers, exactly? Dreams of growth often hit up against the simple realities of what consumers actually turn out to want and what they’re willing to pay for.

There’s not much in the Magic Eight Ball here now, though, so – let’s see what the actual plan is. (It could also be that this has nothing to do with remixes at all, and the value of Adell is unrelated to his previous gig in remix monetization.)

NI aren’t alone in services, either. Apart from Roland’s somewhat strange Cloud offering (which is mainly a subscription plug-in offering with some twists), Cakewalk now have something called Momentum – a subscription-based service and mobile/desktop combination that promises to take ideas captured on your phone and easily load them into your DAW.

What are these NI executives actually saying with these words?

Daniel Haver, CEO, isn’t helping here – he says the new target is “increasingly diverse market segments.”

Or, to translate, “like, a bunch more different people.” (Fair. There is demand from a bunch more of y’all – and I’m not even kidding.)

Mate Galic, the CTO/founder – and someone whose past life as an experimental electronic artist will be familiar to CDM readers – has also learned to speak corporate.

“We believe music creation products and services should be integrated in a more appealing, intuitive and cohesive way,” Mate Galic, CTO and President of Native Instruments, said in a statement. “We foresee an easily accessible music creation ecosystem that connects user centric design, with powerful technology and data, to further enable the music creators of today, and welcome the new creators of tomorrow.”

(Don’t worry, Mate and Daniel do talk like normal human beings outside of company press releases!)

Translation: they want to make stuff that works together, and it’ll use data. Also fair, though some concerns, Mate: part of what makes music technology beautiful is that the “ecosystem” doesn’t come from just one vendor, and some of it is intentionally left unintuitive and non-cohesive because people who make music find its anarchy appealing. You could also take the words above and wind up with a remix app that uploads to the cloud, a combination of Facebook and a DAW, or… well, almost anything.

So, they’ll be spending 50 million on a service that does something for people. Music people. Guess we have to wait and see. (Probably the one thing you can say is, “service” implies “subscription.” Everything is Netflix and Amazon Prime now, huh?)

The big challenge for the whole industry right now is: how do we reach more people without screwing up what we’ve already got? With new and emerging audiences, how do you work out what people want? How do you bridge what beginners want and need with what an existing (somewhat weird) specialized audience wants and needs?

For NI, of course, I’m sure all of us will watch to make sure that this supports, rather than distracts from, the tools we use regularly. (It’d certainly be nice to finally see a TRAKTOR overhaul, and I don’t know if there’s any connection of its fate to what we’re seeing here – very possibly not.)

I’ll be sure to share if I learn more, when the time is right. I am this company’s literal next-door neighbor.


Pioneer made a CDJ-shaped sampler – what does that mean for DJs?

Japanese giant Pioneer continue their march to expand from decks and mixers into live tools for DJs. The latest: a sampler in the form of the ubiquitous CDJ.

This isn’t Pioneer’s first sampler/production workstation. The TORAIZ SP-16 already staked out Pioneer’s territory there, with the same 4×4 grid and sampling functions. And the SP-16 is really great. I had one to test for a few weeks, and while these things are pricey and more limited in functionality than some of the competition from Elektron and Akai, they’re also terrifically simple to use, have great build quality, feature those lovely Dave Smith analog filters, and of course effortlessly sync to other Pioneer gear. So it’s easy for loyal owners of other gear to laugh off the pricey Pioneer entry. But its simplicity for certain markets is really an edge, not a demerit.

The DJS-1000 appears to pack essentially those same features into the form factor of a CDJ. And it lowers the price relative to the SP-16 (suggested retail is €1299 including VAT, so somewhere in the US$1,200 range or less).

Features, in a nutshell. Most are copied directly from the SP-16:

  • 16 step color-coded input keys, step sequencer
  • Touch strip
  • 7-inch full-color touchscreen, with three screens – Home, Sequence, Mixer
  • Live sampling from inputs
  • FX: echo, reverb, filter, etc. (digital filters – not, I think, the Dave Smith analog filter found on the SP-16)
  • MIDI clock sync, Beat Sync (with PRO DJ LINK for the latest CDJ/XDJ)

And it even loads project files right off the SP-16 – so you can make projects at home, then tote them to the club on a USB stick. But there are a couple of additions:

  • Tempo slider, nudge for turntable-style sync by hand
  • Form factor that sits alongside the CDJ-2000NXS2 and DJM-900NXS2
  • Support for DJS-TSP Project Creator – “easily create projects and SCENE files on a PC/Mac”

But… wait, would you actually want a sampler shaped like a CDJ?

There are a few benefits to borrowing the CDJ’s form factor. Of course, you elevate the controls to the same height as turntables and other CDJs, and tilt up the screen. (Those viewing angles were pretty good, but this is still easier to see in a booth. Oh, yeah – Pioneer’s slogan for this thing is even “elevate the standard,” which you can take two ways!)

That to me actually isn’t the most interesting feature, though. Adding a big tempo control means you can actually ignore that sync selling point and manually nudge drum patterns in tempo with other music. Now, that’s not to say that’s something people will do, but I’d love to see more manual controls for feel and tempo on machines. (Imagine tactile control over the components of a rhythm pattern, for instance. We’ve mostly envisioned a world in which our rhythmic machines are locked to one groove and left there; it doesn’t have to be that way.)

Giving DJs a familiar control layout (well, bits of it, anyway), is also a plus in certain markets.

But all of this is aimed as much at the club as it is at the DJ. Looking at this thing, it’s apparent what Pioneer are hoping: that clubs begin to buy samplers alongside decks. That enclosure, apart from saving some costs for Pioneer through standardization, is a big advert that screams “you bought CDJs, now buy this.”

That leap isn’t inevitable, though. The form factor that makes the DJS-1000 smart for a club doesn’t necessarily make sense in the studio. I might buy a square SP-16 for less money, but… not a DJS-1000, because it’s ungainly and big and completely absurd to travel with. A lot of DJs would buy a CDJ for their studio to practice on – but Pioneer doesn’t really make one that fits those DJs’ budgets. (Ironically, the DJs who could afford buying their own CDJs – the ones gigging all the time – have logged enough hours on the CDJ that I don’t know even one who has bought decks for themselves. I think they’re glad to have a vacation from the damned things.)

The fundamental question remains: will DJs actually start playing live or hybrid sets? The gamble Pioneer is making is “build it and they will come,” effectively.

I’ve tried to find out if the Toraiz range is having that impact. Certainly, some DJs are buying them. A lot of producers are, too – particularly the lovely AS-1 synth, which holds its own against competing synths so well that you can easily forget the Pioneer logo is even there.

But there’s still a big, big divide between live acts and DJs. Most producers playing live will want to arrange their own gear. And once you’re playing live, even if you decide to play a hybrid set, you’re more likely to want to augment the live set with CDJs than to switch to Pioneer for samplers. You wouldn’t buy a DJS-1000, probably, given the whole Elektron range is as affordable or cheaper – the Digitakt is half the price of this, does more, and is more portable.

But if Pioneer isn’t selling to you, but to clubs, then you can figure the strategy is this:

1. Get SP-16 owners to bring a USB stick and plug into the DJS-1000 they find in clubs – that’s cool.

2. Get DJs preparing sampled sets on computers, then bringing them on USB sticks. That’s huge.

3. Get some DJs who haven’t worked much with samplers to toy around with the ones they find appearing in booths – the gateway drug effect.

#3 is more unpredictable; #1 and #2 aren’t. And don’t underestimate the power of Pioneer’s massive sales and marketing operation, which does extensive outreach to clubs and artists. That “industry standard” thing didn’t just happen accidentally.

Pioneer hopes clubs will invest in something like this press photo, of course.

I don’t think this means Pioneer will become an industry standard in live gear. But it does help them to expand beyond just decks, and ironically could help vendors like Elektron who are more live focused.

The real question isn’t for Pioneer, then: it’s for Native Instruments and Ableton. Home and studio use still seem to benefit from computer-software combinations. But the competition in live use is increasingly standalone hardware. We’ll see if the two Berlin software giants’ bet that people will still want to work with computers was a smart one — or if it means missed opportunities for Maschine and Live/Push. (TRAKTOR, for its part, has clearly lost ground. I’d love to see a TRAKTOR 3 that worked as portable standalone hardware, a deck combo you could take anywhere, but I’m not so optimistic.)

But I fully expect some of these DJS-1000s to start showing up in the nicer booths around.


SoundCloud, now Vimeo of Sound, instead of YouTube of Sound?

SoundCloud’s do-or-die moment came Friday – and it seems it’s do, not die. The company now takes on new executives, and a new direction.

First, it’s important to understand just what happened yesterday. Despite some unhinged and misleading blog reports, the situation didn’t involve the site suddenly switching off – following the layoffs, the company said it had enough cash to survive through the end of the fourth quarter. That said, the concern was, without reassurances the company could last past that, SoundCloud could easily have slipped into a death spiral, with its value dropping and top talent fleeing a sinking ship.

What happened: New investment stepped in, with a whopping US$169.5 million, for SoundCloud’s biggest round ever (series F). That follows big past investments from Twitter, early venture funding, and debt financing last year.

This gives the company a new direction, some new leadership and leadership experience, and the stability to keep current talent in the building.

Under new management

What changes: Plenty. When you invest that much money, you can get some changes from the company to ensure you’re more likely to get your investment back.

  • New CEO: Kerry Trainor (formerly CEO of Vimeo)
  • New COO: Mike Weissman (formerly COO of Vimeo)
  • New board members: Trainor joins the board, alongside Fred Davis (a star investor and music attorney), and Joe Puthenveetil (also music-focused), each coming from Raine (the firm that did the deal).
  • A much lower valuation: In order to secure funding, SoundCloud adjusted what had been at one point a $700 million valuation to a pre-investment $150 million. That’s not much above its annual run rate, and it indicates how far they’ve fallen.
  • …but maybe we don’t do this runway thing any more. The good news – TechCrunch reports the company says it has a $100 million annual run-rate. This investment means they’re not in urgent need of cash. They’ve bought themselves time to genuinely become a money making business, instead of constantly needing to go back to investors for money. (“Dad??? Can I borrow $70 million?”)

What stays the same:

  • SoundCloud as you know it keeps running. (Meaning, if you aren’t terribly interested in the business story here, carry on uploading and forget about it!)
  • Eric Wahlforss stays on. The co-founder’s title is adjusted to “Chief Product Officer” instead of CTO, but it appears he’ll retain a hands-on role. That’s important, too, because no one knows the product – or how it’s used by musicians – better than Eric does. It’s easy to criticize the executive team, but if you’re a current user, this is good news. (Just bringing in some Vimeo people and dumping the people running the product would almost certainly have been bad for the service you use.)

Now, most headlines are focusing on the cash lifeline, and that’s absolutely vital. But this is a major talent injection, too. Fred Davis is one of the key figures in New York around music and tech, from his role as an attorney to his work as an investor. (He was known to float around hackdays, too.) Oh, yeah – he’s also the son of Clive Davis, who started NYU’s music business school. Puthenveetil also represents significant expertise in the area.

Kerry Trainor is about the single most experienced person you could find to lead SoundCloud – more so, in fact, than the executives who have steered the company before. His streaming experience, as SoundCloud points out in their press release, spans back 20 years. (They leave out the names, because kids don’t like AOL, Yahoo Music, or Launch Media any more, but experience matters.) And he is largely credited with making Vimeo a profitable company.

What’s the future of SoundCloud now?

For all the skepticism, Alex seems to have delivered on exactly the promises he’s been making in past weeks, vague as they may have seemed. SoundCloud does appear ready to re-focus on creators, and the financing means ongoing independence is a real possibility.

Whether it works or not, it’s tough to overstate what a significant shift in direction this represents. For years, people have casually referred to SoundCloud as the “YouTube of audio.” (Oddly, the phrase I first wrote when they started was a “Flickr of audio,” which, uh, dates that story. But it does also indicate creators, not consumers, were initially the focus, so I at least got that bit right.)

It seems SoundCloud aren’t just bringing on former Vimeo executives. They seem poised to follow Vimeo’s example.

We already know that endlessly expanding scale and more streaming is a disastrous business model. The issue is, if listeners aren’t paying but royalties are accruing, the more people listen, the more money you lose. Spotify is facing that now and may need a similar change in direction, and the entire music industry is caught up in this black hole. Companies like Google and Apple can absorb the losses if they choose; an independent company can’t.

So scale alone isn’t the answer. And just having more listeners doesn’t necessarily mean the kind of attention that gets you caring fans or lands you gigs.

Vimeo faced a similar problem, squeezed by YouTube and by Facebook’s own video push – each backed by big companies and revenue streams that the creator-focused, smaller company lacked.

What’s unique about Vimeo, under Kerry Trainor in particular, is that they found a way to compete by focusing on the creators uploading to the service rather than just the viewers watching it. While YouTube always tried to encourage uploads, its focus was on scale – and ultimately, the toolset was geared more for advertisers and watchers, and casual content creators, than for serious content makers.

Vimeo offers an alternative that serious uploaders like. Actual streaming quality is higher. The presentation is more focused on your content. There are powerful tools for controlling that presentation and collecting stats – if you’re willing to pay. And there’s not only greater intangible value to those serious uploaders, but greater tangible returns, too. It’s easier to sell your content – and, because there’s a collected community of pro users, easier to get audiences that support paying gigs.

Now, to do that in the face of YouTube’s scale, Vimeo had to make money. And that’s what Trainor did, by encouraging more of its creators to pay.

We already know SoundCloud’s plans to make listeners pay have fallen flat. So, as users have been clamoring for years, now is a chance to refocus on the creators.

I think anyone who knew Vimeo figured this was the best guess as the company’s new strategy the moment they saw Trainor and Weissman rumored to take over executive roles. And sure enough, in an exclusive talk with Billboard, Trainor says point blank that’s his strategy:

SoundCloud’s Pro and Pro Unlimited subscription services provide insights into which tracks are most popular and where. The Pro service, which costs $7 a month, provides basic stats such as play counts and likes, and lets users see plays by country, turn public comments on or off, and upload up to six hours of audio. The Unlimited offering, for a $15 monthly fee, lifts the cap on the amount of music that can be uploaded and provides more specific analytics.

Trainor hopes to increase the number of creators who pay to use SoundCloud Unlimited’s service by adding an even more robust creative toolkit.

Emphasis mine. And the reaction I’ve seen is telling: even a lot of die-hard SoundCloud enthusiasts in my early adopter social feed say they found reason to pay for Pro, but not Unlimited. Poor differentiation and stagnant offerings just gave little motivation.

That’s not to knock SoundCloud’s rocket growth. On the contrary, it’s pretty tough to argue against sharing your sound on a site that’s one of the Internet’s biggest, with one of the world’s most popular mobile apps alongside. But having now grown to a huge audience, SoundCloud needs to refresh its tools for creators.

Translating from video to audio isn’t going to be easy. Part of the reason SoundCloud presumably didn’t push as hard on creator subscriptions is, there’s no clear indication what would make musicians pay for them. Audio is simpler than video – easier to encode, easier to share. Serving video on your own server is a nightmare, but serving audio isn’t. And, sorry to be blunt, but then there’s the issue of whether music producers really earn enough to want to blow cash on expensive subscriptions. Compare a motion graphics firm or design agency using Vimeo, who could make back a couple hundred bucks in subscription fees in, literally, an hour of work.

Even beyond that, I’m not clear on what SoundCloud creators want from the service that they aren’t already getting. (Okay, Groups – but those probably aren’t coming back, and I don’t know that people would pay a subscription for them.) The toolchain outside the browser is already powerful and sophisticated, which has always made Web tools a bit less appealing – why use a browser-based mastering tool like Landr when you already have powerful mastering tools in your DAW, for instance? If you’ve invested enough money in gear and software to want to share a track to begin with, what will make you spend a few dollars a month for more?

That said, there’s clearly a passionate and motivated community of people making music. And note that the new talent at SoundCloud has music experience and interest as well as video. Trainor is evidently an avid guitarist (what, you’re not a fan of “Etro Anime,” his band?). He cut his teeth in tech in the area of music. (LAUNCH Media went from CD-ROM-taped-to-a-print-magazine to Internet radio offerings that look a lot like how we listen to music now.) And he’s currently on the board of Fender guitar.

Vimeo also had a long-standing interest in music and the music community in the company’s native New York City.

These are tough problems to solve. But I can think of few better people to tackle them. Basically, Alex and Eric not only saved their company for now, but seem to have gotten what they wanted in the process.

Also, it’s worth pointing out – the music business wants SoundCloud to live, not die. I think its death would be unequivocally bad for musicians and labels, with independent and international artists feeling the worst impact. And Fred Davis tells Billboard: “If I could show to you the number of people who have been calling us, expressing fear about it going away, you would be shocked.”

It’s still possible investors will look to sell, but I suspect with the valuation at its low point and the tech world in general losing interest in music’s money-losing propositions and legal mess, independence is probably the safe bet.

If SoundCloud can turn this around, it’ll be a great example of a tech company humbling itself and successfully changing course.

We’ll be watching – and once the new team settles in, hopefully we’ll get to talk to them.

Background:
SoundCloud saved by emergency funding as CEO steps aside [TechCrunch]

SoundCloud Secures Significant Investment Led by The Raine Group and Temasek [SoundCloud press release]

Exciting news and the future of SoundCloud [Alex on the SoundCloud blog]

The post SoundCloud, now Vimeo of Sound, instead of YouTube of Sound? appeared first on CDM Create Digital Music.

Export to hardware, virtual pedals – this could be the future of effects

If your computer and a stompbox had a love child, MOD Duo would be it – a virtual effects environment that can load anything. And now, it does Max/MSP, too.

MOD Devices’ MOD Duo began its life as a Kickstarter campaign. The idea – turn computer software into a robust piece of hardware – wasn’t itself so new. Past dedicated audio computer efforts have come and gone. But it is genuinely possible in this industry to succeed where others have failed, by getting your timing right, and executing better. And the MOD Duo is starting to look like it does just that.

What the MOD Duo gives you is essentially a virtualized pedalboard where you can add effects at will. Set up the effects you want on your computer screen (in a Web browser), and even add new ones by shopping for sounds in a store. But then, get the reliability and physical form factor of hardware, by uploading them to the MOD Duo hardware. You can add additional footswitches and pedals if you want additional control.

Watch how that works:

For end users, it can stop there. But DIYers can go deeper with this as an open box. Under the hood, it’s running LV2 plug-ins, an open, Linux-centered plug-in format. If you’re a developer, you can create your own effects. If you like tinkering with hardware, you can build your own controllers, using an Arduino shield they made especially for the job.

And then, this week, the folks at Cycling ’74 take us on a special tour of integration with Max/MSP. It represents something many software patchers have dreamed of for a long time. In short, you can “export” your patches to the hardware, and run them standalone without your computer.

This says a lot about the future, beyond just the MOD Duo. The technology that allows Max/MSP to support the MOD Duo is gen~ code, a more platform-agnostic, portable core inside Max. This hints at a future when Max runs in all sorts of places – not just mobile, but other hardware, too. And that future was of interest both to Cycling ’74 and the CEO of Ableton, as revealed in our interview with the two of them.

Even broader than that, though, this could be a way of looking at what electronic music looks like after the computer. A lot of people assume that ditching laptops means going backwards. And sure enough, there has been a renewed interest in instruments and interfaces that recall tech from the 70s and 80s. That’s great, but – it doesn’t have to stop there.

The truth is, form factors and physical interactions that worked well on dedicated hardware may start to have more of the openness, flexibility, intelligence, and broad sonic canvas that computers did. It means, basically, it’s not that you’re ditching your computer for a modular, a stompbox, or a keyboard. It’s that those things start to act more like your computer.

Anyway, why wait for that to happen? Here’s one way it can happen now.

Darwin Grosse has a great walk-through of the MOD Duo and how it works, followed by a guide to getting started with Max:

The MOD Duo Ecosystem (an introduction to the MOD Duo)

Content You Need: The MOD Duo Package (an intro to working with Max)


Here’s how to download your own music from SoundCloud, just in case

SoundCloud’s financial turmoil has prompted users to consider, what would happen if the service were switched off? Would you lose some of your own music?

Frankly, we all should have been thinking about that sooner.

The reality is, with any cloud service, you’re trusting someone else with your data, and your ability to get at that data is dependent on a single login. You might well be the failure point, if you lock yourself out of your own account or if someone else compromises it.

There’s almost never a scenario, then, where it makes sense to have something you care about in just one place, no matter how secure that place is. Redundancy neatly saves you from having to plan for every contingency.

Okay, so … yeah, if you are then nervous about some music you care about being on SoundCloud and aren’t sure if it’s in fact backed up someplace else, you really should go grab it.

Here’s one open source tool (hosted on GitHub, too) that downloads music.
http://downloader.soundcloud.ruud.ninja/

A more generalized tool, for downloading from any site that offers download links:
http://jdownloader.org/

(DownThemAll, the Firefox add-on, also springs to mind.)

This tool migrates your music to a new service – unattended – though I’m still testing it. (I do think backup, rather than migration, may be a better first step.)
https://www.orfium.com/
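If you’d rather script the grab yourself, here’s a minimal sketch in Python – assuming you’ve already assembled a plain text file of direct download URLs (the tools above can produce one); the file and folder names are hypothetical:

```python
# Minimal batch downloader sketch: fetch each URL listed in a text
# file into a local folder, skipping files you already have.
# "urls.txt" and "soundcloud_backup" are hypothetical names.
import os
import urllib.request
from urllib.parse import urlsplit

def filename_for(url):
    # Derive a file name from the URL path, with a fallback
    return os.path.basename(urlsplit(url).path) or "track.mp3"

def download_all(url_list_path="urls.txt", dest_dir="soundcloud_backup"):
    os.makedirs(dest_dir, exist_ok=True)
    with open(url_list_path) as f:
        urls = [line.strip() for line in f if line.strip()]
    saved = []
    for url in urls:
        dest = os.path.join(dest_dir, filename_for(url))
        if os.path.exists(dest):
            continue  # redundancy means never re-fetching what you have
        urllib.request.urlretrieve(url, dest)
        saved.append(dest)
    return saved
```

Nothing SoundCloud-specific there – it’s just a polite way to pull a list of files once, rather than hammering anyone’s servers.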

Could someone create a public mirror of the service? Yes, though – it wouldn’t be cheap. Jason Scott (of Internet Archive fame) tweets that it could cost up to $2 million, based on the amount of data:

(Anybody want to call Martin Shkreli? No?)
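For a sense of where a number like that comes from, the math is just data volume times unit cost. A toy sketch – every figure below is a hypothetical placeholder for illustration, not Jason Scott’s actual estimate:

```python
# Back-of-envelope mirror cost: total data volume x cost per TB.
# Both inputs are hypothetical placeholders for illustration only.
def mirror_cost_usd(total_tb, usd_per_tb):
    return total_tb * usd_per_tb

# e.g. 2,500 TB (2.5 petabytes) at $800/TB of redundant, served
# storage lands right at the $2 million mark:
estimate = mirror_cost_usd(2500, 800)  # 2000000
```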

My hope is that SoundCloud does survive independently. Any acquisition would likewise be crazy not to maintain users and content; that’s the whole unique value proposition of the service, and there’s still nothing else quite like it. (The fact that there’s nothing quite like it, though, may give you pause on a number of levels.)

My guess is that the number of CDM readers and creators is far from enough to overload a service built to stream to millions of users, so I feel reasonably safe endorsing this use. That said, of course, SoundClouders also read CDM, so they might choose to limit or slow API access. Let’s see.

My advice, though: do grab the stuff you hold dear. Put it on an easily accessible drive. And make sure the media folders on that drive also have an automated backup – I really like cloud backup services like CrashPlan and Backblaze (or, if you have a server, your own scripts). But the best backup plan is one that you set and forget, one you only have to think about when you need it, and one that will be there in that instance.
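If you go the roll-your-own route, set-and-forget can be as little as a one-way sync script on a schedule. A sketch with hypothetical paths – point it from your media folder at the backup drive and let cron (or Task Scheduler) do the remembering:

```python
# One-way mirror of a media folder onto a backup drive:
# copies new or changed files, leaves everything else alone.
# Paths are hypothetical; run from a scheduler for set-and-forget.
import os
import shutil

def sync(src_dir, dest_dir):
    copied = []
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        target_root = os.path.join(dest_dir, rel)
        os.makedirs(target_root, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dest = os.path.join(target_root, name)
            # Copy if missing on the backup, or if the source is newer
            if (not os.path.exists(dest)
                    or os.path.getmtime(src) > os.path.getmtime(dest)):
                shutil.copy2(src, dest)  # copy2 preserves timestamps
                copied.append(dest)
    return copied
```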

Let us know if you find a better workflow here.

Thanks to Tom Whitwell of Music thing for raising this and for the above open source tip.

I expect … this may generate some comments. Shoot.


With Japan’s latest Vocaloid characters, another song from the future

It’s a cyber-technological future you can live now: a plug-in using sophisticated samples and rules to sing like a Japanese pop star.

Yamaha has announced this week the newest voices for Vocaloid, their virtual singing software. This time, the characters are drawn from a (PS Vita) Sony video game property:

The main characters of the PS Vita games Utagumi 575 and Miracle Girls Festival, as well as the anime Go! Go! 575, Azuki Masaoka (voice actress Yuka Ohtsubo), have finally been made into VOCALOID Voice Banks!

“Finally.”

Here’s what those new characters sound like:

And the announcement:

Announcing the debut of two new female Japanese VOCALOID4 Voice Banks

The packs themselves run about 9000 Yen, or roughly 80 US Dollars.

Perhaps this is an excuse to step back and consider what this is about, again. (Well, I’m taking it as one.)

To the extent that pop music is always about making a human more than real, Japan embraces a hyperreal artificiality in their music culture, so it’s not surprising technology would follow. Even given that, it seems the success of Yamaha’s Vocaloid software caught the developers by surprise, as the tool earned a massive fanbase. And while extreme AutoTune effects have fallen out of favor in the west, it seems Japan hasn’t lost its appetite for this unique sound – nor the cult following of aficionados that has grown outside the country.

Vocaloid isn’t really robotic – it uses extensive, detailed samples of a real human singer – but the software is capable of pulling and stretching those samples in ways that defy the laws of human performance. That is, this is to singing as the drum machine is to drumming.

That said, if you go out and buy a conventional vocal sample library, the identities of the singers are relatively disguised. Not so, a Vocaloid sample bank. The fictional character is detailed down to her height in centimeters, her backstory … even her blood type. (Okay, if you know the blood type of a real pop star, that’s a little creepy – but somehow I can imagine fans of these fictional characters gladly donating blood if called upon to do so.)

Lest this all seem to be fantasy, equal attention is paid to the voice actors and their resumes.

And then there’s the software. Vocaloid is one of the most complex virtual instruments on the market. There’s specific integration with Cubase, obviously owing to Yamaha’s relationship to Steinberg, but also having to do with the level of editing required to get precise control over Vocaloid’s output. And it is uniquely Japanese: while Yamaha has attempted to ship western voices, Japanese users have told me the whole architecture of Vocaloid is tailored to the particular nuances of Japanese inflection and pitch. Vocaloid is musical because the Japanese language is musical in such a particular way.

All of this has given rise to a music subculture built around the software and vocal characters that live atop the platform. That naturally brings us to Hatsune Miku, a fictional singer personality for Vocaloid whose very name is based on the words for “future” and “sound.” She’s one of a number of characters that have grown out of Vocaloid, but has seen the greatest cultural impact both inside and outside Japan.

Of course, ponder that for a second: something that shipped as a sound library product has taken on an imagined life as a pop star. There’s not really any other precedent for that in the history of electronic music … so far. No one has done a spinoff webisode series about the Chorus 1 preset from the KORG M1. (Yet. Please. Make that happen. You know it needs to.)

Hatsune Miku has a fanbase. She’s done packed, projected virtual concerts, via the old Pepper’s Ghost illusion (don’t call it a hologram).

And you get things like this:

Though with Hatsune Miku alone (let alone Vocaloid generally), you can go down a long, long, long rabbit hole of YouTube videos showing extraordinary range of this phenomenon, as character and as instrumentation.

In a western-Japanese collaboration, LaTurbo Avedon, Laurel Halo, Darren Johnston, Mari Matsutoya and Martin Sulzer (and other contributors) built their own operetta/audiovisual performance around Hatsune Miku, premiered as a joint presentation of CTM Festival and Transmediale here in Berlin in 2016. (I had the fortune of sitting next to a cosplaying German math teacher, a grown man who had convincingly made himself a physical manifestation of her illustrated persona – she sat on the edge of her seat enraptured by the work.)

I was particularly struck by Laurel Halo’s adept composition for Hatsune Miku – in turns lyrical and angular, informed by singing idiom and riding imagined breath, but subtly exploiting the technology’s potential. Sprechstimme and prosody for robots. Of all the various CTM/Transmediale commissions, this is music I’d want to return to. And that speaks to possibilities yet unrealized in the age of the electronic voice. (Our whole field, indeed, owes its path to the vocoder, to Daisy Bell, to the projected vocal quality of a Theremin or the monophonic song of a Moog.)

“Still Be Here” mixed interviews and documentary footage with spectacle and song; some in the audience failed to appreciate that blend, seen before in works like the Steve Reich/Beryl Korot opera The Cave. And some Hatsune Miku fans on the Internet took offense at their character being used in a way removed from her usual context, even though the license attached to her character provides for reuse. But I think the music holds up – and I personally enjoy this pop deconstruction as much as I do the tunes racking up the YouTube hits. See what you think:

All of this makes me want to revisit the Vocaloid software – perhaps a parallel review with a Japanese colleague. (Let’s see who’s up for it.)

After all, there’s no more human expression than singing – and no more emotional connection to what a machine is than when it sings, too.

More on the software, with an explanation of how it works (and why you’d want it, or not):

https://www.vocaloid.com/en/vocal_synth/


Berghain, by the numbers: data on the relentless Berlin techno club

In the era of fake news and big data for corporations, there’s an obvious antidote: getting actual data for yourself.

So, it’s a given that too many words have been spilt over Berlin’s Berghain. But in trying to portray the club’s hype or mystique, I notice there’s rarely much discussion of its consistency. And to understand how techno – and electronic music and the fashions around it more broadly – are projected into the world, that consistency is key. If a club is pushing out long queues every Saturday and Sunday night (yes, Sunday), and if it is having the influence that Berghain does on bookings elsewhere, on musical aesthetics, and even on how people dress, then part of what you’re actually describing is consistency. These are all measures of repetition.

So, what are the actual numbers? Olle Holmberg, aka Moon Wheel, is a coder as well as a musician. Curiosity evidently led him to write a JavaScript app to crawl Berghain’s website – from late 2009 to the present.

You can check out that Google Doc. And of course someone could write a better script – or even try to do other analyses on other clubs.
Berghain — all sets 2009-2017 (data from berghain.de events pages) [berghain.de]
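Olle’s crawler is JavaScript, but the counting step itself is simple to sketch. Here’s the idea in a few lines of Python – assuming you’ve already scraped each event page into a list of artist names (the scraping and name normalization are the actual hard parts; the lineups below are made up):

```python
# Tally artist appearances across scraped event lineups.
# `lineups` is assumed pre-scraped: one list of artist names per event.
from collections import Counter

def top_artists(lineups, n=25):
    counts = Counter()
    for lineup in lineups:
        # Count each artist once per event, even if listed twice
        counts.update(set(lineup))
    return counts.most_common(n)

# Toy example, not real program data:
lineups = [
    ["Boris", "Ben Klock"],
    ["Boris", "Marcel Dettmann"],
    ["Boris"],
]
# top_artists(lineups, 1) → [('Boris', 3)]
```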

This isn’t revealing any secrets in the club. Quite the contrary: it’s taking public-facing information, and separating the reality from people’s perception.

Now, I’m not one to just say “hey, let’s post a story on Berghain to see if it works as clickbait.” I actually find the results interesting. One thing that particularly struck me about Berghain regulars was their tendency to swoon “oh my God, the lineup this weekend is amazing” – then go on to describe the residents playing on the program.

More analysis will require more work, but we can at least pull up the artists who play most often (and they do so by such a large margin that even minor bugs in the crawling/scripting won’t make so much difference).

The top 25 (from end of 2009, with some minor glitches possible as the program is crawled as plain text):

1. Boris 99
2. Sammy Dee 88
3. Norman Nodge 86
4. Zip 85
5. Marcel Dettmann 80
6. Fiedel 76
7. Ben Klock 75
8. nd_baumecker 73
9. Marcel Fengler 71
10. Len Faki 70
11. Steffi 68
12. Ryan Elliott 65
13. Tama Sumo 63
14. Nick Höppner 62
15. Margaret Dygas 58
16. Soundstream 49
17. Virginia 49
18. Answer Code Request 45
19. Dinky 42
20. Gerd Janson 41
21. Efdemin 40
22. Function 38
23. Kobosil 37
24. DVS1 35
25. Oliver Deutschmann 35

Major disclaimer: this is incomplete data. The opening years of the club are missing. Artists wanting to share anniversary dates, more complete data, or stories are of course welcome to.

Olle tells CDM that at least one or two people who have seen the numbers have already expressed interest in doing analysis on gender and measures of diversity.

I can at least eyeball these 25. In case you’re wondering, five out of those top twenty five are female, so we’re far from any gender parity even in one of the world’s more progressive big venues. The top of the list is also overwhelmingly white, although it’s also fairly German. (That says something about residents versus guests, of course – and about who is settling into Berlin for the long term. It’s not exclusively German. Dinky is from Santiago, Chile. DVS1 was born in Leningrad, USSR, but grew up in the USA. Boris cut his teeth in the scene with none other than Larry Levan in New York’s Paradise Garage.)

They’re also all there for a reason. The reason for the German representation is also a story about how the music scene in the country has grown up since the 90s, with many of these residents having made their mark in the labels and parties that helped define the scene since the fall of the Wall, whether Sammy Dee and the Perlon label or Ben Klock and Marcel Dettmann and the homegrown Ostgut label. These artists are German, but they tend to come from smaller towns in both east and west parts of the country.

Speaking of consistency and longevity and day jobs, Norman Nodge is even a lawyer.

So if there’s nothing surprising here, what is here is a metric of what is successfully unsurprising. (That also applies to the value many of these names have in booking. See also the Ostgut booking operation, who hilariously warn that they won’t offer table reservations. That’s hilarious because I’m sure someone is regularly writing and asking. I wonder where people imagine the tables are.)

If you scroll through the raw data, you’ll see more of the untold story of Berghain as the larger complex of event spaces and programs. As the Website publishes not only the club’s best-known two floors, Panorama Bar and the titular Berghain, but also Laboratory, Halle am Berghain, and Kantine am Berghain (the former canteen of the power station), including various special events, you’ll get all sorts of names. (Mine even pops up a couple of times through those weird loopholes, without me even having involved North Korean hackers.) In recent weeks, that also includes a more leftfield program at the club’s new Säule space.

But there’s a deeper message, and it’s one about consistency and repetition. Part of what allows us to get your attention in the press is to try to pass off something as new. But behind the scenes, the other thing that press, bookers, publicists, and clubs are all doing is priming you to see certain ideas and certain people as important. And that’s in fact about repetition – reinforcing name recognition and making ideas familiar.

So there’s something to that Sunday ritual. For better or for worse, if you look at the top names here, these are really the foundation of this Berghain effect.

This is, of course, just one club, even if a vital one. I think while numbers don’t tell a whole story, it’s great to have some actual data and do some real research. (And the data can be thought of as a first step, not a last.) So I hope, as with female:pressure‘s analysis of gender on festival lineups, we continue to gather data and use more than just our own limited perception to understand music scenes.

Google Spreadsheet

Oh yeah, and if anyone wants to crowd-source fitness tracker data to see how much you’re dancing, let us know!

Updated: In 2010, the club itself published more accurate statistics.

Of course, this article is completely boring to the resident DJs and anyone working for the club, as they have the numbers.

Berghain also archives their programs – uncommon in clubland – filled with art and photos, but also extensive curatorial commentary and sometimes even poetry and other tidbits.

On the 11th December 2010, they shared some of their own (far more accurate) in-house stats – at which point the total events (from DJs to concerts) had already numbered a whopping 4774.

http://berghain.de/media/flyer/pdf/berghain-flyer-2010-12.pdf

Based on those stats, Boris was again the winner – then having played his 101st set.
Marcel Dettmann: 84.
Ben Klock: 80.
Prosumer: 77.
Cassy: 73.

Those numbers also tell you the missing first years are really significant. (If I read them correctly, it also means Berghain is less about resident frequency than it once was, which would make some sense. But without the actual data set, that’s just a guess.)

Full details from the program (written in the usual, rather charming way – translated here from the German):

As our in-house statistician informs us, there had been a total of 4774 performances in the whole building up to and including December 11 – counting all DJ gigs, live acts, and concerts. So much for the overall tally; on to tonight. The uncontested front-runner among all the DJs playing our birthday (and, we assume, overall) is Boris. He plays his 101st set tonight – downstairs, no less. Tataaa! Hot on his heels are Marcel Dettmann with 84 and Ben Klock with 80 visits to the booth. Already in the passing lane, so to speak, the two play a back2back set in the Panorama Bar tonight. Prosumer, likewise upstairs, comes to 77 sets, Cassy to 73. From there the numbers drop off steeply. Disko returns for only his 16th set – he really has made himself scarce. Robert Hood, never one to split hairs, rounds off set number five with a house set in the Panorama Bar – and number six even follows right after, with a techno set in Berghain. The gigs so far by .tobias, Chez Damier and DVS1 can still be counted on two hands, but by this point even the last of us realizes that statistics are no way to party. Celebrate the parties as they come. So Art Department are with us for the very first time, and downstairs Shed gives the live premiere of his straight-ahead Equalized alias.


Apple announces that they’re not ready to announce new pro hardware

Apple today summoned a handful of tech reporters to a product lab, essentially to announce that … they were between announcements.

Apple’s unusual PR experiment today was to mix mea culpa and product teaser, in a drawn-out explanation of why their hardware wasn’t shipping. The result of this messaging technique: journalists in the room for the briefing dutifully recorded the agonizing details of how Apple sees its “pro” user base and how it prioritizes desktop functionality:

The Mac Pro is getting a major do-over [Mashable]
Apple pushes the reset button on the Mac Pro [TechCrunch]
The Mac Pro Lives [Daring Fireball, who at least added some more reflection]

Journalists not invited to the same briefing tended toward an angle more like this:
Apple admits the Mac Pro was a mess [The Verge]

There are two questions here, though, as I see it.

Question one: what’s a pro user, anyway?

It’s easy to dump on Apple here, but one thing I will say is that they’ve historically understood the first question better than any of their competition. Gruber was actually the only writer who seemed to pick up on Apple’s intention there. And, frankly, the results were telling. One big revelation (if an unsurprising one): most Mac users aren’t pro users. Defining pros as those who use apps for serious creation or software development at least once a week, Apple found only 30% of Mac users qualify. For more regular use, that number drops to 15%. And notebooks (the MacBook line) dominate both that pro market and the overall Mac user base, at 80% (I think that’s by revenue, not unit count).

Catering to slivers of that group can’t be easy. When users talk about “pros,” what they really mean is themselves, individually. And that market is full of endless variation.

CDM readers are routinely doing far more specialized things, like virtual reality experiments or live visuals or running 3D game engines onstage or programming robotic drum ensembles. That may sound extreme to even cite as an example, but remember that over the years Apple Computer (under Jobs but also under other CEOs) did sometimes refer to exactly those kinds of weird edge cases in, you know, expensive TV ads. In fact, today, you still see edge cases cited in iOS ads.

Question two: what hardware do you make for that user?

If pro users are by definition an edge case, and desktop users a subset of that, and advanced desktop users another slice, we’re talking about an ever-smaller sliver. It’s not totally clear what Apple sees as important to that group, actually – and it’s even murkier what they intend to do. Here’s what Apple did clearly say publicly, though it was more about what they aren’t doing than what they are:

What they aren’t doing:
They’re not shipping new iMacs until later this year.
They’re not shipping a new Mac Pro in 2017.
They’re not shipping a new dedicated display in 2017.
They’re not shipping a large-screen dedicated touchscreen or a product like the Surface Studio, and they say the Mac Pro user they’re targeting isn’t interested in that.

What they will be doing in the future:
There will be a new iMac this year, and it will cater to pro specs.
There will be some kind of ground-up redesign of the Mac Pro, and it will be “modular” (which I could interpret from context only as meaning there’s no integrated display).
There will be a display to go with it.

What they didn’t entirely rule out:
Federighi followed up ruling out touch for the Mac Pro user by mentioning a “two-prong desktop strategy with both iMac and Mac Pro.” (I wouldn’t interpret that as a promise of a touch iMac, but it did seem to leave the door open. Then again, he also was responding to the question of the Microsoft Surface Studio, which seems a lot like what a touch iMac would be.)

What they’re shipping right now:
There’s a new Mac Pro configuration. You won’t want it, though, as it only swaps a new CPU and GPU config for the existing model – so you’re still stuck without modern ports (Thunderbolt 3, USB-C). It’s also bloody expensive:

US$2,999 now buys you a 6-core Intel Xeon processor, dual AMD FirePro D500 GPUs and 16GB of memory. That’s £2,999.00 (UK)/ €3,399 (Germany).

US$3,999 gets you an 8-core processor and dual D700 GPUs. £3,899.00 (UK) / € 4,599.00 (Germany).

Each of those has 256GB of internal storage. It does not include a mouse, keyboard, or display. Memory, storage, and graphics are upgradeable options, but they’re expensive — the base model with 32GB of RAM and 1TB of internal storage will run you US$3,999. (Maximum is 64GB of RAM, 1TB of SSD.)

Those are middle-of-the-road CPU and GPU specs, too, given what’s now available in larger desktop form factors.

What did we learn?

Uh… nothing? Well, we learned that Apple isn’t eliminating the iMac or the Mac Pro. We just have no idea what they’ll look like.

Look, I’ll be honest: this is weird. Apple has a decades-long record, under multiple leadership teams, of letting shipping products do the talking rather than future products, and of focusing on user stories over specs. Today feels a bit like there was a transporter accident and we got a reverse-universe Apple that did the opposite.

The only thing missing was Tim Cook showing up with a beard.

Windows I think has some opportunities here – not least because Apple for some reason decided to make headline news of its own shortcomings rather than its strengths. In theory, the Windows PC ecosystem has always been better positioned to cater to specific edge cases through hardware variety, and things like music and motion qualify. In practice, though, it’s down to whoever delivers the best user experience and overall value.

If Windows continues to improve the OS experience and offer competitive hardware options, I don’t doubt that we’ll see some re-balancing of the OSes used by creative users.

This is nothing new; we’ve seen regular oscillations between platforms for decades. But I think the next months will be revealing; you compete with what you’re shipping, and PC makers keep shipping new stuff while Apple isn’t.

The post Apple announces that they’re not ready to announce new pro hardware appeared first on CDM Create Digital Music.

The first generation of CDs is already rotting and dying

Digital media is a double-edged sword. Digital data itself can be duplicated an unlimited number of times without any generational loss – meaning it can theoretically last forever. But digital storage on physical media is subject to failure – and that failure can render the data inaccessible. In other words, archivists (including you) have to transfer data before the media fails.

And we’re already entering an age when one of the most popular formats is reaching the point where failures become common.

A report by Tedium (republished by Motherboard) demonstrates one of the most alarming failures. Some media, evidently using faulty dyes, can fail in under ten years, via something unpleasantly dubbed “disc rot.”

The Hidden Phenomenon That Could Ruin Your Old Discs

At issue is the fact that optical media uses a combination of different chemicals and manufacturing processes. That means that while the data storage and basic manufacturing of a disc are standardized, the particulars of how it was fabricated aren’t. Particular makes and particular batches are subject to different aging characteristics. And with some of these failures occurring in less than ten years, we’re finding out just how susceptible discs are outside of lab test conditions.

In short, these flaws appear to be fairly widespread.

That just deals with one particular early failure, however. In general, CDs start to fail in significant numbers inside 20 years – on average, and not just counting the rot-prone flawed media.

What’s tough about this is that the lifespan can be really unpredictable. Before you dismiss the CD as a flawed storage format, many discs do reach a ridiculously long lifespan. The problem is really the variability.

To get an accurate picture, you need to study a big collection of different discs from a lot of different sources. Enter the United States of America’s Library of Congress, who have just that. In 2009, they did an exhaustive study of disc life in their collection – and found at least some discs will be usable in the 28th Century (seriously). The research is pretty scientific, but here’s an important conclusion:

The mean lifetime for the disc population as a whole was calculated to be 776 years for the discs used in this study. As demonstrated in the histograms in Figures 18 and 19, that lifetime could be less than 25 years for some discs, up to 500 years for others, and even longer.

COMPACT DISC SERVICE LIFE: AN INVESTIGATION OF THE ESTIMATED SERVICE LIFE OF PRERECORDED COMPACT DISCS (CD-ROM) [PDF, Preservation Directorate, Library of Congress]

Other research found failures around 20-25 years. That explains why we’re hearing about this problem round about now – the CD format was unveiled in 1982, and by the 90s we all had a variety of optical disc storage to deal with.

There are two takeaways – one is obviously duplicating vital information on a regular basis. The other, perhaps more important, is better storage. The Library of Congress found that even CDs at the low end of life expectancy (around 25 years) could improve that lifespan twenty-five times over if stored at 5 degrees C (41 degrees F) and 30% relative humidity. So, better put that vital collectors’ disc in the fridge, it seems. That means instead of your year-2000 disc failing in 2025, it fails in the 27th Century. (I hear we have warp-capable starships long before then.)
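If you want to check that arithmetic yourself, here’s a trivial sketch. The ~25-year low-end lifespan and the 25x cold-storage multiplier are the figures cited from the Library of Congress study above; the function is just illustrative, not a predictive model of disc failure:

```python
# Back-of-the-envelope storage math: baseline lifespan times an
# improvement factor for cold, dry storage (5 degrees C / 30% RH).

def projected_failure_year(burn_year, baseline_life_years, improvement_factor=1):
    """Year a disc would be expected to fail under the given assumptions."""
    return burn_year + baseline_life_years * improvement_factor

# A year-2000 disc at the low end, in ordinary conditions:
print(projected_failure_year(2000, 25))        # 2025

# The same disc in cold, dry storage (25x improvement):
print(projected_failure_year(2000, 25, 25))    # 2625 - the 27th century
```

Of course, real failure curves are distributions, not fixed dates – which is exactly why the variability is the problem.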

But anyone using discs for backup and storage on their own should take this even more seriously, because numerous studies find that writeable CD media – as we purchased with optical drives in the 90s – are even more susceptible to failure.

There are many other issues around CDs, including scratch and wear. See this nice overview, with some do’s and don’ts:

CD and DVD Lifetime and Maintenance [wow, 2007 Blogger!]

Or more:
CDs Are Not Forever: The Truth About CD/DVD Longevity, “Mold” & “Rot” [makeuseof]

I’ve seen some people comment that this is a reason to use vinyl. But that misses the point. For music, analog storage media are still at a disadvantage: they also suffer from physical degradation, and reasonably quickly. Among digital media, hard drive failures are even more frequent than CD failures (think under three years in many cases), and network-based storage with backups more or less eliminates the problem of aging generally, in that data is always kept in at least two places.

The failure of CDs seems to be more a case of marketing getting divorced from science. We’re never free of the constraints of the physical world. As any archivist will tell you, we simply have to adapt – from duplication to climate control.

But I’d say generally, with network-connected storage and automation, digital preservation is now better than ever. The failure point is humans; if you think about this stuff, you can solve it.

The post The first generation of CDs is already rotting and dying appeared first on CDM Create Digital Music.