Not much need be said about Apple’s elimination of the headphone jack. Yes, wired headphones remain a superior solution for some applications. But because Apple is shipping a Lightning-to-audio adapter in the box with the iPhone, this is a non-issue. After all, you’ve already kept track of 1/4″ to 1/8″ minijack adapters for all your studio headphones for years. (Okay, to be fair, by “keep track of” in my case I generally mean “lose,” but, uh… wait, what were we talking about again?)
There are certainly reasons for Apple to do this. The innards of an iPhone are crammed enough that something as seemingly innocuous as a minijack port takes up a lot of space. The port also has implications for Apple’s work on reliability and resistance to water and dust; it’s a point of failure. So you can see why they’ve done this (even if in my experience I’ve had more Lightning port issues and no headphone jack issues).
The more relevant issue is whether this impacts the way the device is used for audio – which for some of us is a big reason to buy the iPhone. I did a quick survey of the Lightning audio accessories I use around the CDM office, and all tend to include their own headphone jack for monitoring. That means blocking the Lightning port with the headphone adapter isn’t an issue, because you won’t need to use it.
Indeed, the bigger problem with mobile gadgets is still that very few accessories support external power. Once the Lightning port is occupied, you can’t simultaneously charge the device. That does mean you can’t listen to music, for example, while charging the iPhone, which I have done occasionally and might find annoying.
But that shouldn’t be cause to overstate the significance of the issue. In fact, there are more significant issues regarding Apple and the Lightning port about which even journalists like me can’t really get information. Apple-certified accessories are subject to an arcane process of review by the company. Unlike USB-C, that process is completely non-transparent. While they can’t go on the record, I have heard from accessory makers that they sometimes haven’t been able to do things they know we as serious audio producers might like, because Apple wanted something else. These conversations are protected by legal non-disclosure agreements, though, meaning we can’t even talk about them.
But I still say — move on, nothing to see here. If you think Apple’s headphones are poor for listening and overpriced (and you’re right, I think), you don’t have to buy them. There are some surprisingly good wireless headphones for times when you want to move around, or you can plug in better headphones as before to listen to that new track on the train.
Eliminating headphone jacks is something phones may generally start to do because of radical miniaturization and waterproofing. It doesn’t say much, really, about headphones in general – consumer and studio headphones are already very different categories.
No, what really matters, actually, on any operating system – Windows, Android, iOS, Mac – is generally what you can’t see, in the form of subtle changes to the parts of the OS that keep audio glitch-free. Or not.
Okay, enough teasing already. Behringer has a 12-voice polyphonic synth called the DeepMind. And now let’s talk about exactly what to expect, in one place.
The Behringer DeepMind 12 teaser campaign has gotten maddening – a trickle of videos and blog posts has introduced individual details one at a time. Yes, that even means getting as specific as just revealing the filter or going into some detail about why the name was chosen. Uli Behringer took over online synth news in July a bit like Donald Trump has dominated political news in the US – another day, another lead story.
But here’s the thing: the DeepMind 12 is impressive.
Pricing and availability
It’s a 12-voice analog synthesizer with a friendly looking front panel. And now, at last, we know the key detail – price. Behringer is projecting a price of US$999.99 retail, and shipping by the end of the year, but also projects heavy backorders.
There’s actually something I find a bit peculiar about that post. Uli Behringer says that pricing of their products is set only by a “slim” margin above component cost, and implies competitors are doing something else. Since he doesn’t talk about volume (but only “demand”), the implication is that competitors are either gouging customers on margin or making stuff people don’t want. I think that’s disingenuous, unless I’m missing something. What I would say about Behringer is that they do seem to have mastered higher volumes, and own a lot of their own manufacturing and supply chain. Those things will keep costs down, absolutely.
We’ve also gotten a lot of details here. Apparently the DM12 began with the Juno-106 as inspiration, but experienced some significant feature creep.
And it’s got a lot of partners, too – including engineers from UK console maker MIDAS. Inside:
12 voices (the big feature, with various modes)
Four effects engines, powered by TC ELECTRONIC & KLARK TEKNIK
A selectable 2- or 4-pole low-pass filter per voice, plus a shared high-pass filter, and modulation / envelope depth / key tracking
Gearnews.com has a nice write-up, too. I think it’s spot on that there may be a companion iPad app – and they note as well that you might want to be in Leeds to check it out in person on 20 August.
Some folks have already gotten up close and personal with the DM12. Starting with our friends and German neighbors at Amazona:
SonicState also got an exclusive, visiting Studio Stekker:
I don’t remember the last time a synth keyboard has proven as divisive as the DM12 has this summer. One reason is the sheer amount of coverage it’s gotten. This has been a slow summer news-wise even by electronic musical instrument standards. So the Behringer has been this summer’s one big story. Other makers may be readying fall releases, but they’re refraining from teasing anything.
And into that vacuum has come a teaser campaign unlike any I’ve seen — multiple videos, coordinated leaks, flooding forums with posts, and working with multiple press outlets on exclusives. It’s easy to be dismissive of that, but any negative backlash there I think is outweighed by the extent to which this has built buzz. Expect at least one rival to copy some of the technique.
But then there’s the fact that this is Behringer. That has divided people on its own. Some are quick to defend low cost as a merit. Some have had positive experiences with Behringer products. Some really like the look of the DM12. And indeed, I wouldn’t dismiss the DM12 out of hand, not on quality of design or reliability. The synthesizer has behind it a very experienced and talented team. We’ll simply have to wait for the synth to become available to give it a fair review.
On the other hand, Behringer the brand has managed to accrue some bad karma. Let’s leave aside any question of quality; that can be hard to measure unless you’ve done extensive testing. (Retailers are better at this than press. Ask your local retailer what they see returned and what they don’t.) I’ve seen some people taking to social media to say they’ll never touch the DeepMind simply because of Behringer’s involvement in past questions of originality, with cases involving Mackie, Roland (BOSS), and Line 6. The DeepMind, by contrast, is clearly original, so this is really down to whether you think they are still answerable to past sins. Behringer has also left behind some disgruntled former employees, which you can review via glassdoor.de.
My sense is the real test is still the instrument. That’s the product of a company and its management, and originality and quality will show through – or not – depending on how well it’s executed.
And I’ll say this – the DeepMind 12 has already made 2016 more interesting. So, as summer comes to a close, here’s looking forward to finding out how the Behringer polysynth stands up once it’s available – and to whatever else we see in the increasingly busy synth landscape for the rest of the year.
Somewhere – tonight, even – some unknown producer is going to make some brilliant new track using software. (Seriously, this is the world we live in.) And when they do, odds are they might well turn to a popular synth like breakout-hit Serum. The problem is this: someone getting started in producing is probably unwilling or unable to shell out US$189 for a single software instrument. So that individual is likely to pirate the software.
We’ve known this for some time; what we haven’t had is much of a solution. And just how prevalent is piracy in our industry? Well, I’ve talked to plenty of people off-the-record in the software development industry who say they’ve done it (like the people whose bills are paid by software fees). Cakewalk founder and original developer Greg Hendershott has talked on the record about trading floppies.
And then there’s Serum. Plenty of high-profile producers have been caught using pirated plugs. But this probably takes the cake – someone found Kanye West with a browser tab opened to Serum on a torrent site, prompting swift condemnation by deadmau5 himself (plus a hilarious offer to set up a Kickstarter campaign to help him buy a license): Kanye West caught visiting Pirate Bay—possibly to download music software [Ars Technica]
Big suites of software have already moved to a monthly paid subscription model – think Microsoft’s Office 365 or Adobe’s Creative Cloud. (That was easier, of course, in that those markets each have one dominant vendor.)
In music, so far Gobbler has offered subscription plans, with names like Eventide and Slate. This also offers a single unified back end with support for PACE copy protection. (Before that sends chills down your spine, “PACE” no longer necessarily has anything to do with physical dongles – more often these days you’ll just store your authenticated license online.)
Online music platform Splice offers something a bit different: pay-to-own. This way, instead of paying a subscription forever, you will eventually pay off the cost of the plug-in.
In fact, this is even better than a normal payment plan, in that you can switch off your subscription on months you don’t need it. So if you’re only going to get around to using Serum in September and October, but not November and December, you can opt not to pay for those months – then switch on the subscription again in January.
You can start out with a 3-day free trial, too, to see if you like the software. Either way, whenever the subscription is active, you have full access to the software. And after the equivalent number of months, you will have successfully bought the software.
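The arithmetic of pay-to-own is simple enough to sketch. Assuming a hypothetical $9.99/month plan against Serum’s $189 list price (the monthly figure is my assumption, not a published Splice price), you can work out how many active months it takes to own the plug-in – and remember, paused months simply don’t count:

```python
import math

def months_to_own(price, monthly_fee):
    """Active subscription months needed to pay off a plug-in.

    Paused months don't count toward the total -- you only pay,
    and accrue credit, in months the subscription is switched on.
    """
    return math.ceil(price / monthly_fee)

# Serum lists at $189; with an assumed $9.99/month plan:
print(months_to_own(189, 9.99))  # 19 active months, paused months free
```

In other words, under those assumptions you’d own the synth after roughly a year and a half of active use, however long that stretches across the calendar.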
Splice are launching this service now with Steve Duda’s Serum plug-in, and say it’s the first time this has ever been offered in music software. (I think that’s largely accurate, at least in this form.)
Serum is actually a significant choice of launch instrument.
There are a lot of software instruments out there, and many of them really terrific, but not so many hits. Serum is something special. Its production lineage is significant – creator Steve Duda is a rare electronic music genius and EDM production guru, collaborator with deadmau5 and developer through Xfer Records of a number of terrific plug-ins to boot.
Serum accordingly feels like a truly modern take on the software wavetable instrument, complete with loads of wavetable morphing and modulation features and built-in effects. And while the other instrument looming large in EDM production, Native Instruments’ Massive, hasn’t seen much in the way of updates since its introduction, Serum has a fresher take on how visual feedback and workflow could look in the interface. (That’s not a dig at Massive, necessarily — on the contrary, given Massive’s impact on EDM in particular, it’s remarkable that Serum has been one major instrument to successfully rival it.)
And as testament to the instrument’s online following, you’ll find loads of YouTube content on it. Here is some to get you going:
Probably the best is this series of tutorials with Steve himself (with nice insights into production whether or not you use Serum):
So, maybe Serum for ten bucks a month has you sold on the idea, and you don’t need much else.
But for pay-to-own to work as a model with Splice as provider, that online platform will have to do some legwork both to attract developers and to make users see the advantage of tying software payments to their service in particular. Otherwise, we could see still more fragmentation – with every developer offering their own plans separately, rather than showing up in a unified, App Store-style market.
Splice does have a case here. The service’s features effectively cover all bases. Their service backs up your projects. And it’s version control for yourself. And it’s a means to collaborating with others (with a Web interface that shows you what collaborators are up to).
Splice is also a community, with people not only collaborating with one another, but sharing stems and songs. And it’s a platform for finding loops, samples, and sound content. And it’s a store for plug-ins.
Now, that’s a lot of different stories to explain to people. On the other hand, Splice’s angle on putting all this together is summed up in one word: data. With people uploading actual project data, they can see plug-ins that are being used, which in turn lets them potentially offer a storefront full of plug-ins – and maybe rent-to-own plans – based on what people actually like.
For now, though, I think this may be the simplest next step: offer one really good, really popular plug-in, from an independent developer (who can’t necessarily roll their own plans with the same ease). And then see how it goes.
2016 is yet another year seeing big changes in the music streaming landscape. But whatever your feelings about them, there are reasons to closely watch – and even root for – SoundCloud. Of the major players, SoundCloud is the only service that lets you directly upload your own music primarily for the purpose of streaming, with playlists and other features.
Well, lately, SoundCloud has continued to change – the service taketh away, the service giveth.
New revenue could come to you
Let’s deal with licensing only in brief. Suffice to say, there’s a lot of action there. I’m surprised in some ways that producers view these changes as universally negative or as some kind of plot by SoundCloud to make more money. The point is, in our current legal regime, copyright owners get to make the rules – that could well be your own label and/or publisher. (Indeed, some of the outrage I’ve seen from producers about an inability to upload their own music suggests that they haven’t read their own contracts.)
Country by country, SoundCloud is adding subscription services and advertising. Now, while anti-capitalist sorts may immediately assume this is bad, there are two reasons it might not be. For one, it pays to keep the lights on at a service you presumably like. (If you don’t care about it at all, feel free to ignore all of this, frankly! But if you like it, I would think you wouldn’t want it to run out of cash.) Second, I’ve talked to royalty collecting organizations, and in each country where SoundCloud has rolled out the new revenue generating services, it has also given some of that share to copyright owners – that’s you, very possibly, provided you have membership in an organization like ASCAP, BMI, SESAC, PRS, GEMA, or the like.
That’s not to let SoundCloud off the hook – they need to make sure these payments are fair, that they cover indies as well as majors, and that people understand how they work. And in some cases, SoundCloud is simply broken – if it falsely identifies music or if SoundCloud support isn’t responsive to customers, that’s on SoundCloud.
But let’s focus purely on functionality for the purposes of this article.
The SoundCloud angle
Here’s the challenge: SoundCloud has to compete with one electronics giant (Apple) and two ubiquitous services everyone is already using (YouTube, Spotify). There’s a lot of overlap in catalogs, too. SoundCloud has to be something different.
(Bandcamp is great, too, but I don’t view them as comparable – Bandcamp is the store over which you have the most control; SoundCloud is the streaming library. Bandcamp doesn’t have the playlist or discovery features SoundCloud does, period.)
What we’ve seen from SoundCloud is that they have a few ideas of how to do this. And the basic idea has been the same now for I think at least a couple of years – as I’ve understood it from them, and in terms of what I’ve seen them do.
1. Be better (or at least more personal) when it comes to discovery and suggestion, to make finding music you like easier.
2. Make use of music and sound content uploaded regularly from users – not just the big album releases, but sound content in general.
3. Build a larger audience of listeners, and use that to entice creators.
4. Integrate with mobile apps, both for creation and listening.
There have been some casualties in that evolution, particularly related to item #3. Here’s the thing: there are things SoundCloud does for you as a creator, and things SoundCloud does for listeners in general. Now, ideally, listener benefits will benefit you, creators – because you want your music to be heard by as many people as possible (by anyone, for that matter).
But that may still make you (and me) understandably unhappy if functionality you want disappears. For example, SoundCloud lagged especially when it came to mobile app tools for creators, and while that situation has gotten better over the course of this year, there’s still more to do. Also, I think creators rightfully can feel the functionality of the service is shrinking rather than growing for them, as SoundCloud streamlines the platform.
Let’s look at some recent functionality changes. First, the “taketh away” bit:
SoundCloud is taking away groups.
On paper, groups sound terrific. They let arbitrary numbers of users join a collective, share tracks from their own uploads, and gather together tracks from others. There are moderation features, and discussion features.
I heard from several group moderators who got the following news late last month:
We’re constantly looking for ways to make it easier for creators to share their work and connect with new fans. As well as adding new features and updates, we review existing features to see if they’re still beneficial to the community.
As we dug into the best ways for curators to connect with artists and fans, we found that Groups aren’t working as well as reposts, and curated playlists.
With that in mind, we’ve decided to phase out Groups on Monday, August 22nd to make room for future updates. Until then, you can collect, like or repost the content you would like to save, and connect with your fellow Group members.
As a Group moderator, we understand the following you’ve built by moderating submissions to your Groups — we suggest to keep that following going by creating a profile to curate. You can use Reposts and Playlists to share suitable tracks, and accept submissions via Messages.
We’d love to hear your thoughts on how we can continue to improve your experience on SoundCloud. Send your ideas and feedback by replying directly to this email.
The letter from SoundCloud already contains an acknowledgment that something is being lost here. The simple fact of the matter is, Groups are the only way on SoundCloud to allow people to freely add to a playlist. They’re also the only means of hosting a discussion on the site.
On the other hand, that also suggests why you might want to kill groups. Any playlist with open submissions is an invitation to spam. And while native discussion is a nice feature, it’s not so well integrated with other functionality on the site – and duplicates many, many other communication platforms elsewhere. (Hell, you could use an IRC channel if you really wanted to, kids.)
One of the best uses of Groups I’ve seen is moving through the transition without issue, and that’s the amazing collaborative Junto music- and sound-making group hosted by Disquiet and Marc Weidenbaum. Marc had already cleverly moved discussion to a well-organized group on chat platform Slack, and says he’ll simply adjust the workflow to playlists to account for the change. If this matters to you, it’s well worth reading his whole post:
Marc also shared this on my Facebook page (in a public discussion):
The Disquiet Junto (238 weeks straight, 5700 extant tracks, 650 active accounts, 2000+ members, 1000+ email-list subscribers, 100+ members of the Slack, really nice write-up recently in The Wire) was founded in January 2012 on the Groups interface. I was disappointed when the Discussions tab closed down, but then again there was a major upgrade when Playlists were able to collate tracks outside your own account, and in many ways Playlists are more central now to the Junto than the Groups functionality is. (Also, Reposts were a nice addition, though sometimes I’d like the option to mute them.) I wrote a bit about my thoughts this morning at the URL below. Short version: I think this is gonna especially hit hard those Groups that are focused on particular hardware or software. The Junto, I think less so. With the Junto, as long as folks can let me know they’ve uploaded a track, I can make a playlist of the project. I’ll learn as we proceed up to and after the August 22 transition.
Dylan Davis, meanwhile, moderator of a group dedicated to acid patterns (which can be reproduced on 303 hardware or emulations), had a less optimistic take. He describes groups as a “shared space” lost with the change to the service.
Still, while this may seem an anti-creator decision, prolonged discussion with people using groups found that by and large they were able to migrate to other solutions, including on SoundCloud.
Which brings us to the “giveth” territory.
You can add albums now
Making an “album” on SoundCloud is dead simple. You just take a playlist, then tick a box saying it’s an album (and enter release date and any other relevant metadata).
It’s small, but hugely useful. I know a lot of people have been confused by my own SoundCloud site, because they couldn’t quickly separate random sound content from actual album releases – meaning they couldn’t get a feel for who I was as a musician. (And I think I’m not alone.)
Also, it’s worth comparing just how easy this is relative to releasing music on Spotify or Apple Music. Only YouTube really compares, and that’s itself a bit absurd – it’s a video site, and not suited to the job. (Again, Bandcamp is great, but Bandcamp lacks playlists, or the ability to release audio that isn’t an album.)
Here’s how it works:
This also makes a difference when listening on mobile, because it’s easier to find album releases. And that’s a necessary prerequisite if people start to use SoundCloud in place of services like Spotify and Apple Music, to listen to actual releases and not just random mixes and whatnot.
That’s obviously partly there for majors, but you can use this, too. There’s no reason you can’t make your own releases and share this way. And there’s still the option to make a ‘buy’ link for downloads, and send people to Bandcamp or Beatport or the like.
Okay, to be fair – sometimes, recommendations are predictably awful. But give SoundCloud the right input in the form of a track or mix or album with particularly good keywords, and it can send you down a rabbit hole of discovering great new music. While I love the ability to follow labels and artists on Beatport and Bandcamp, and the ability to see what friends are buying (or getting via free download codes) on Bandcamp’s feed, I haven’t found anything quite as nice as SoundCloud’s recommendation engine. For one thing, it’s sometimes uncannily good at making interesting connections between artists. For another, it’ll find really obscure stuff, because of the nature of content on the service.
Plus once you’re there, you can actually favorite tracks and send feedback, rather than just disappear into some metric.
The key to this working the way it does is also the way SoundCloud handles input. Spotify (using I believe a fair amount of data from The Echo Nest) is fairly clever about understanding how music itself is related. But what SoundCloud does is to analyze your plays and (most importantly) likes, to try to produce connections you’ll actually enjoy. And the company says the more you use the service, the better this machine learning gets.
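SoundCloud’s actual system is proprietary, but the basic idea – recommending from likes rather than from audio analysis – can be illustrated with a toy collaborative-filtering sketch. Everything here (the users, the track names, the Jaccard scoring) is invented for illustration; it is not SoundCloud’s algorithm:

```python
# Toy "likes" data: user -> set of liked track IDs
likes = {
    "ana":  {"acid1", "ambient1", "techno1"},
    "ben":  {"acid1", "techno1", "techno2"},
    "cleo": {"ambient1", "ambient2"},
}

def jaccard(a, b):
    """Similarity of two taste profiles: overlap divided by total."""
    return len(a & b) / len(a | b)

def recommend(user, likes, top_n=3):
    """Suggest tracks liked by similar users, weighted by taste overlap."""
    mine = likes[user]
    scores = {}
    for other, theirs in likes.items():
        if other == user:
            continue
        sim = jaccard(mine, theirs)
        for track in theirs - mine:  # only tracks the user hasn't liked yet
            scores[track] = scores.get(track, 0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("ana", likes))  # ['techno2', 'ambient2']
```

The point of the sketch is the shape of the data: the more likes a user contributes, the better the overlap signal gets – which matches SoundCloud’s claim that the machine learning improves with use.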
SoundCloud has been putting this recommendation engine forward on both mobile and desktop. Here are some examples.
There’s now a Discover tab on the browser app, and a Suggested Tracks section alongside playbacks, plus equivalent sections on the iOS and Android mobile apps accessed via a magnifying glass. The nomenclature is confusing (Discover and Suggested Tracks give you the same choices via a different interface), but the feature is nice.
This supplements something you’ve already no doubt discovered, which is unexpected tracks playing after you finish playing a track you’ve chosen.
In July, we saw new recommendation functionality rolled into Stations (as seen on various services, most famously on Pandora back in the day). While earlier in 2016 Stations could be launched from a track, now they can be launched from an artist profile – though only in the mobile app, not the browser.
This appears to be a superset of what Suggested Tracks offers. As SoundCloud explains the difference between the two, “Stations serve a longer queue of songs that are a mixture of similar, new, and popular tracks related to the track or artist you started the Station from, for an experience closer to listening to the radio.”
This is licensing-related, too, in that a paid SoundCloud Go subscription expands the number of tracks available.
And you can combine that with the Collections feature already available.
I’ll be honest – I’m rooting for SoundCloud. I’m biased. And I don’t mean I’m biased toward SoundCloud the company, or like the color orange. The thing is, at the moment, there’s nothing with this size catalog, this size audience, this ability for a bedroom producer uploading their first ever track to live shoulder-to-shoulder with a Warner Music artist, this particular set of features – any of it. I’ve seen other tools, and while some of them are compelling, none does quite the same thing. That means if SoundCloud were to meet some demise or find itself under new ownership or leadership, we could really lose something.
That said, you’d do well to keep aware of other options, because there’s no guarantee any private service is going to last forever in precisely the way you’d like.
For now, though, I’ll say this: looking at my SoundCloud collection and the work I’ve been able to discover via the service, I’ve had a uniquely personal experience with the site that has let me get to know lots of artists around the world.
As in other industries, the UK referendum to leave Europe has sent shockwaves through the music community. My friends at Das Filter, a superb German-language online magazine about music and culture, wanted to respond. And so they invited a number of us to talk last week.
I’ve found myself awkwardly running my mouth about UK politics, which, quite frankly, is not something I am in any way qualified to do, in the way that I would be able to talk about things on the American side. So, British friends – accept my advance apologies, please, and I’m keen to hear your opinion.
What we are qualified to discuss, though, are two things: one, emotions (which was a clever way to open this discussion), and two, specific issues around immigration, migrant labor, and music.
I don’t want to suggest unanimity on this issue. I imagine there are CDM readers who voted Leave – and I do appreciate that there’s mistrust of the EU and the way it’s run, including some criticism which indeed ought to be aired. I would also caution that the Brexit is fundamentally difficult to discuss because the leave referendum is by design unclear on substance. Leave backers have been fragmented or have sent conflicting messages on what they’re asking for. The issues that matter most to us in the music community – like labor movement inside Europe – are very much undecided, even once the UK triggers its departure from the EU via Article 50.
But some of these issues are supremely relevant to CDM – as we discuss here, the entire electronic music community is interwoven with Europe and its institutions. And it’s even directly relevant to CDM itself – I am personally a (business) immigrant to Germany and live in the European Union, the location in Berlin is made meaningful by a lot of elements of European integration, and we sell hardware products (MeeBlip) in the European market.
The whole audio is here if you’re so inclined. But in this panel and other discussions, some particular themes have come out.
There’s cause for deep concern about racism, misunderstanding, and fear. This is the most important point. Whatever the merits or demerits of UK membership in the EU, there’s reason to be concerned about the rise of racial fears in Europe – not only the alarming rise of right wing extremism, but also the mainstreaming of racial prejudice. And this is something the music community ought to take personally. I would be nowhere personally without my friends and collaborators from Poland, from Romania, from the Middle East, and the list goes on. It’s absolutely heartbreaking to hear stories of racial bigotry or see some in the UK press and political campaigns target or caricature these groups. Ultimately, this isn’t a “Leave” issue or even a European one – confronting fear and hate, understanding where its origins lie and how to combat it, is a task for all of us.
Immigration, freedom of movement, and labor mobility are essential to music and music tech. Part of what makes Europe a dynamic place to live is the ability for those with European passports to live and work anywhere in the EU. It’s not clear whether or not the Brexit will exact a cost in this area, but in the meantime, it has created some real uncertainty – many Leave campaigners specifically criticized migrant labor from Europe, and those rights are back on the negotiating table.
There’s been a good deal of discussion in the music press about DJs traveling around and so on. But it’s also worth noting that there’s specific value for those of us working in music technology. This time last summer, I was on a panel with chief executives from both SoundCloud and Native Instruments as they praised the rich ability to recruit staff here in Berlin. NI had an official statement in support of the Remain vote speaking to this issue, but more than that, I also saw personal testimonies from various individuals, unofficially, talking about this. I’ve also had discussions with UK-based makers.
The talent pool from around Europe is part of the reason that industry exists in Berlin – and even for those of us who don’t have European passports, it has made it more appealing to live and work in Europe. And if you think of technology specifically, I believe it’s even more important to assemble diverse teams. See the previous point. We all know that diversity and immigration are an investment in the economy. We’re not talking some sort of fevered neo-liberal banking dream – we’re talking about building actual jobs and making real things.
European integration has real benefits for small business – including music manufacturers. Europe is home to brands like Propellerhead, Ableton, Focusrite/Novation, Steinberg… the list goes on. It’s also a place where you can build new, small manufacturers, and rely on a European supply chain. So much has focused on the single currency, and while I know essentially nothing about macroeconomics or how currencies really operate, I do appreciate the criticism there. But I can say that it is meaningful for small gear makers that they can hire from around Europe without worrying about visas, move supplies without worrying about import tariffs, and rely (to some degree) on integrated regulations. (I say to some degree because I think a lot of us would like more European integration, not less. Entrepreneurs including SoundCloud’s Eric Wahlforss have recently advocated more standardized business regulations – that would have actually made it easier for CDM to incorporate and operate here.)
This has also helped Europe drive forward recycling takeback programs and remove toxins and lead from electronics components – those are complex regulatory issues, but it means that those of us making music gear can have hopefully a slightly lighter impact on the planet.
European arts funding is significant to many of our projects. Funding from the European Commission is part of a number of projects I and CDM have been involved in, to say nothing of the many artists we cover. In fact, it’s safe to say that there’s a heck of a lot more European-level funding than there is federal public funding for the artist in the USA. This will necessarily be reshaped by a Brexit – though it could also see more EC funds directed to continental Europe. I’m not sure there’s a lot that can be said about the impact of the Brexit itself, but one side effect is that people are suddenly aware of something some of us have known for a while – the European Union as an institution does support the arts, even apart from the rich level of support enjoyed in countries like Germany.
The Brexit could set back shared regulations on copyright and the Internet. We didn’t get to this in the panel, really, but it’s a whole other topic. European cooperation on issues like copyright and the Internet also holds promise. Brexit or no Brexit, a UK that’s going its own way makes progress here difficult. (Progress here was, to be fair, difficult with the EU, too, but watch this space.)
But greetings from Germany anyway. Part of the reason I’m in Berlin is that, EU or no, policies here toward immigrants (including me with a non-European passport) have been in my experience moving in a positive direction. There’s a community that’s increasingly diverse, and German locals in the music scene have embraced international influence and cooperation – including those native to Berlin (or East and West Berlin). Those issues are ultimately global ones. But I’m grateful both to the community here and the government (for all the flaws of each) for providing such a terrific environment to work on those global issues. Of course, I know some people are saying that very backdrop – or cities like London – are a bubble, out of touch with growing anti-international sentiment around the world.
And I think far wider reaching than the Brexit itself, we need to find a way to talk about how issues like international cooperation, international law, and immigration impact people’s lives. In music, we already have a “nation” of globally-minded people – I see it everywhere I go. Now the question is whether we can act as effective ambassadors.
Not all independent music gear makers last. And so we’ve learned this week that Rane, the Seattle area-based company founded way back in 1981, will see new ownership with a buyout by giant inMusic (of Numark, M-Audio, Akai, and related).
That means, if nothing else, a transformed landscape for DJ mixers. At one end, you’ve got the big conglomerates – Japan’s Pioneer DJ, America’s inMusic. At the other, boutique makers are staking out increasingly specialized, low-quantity products. This sheds still more light on the significance of the new mixer from Richie Hawtin and Xone creator Andy Rigby-Jones.
With Rane out, indeed, the other big name standing is Allen & Heath, the company distributing the Hawtin/Rigby-Jones PLAYdifferently line as well as continuing the Xone – the one mixer you might be most likely to see rivaling Pioneer’s DJM.
But with Rane caught in the middle, one has to wonder if we won’t see a rich new market for boutique DJ mixers in the same way synth lovers have turned to Eurorack.
On a cynical note, there may be a fresh source of engineers from Rane left redundant.
It seems the folks running Rane wanted out. Co-founders Linda Arink and Dennis Bohn have announced they’re retiring after the acquisition.
But DJ news blog DJWORX suggests that many Rane personnel are unlikely to survive the acquisition, writing:
The majority of the 60+ workforce will be “permanently displaced” at the end of July.
Some engineers will remain (from the HAL/install side) in Seattle.
The DJ side of Rane will be absorbed into the Numark/Denon team at inMusic HQ.
Manufacturing will be moved to inMusic’s contractors in the far east.
I can’t verify that information; DJWORX only quotes unnamed sources. But it’s a fair bet that some degree of change at Rane is inevitable. There’s simply too much overlap with existing staff and IP at Numark and Denon not to suggest some changes, apart from the already-announced changes in leadership.
That’s sad news, of course. As far as what this means for DJs, I would think — not a lot. Serato is tightly integrated with Rane offerings, it’s true, but they’ve since expanded hardware compatibility. Native Instruments, of course, even makes their own mixers – and this news demonstrates in part why that’s not a bad idea. On the hardware side, I’ve seen the Rane rotaries make some headway in the booth, but the MP series are in a funny niche – neither as specialized and beloved as boutique rotary options, nor as mass-market as the more popular Allen & Heath and Pioneer mixers.
It’s safe to say it isn’t the easiest time to be a mid-sized hardware maker, though, with titans from Japan and the USA making big quantities at tiny margins with tight control of the supply chain. I think as a result a lot of independent makers may increasingly look up-market. And then, no matter what happens, at some point founders do want to retire.
Here’s hoping inMusic and Rane deliver on the glowing promises in the press releases.
Video killed the radio star. Streaming killed downloads. Home taping is killing music. Is the cloud about to kill the mastering engineer?
Landr, the instant online mastering service, already looked a bit that way. The drag-and-drop service lets you download a track that is algorithmically mastered – no humans directly involved. The service says those algorithms were carefully tweaked not only by DSP engineers, but actual mastering engineers. It isn’t like the “mastering” preset on a compression plug-in your DAW; according to the developers, the system is adaptive and learns from analysis by genre of music uploaded. And it covers a lot of processes – multi-band compression, EQ, stereo enhancement, limiting, and aural excitation, with some manual adjustment provided to the user.
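Landr doesn’t publish its algorithms, of course, and the real system is adaptive and far more sophisticated. But to make the general idea concrete, here’s a toy Python sketch of the kind of static dynamics chain such a service automates – compression, make-up gain, and limiting. Every function name and number here is my own illustration, not Landr’s actual processing:

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    """Static waveshaping 'compressor': scale down whatever part of a
    sample exceeds the threshold (no attack/release envelope)."""
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

def limit(x, ceiling=0.95):
    """A brickwall limiter reduced to its crudest form: clamp the peaks."""
    return np.clip(x, -ceiling, ceiling)

def toy_master(x, makeup_gain=1.8):
    """Toy 'mastering' chain: compress, apply make-up gain, then limit.
    The make-up gain is what produces the 'louder' result."""
    return limit(compress(x) * makeup_gain)

# Try it on a full-scale 440 Hz sine
sr = 44100
t = np.linspace(0, 0.1, int(sr * 0.1), endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)
out = toy_master(signal)
print(round(float(np.max(np.abs(out))), 2))  # → 0.95, the limiter ceiling
```

The point of the sketch is only that the steps are mechanizable; what a service like Landr adds on top is per-genre, per-track adaptation of all those parameters.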
Now, there are various reasons why I wouldn’t trade a human mastering engineer for this – even if Landr sometimes achieves good results. I really rely on a human mastering engineer as a final pair of ears. That person may be the one who finds mistakes, or who judges professionally just how loud a track ought to sound in the first place. (The very existence of the manual controls here more or less eliminates its utility to me.)
But maybe Landr is finding its own place – one in which a mastering engineer actually can’t compare. And that’s its unique ability to happen instantly right when you upload a file. Like Instagram filters and the auto-contrast on your digicam, spell-check and the location finder in your Google maps app, Landr’s instantaneous, automatic operation is the whole point.
Let’s be honest: you upload something quickly to SoundCloud, you want it to sound loud (and good, but especially loud) right away.
Landr now links directly to SoundCloud to make that happen. Connect your SoundCloud account, and either log into Landr or create a new account (requires just an email and name).
Landr gave me, for starters, four free WAV downloads. I’m going to do some testing of that and get back to you. As for SoundCloud, you get unlimited free uploads “optimized” for that service. Since Landr is normally paid, that’s already a reasonable deal. The finished, mastered tracks are uploaded directly to your account.
The move may say as much about SoundCloud as it does about Landr or mastering. It’s clear the world’s leading sound upload service wants to continue to offer a complete solution for sharing noises. And while users panic about rumored changes to licensing or other hype (more on that in a separate story), there is some evidence that SoundCloud still has ideas for how to lure you to upload to their site specifically.
So, whither the mastering engineer? Not so fast. Apart from the factors above, the mastering engineer’s services have already expanded from just sending you a stereo master to being associated with digital distribution and vinyl cutting. Landr’s biggest competition may be not mastering engineers, but “turning up the knob on your compression plug-in” – and there, I think Landr has the edge.
But beyond that, pay attention to this one. It’s the latest evidence that the sharing of music online changes more than just how you listen. It does also change how you produce.
For all the changes in visual appearance, all the extra features and connections, what hasn’t changed much in headphones is how headphones work. That makes Nura, a product launching this week on Kickstarter, all the more interesting. Not only does it introduce a unique design for how the headphones physically deliver sound to your ears, but it’s also a pair of headphones that listens to your ears — even before you start listening to music.
One of the most fundamental things to know about human hearing is that all ears are different. You can give yourself some sense of this by playing with the flappy bits of your ears right now. (Don’t worry – I’m sure people around you won’t find it at all odd.) Move around your ears and you’ll notice sound changes – both your sense of the color and spatial location of what you hear will seem to change. That’s because your physical ear, from exterior to deep in the inner ear, produces a series of attenuations in frequency that impact what you hear.
In the world of analog sound, that meant that sound listening devices had to be made as generically as possible. From the sound produced itself to the physical form of devices like headphones, then, “personal” listening is really just a rough, lowest-common denominator approximation.
But we’re no longer in the world of exclusively analog sound. Thanks to computational technology, it’s possible to make “smarter” listening devices – headphones that automatically calibrate to your particular ears.
Headphones that listen
The Nura headphones do just that. Using an app on the smartphone to do the analysis, they automatically calibrate frequency range to your particular hearing. The headphones measure your hearing – on their own, in conjunction with the app, with no intervention from you – in about half a minute.
This is possible because your inner ear, in addition to “listening” to sound, also actually emits very low-intensity sounds (explained in this medical article), both on its own and (essential here) in response to particular sounds as stimuli. What the Nura headphones are able to do is measure those emissions as a way of detecting the way your particular ear hears. That technique has been used in medical applications before, but this is the first time it has been used to produce better headphones as a consumer product.
So, you plug these in, hear some sweeping tones for 30 seconds, and then your headphones “know” how to make your music sound better – really.
Once the half-minute sensing process is complete, your personal profile is then stored with the headphones for the most accurate sound reproduction in your listening. It’s even specific, as it must be, to each ear. If you share your pair of Nura headphones with someone else and don’t re-calibrate, in other words, you’ll experience something akin to trading prescription eyeglasses with someone else – you’ll recognize that they don’t hear/see the way you do.
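Nura hasn’t published its actual calibration math, but the last step of any scheme like this is easy to illustrate: turn a measured per-ear sensitivity curve into compensation gains. Here’s a hypothetical Python sketch – the band frequencies and dB values are invented stand-ins for a real measurement:

```python
import numpy as np

# Hypothetical per-ear sensitivity measurements in dB, relative to a
# reference -- made-up numbers standing in for a calibration result.
bands_hz = [125, 500, 2000, 8000]
left_ear_db = np.array([0.0, -2.0, -6.0, -3.0])  # this ear hears 2 kHz 6 dB quieter

def compensation_gains(measured_db):
    """Invert the measured curve: boost each band by however much this
    particular ear attenuates it (dB difference -> linear gain)."""
    return 10 ** (-measured_db / 20)

gains = compensation_gains(left_ear_db)
for hz, g in zip(bands_hz, gains):
    print(f"{hz} Hz: x{g:.2f}")  # the 2 kHz band gets roughly doubled
```

The interesting engineering is everything upstream of this (measuring otoacoustic emissions reliably through the same drivers that play your music); the EQ itself is the simple part.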
There are some unique applications for this. First off, by default, Nura headphones should sound better than other headphones do. (That explains at least in part why pros do monitor on both cans and studio monitors, or why no one entirely agrees on their favorite headphones.)
I was curious how professional engineers responded, too. That will require a more extensive test and review, but so far the makers of Nura say musicians and engineers have responded positively, and that they’ll continue to collaborate with them as they refine the design – which can include both the software/analysis side as well as the cans themselves.
This also means data collection on hearing directly from your listening device. That could eventually I imagine have implications for hearing health, adjusting to changes in hearing over time, and other applications.
Oh, one weird and interesting possibility: you could actually download a profile for the person who engineered a record, and hear through their ears.
The unique physical design combines in-ear and over-ear designs into a single form factor.
The self-calibration routine isn’t the only innovation of the Nura headphones. Physical design is also new. For the first time, the makers say (and the first time I’ve ever seen), the headphones use a dual driver design.
Basically, imagine that this is a combination of earbuds and over-ear headphones. There’s a driver that sticks into your ear for high and mid frequencies and the over-ear for lows. And that solves some familiar problems. In-ear and over-ear designs normally each have unique benefits, both in terms of the outside sounds they block and the sounds that you hear most clearly. Here, you get both at once.
Since that also means more passive noise cancellation (like covering and plugging your ears at the same time), you should hear less outside noise, which means you can listen at lower volumes, which means less hearing damage from headphone listening.
There are some nice physical features, too, including gel-filled tips that the makers say conform to your ears. And they look fairly nice.
Connections are entirely digital – Lightning (for iOS) and USB (for Android and computers). There’s no analog connector; those digital connectors also provide the power necessary for the headphones to operate. I think a lot of us in the pro market would like an analog option (with some other power solution); I’ve asked about that.
I haven’t gotten to test these yet; that’ll happen here in Berlin next week when the team arrive with prototypes at Music Tech Fest – itself a compelling place to find out about new gear. I’m looking forward to that, though. Let us know if you’ve got questions for the makers or something you want me to evaluate in the process.
I do think this is the future. Nura covers frequency attenuation; it’s still for stereo signal. But you can bet that other sensing capabilities in headphones will also become a major selling feature, from health (sensors that work with the ear, like temperature or pulse) to spatialization (self-calibration becomes even more essential if you want to deliver realistic 360-degree sound to the ears).
Nura is the first to bring that kind of functionality to market in a music device. And that’s big news. So stay tuned for more.
The Kickstarter reached its $100,000 goal on the first day and continues to plow forward as users buy up early-bird specials on the headphones.
Yesterday was Piano Day – a day recently christened by composer/pianist Nils Frahm in order to celebrate that ubiquitous keyboard instrument. (It’s held on the eighty-eighth day of the year, one day for each key.) There are concerts, marathons, projects, releases – and unlike Record Store Day, this event won’t clog the ability to produce piano music.
With that day as inspiration, I thought it was a good moment to look at some of the technology of and around the piano, to understand what has made this instrument special. That includes both strictly acoustic innovations as well as design features and breakthroughs that either inspired the electronic world, or helped bridge acoustic and digital.
The piano (and its organ counterparts) has had a tremendous impact on how we model musical information. The pianoforte, clavichord, and organ all helped produce a conceptual model where pitch could be abstracted from expression, and that’s been influential in digital and analog synthesis from the start.
But to begin with the piano’s influence, there’s only one place to start:
One of the oldest remaining instruments by the pianoforte's presumed inventor, Cristofori. Visit it in NYC – it’s at the Met (accession # 89.4.1219).
Hammers. What makes a piano a piano is really the hammer mechanism. “Piano” is of course short for “pianoforte”; it means the name of the instrument itself advertises its big selling point, the ability to play loud and soft.
The reason that’s a thing is that chamber keyboard instruments at the turn of the 18th century, when the piano first emerged, weren’t able to pull off much in the way of dynamic range. Harpsichords are loud enough, but their plucking mechanism is more or less binary – either you plucked, or you didn’t. Clavichords have levers that strike the strings with small pieces of metal, which allows some dynamic range, but they’re very soft. There’s a reason you see these in paintings in small home chambers. Bartolomeo Cristofori is credited with solving the problem on his pianoforte by first devising a hammer.
The old way: clavichords can produce some dynamic, but not much. By Enfo – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=29835325
The challenge: hit the string then, get out of the way again (so you don’t damp the sound – you can try this with your finger on a piano), and then get ready to do it all over again. That’s easy to do if you’re using a lever as on a clavichord, but much harder with a hammer. Cristofori got it right around the beginning of the 18th Century, and then others who researched his technique spread the concept. The 19th century then brought the invention of what’s called a double escapement action, which lets you hit multiple notes quickly by adding an additional lever. (You can see that working if you peer inside a piano and hit a note repeatedly.) The materials used on the hammer have been refined, too. But on a fundamental level, every piano has the basic mechanism that the Cristofori instruments did, making this a revolution in instrument building.
Wendy Powers has written a lovely overview of the Cristofori pianos for The Metropolitan Museum of Art in New York, which is also one of the best places to see his invention in person.
Pedals: this changes everything. Photo by Michael Pardo.
Pedals. The piano might have been a forgotten invention had it not been for a German organ builder named Gottfried Silbermann. Just as today, music tech ideas in the 18th Century spread through a combination of research, writing, and building, transmitted to others (and ideally translated to their native language). Silbermann applied his experience in building other instruments to producing variations on Cristofori’s fortepiano. And he added one essential addition: the sustain pedal. That has utterly changed the piano’s playing technique and tone. The hammer makes the piano possible; the pedal makes it more desirable. We take it for granted to such an extent that almost any electronic keyboard instrument you find of any size will have a sustain pedal jack on the back. Leave it to an organ builder to figure out that you can add more to a keyboard instrument by having the musician use his or her feet.
Wire and iron. If the Enlightenment gave us the idea of the piano, the Industrial Revolution in the 19th Century gave us the raucous, loud instrument we know best today. From a mechanical engineering perspective, the 19th Century piano is not completely unrelated to the Brooklyn Bridge. A cast iron frame supports wire strings under tremendous tension. Attach a tree (okay, something like a spruce soundboard), and you get an instrument like the enormous, very much not-portable Steinway D.
The Steinway frame can support up to 23 tons. And yes, if not cared for, those strings do sometimes break. The instrument quietly sitting in a living room is actually full of unseen forces.
Stringing. All the above give you the basics, but to me it’s the much subtler questions of stringing and the soundboard that actually make you fall in love with the piano. The use of strings that resonate when you play notes (another reason to use that sustain pedal) means that each note contains not just the sound of a single string vibrating, but a rich resonant timbre around it.
Even if you know nothing about piano construction, you’ve probably admired the interlaced rows of strings inside the piano – and maybe even wondered why the pattern changes across registers. Cross-stringing (overlapping the strings) and the construction and disposition of the soundboard help define that unique character. By the end of the 19th Century, piano builders were perfecting techniques of Aliquot stringing, originated by Julius Blüthner in 1873.
These particular patterns are a big part of what gives particular makes of piano their character. I don’t want to advertise Steinway & Sons specifically here. But, for example, when Steinway players speak lovingly of the “Steinway sustain,” part of what they’re talking about is the result of the tunable aliquots Theodore Steinway added to his instrument. Those strings produce a characteristic set of resonances in the higher octaves.
Player piano. Before the recording industry involved records, the piano was the original music industry. This was the technology that introduced the idea that you could recreate someone else’s performance in your own home. Barrel pianos, favored by street musicians, preceded the technology. But the player piano as we know it made its public debut at the 1876 Centennial International Exposition in Philadelphia. That was a heck of an expo, featuring the first monorail, and Alexander Graham Bell exhibiting his first telephone, opposite Thomas Edison showing off the telegraph. And you could try Heinz ketchup for the first time.
The player piano as product arrives at the beginning of the 20th Century. Hilariously, early adopters got stuck with rolls that became instantly obsolete when later models upgraded to 88 keys.
The player piano or pianola is itself an excursion in piano history, one that’s fascinating but obsolete today. But it gives us three major ideas:
1. Reproducible performance. Alongside recordings, the player piano is the breakthrough technology that helps people to imagine they don’t need a human around to appreciate a performance. And it in turn helps popularize the idea of robotic musical performance many artists today are exploring.
2. Copyright law. Believe it or not, it’s actually the pianola and not the phonograph that established one of the most important battles around copyright law that’s still relevant today – even when we’re talking SoundCloud and Spotify. You’ll hear copyright lawyers today talk about “mechanical” royalties, and wonder why the heck they’d use that term. Well, the pianola is why. The US copyright law that ruled most of the 20th Century overturned an associated Supreme Court decision. At hand, the question was whether a piano roll (as a mechanical object) qualified as a “copy” (in the way that sheet music would). The Supreme Court said Congress’ definition was too narrow and would have denied composers royalties. Congress rewrote the law in 1909, and the compulsory mechanical royalty was born.
3. Visualization of music. The “piano roll” (still frustratingly called by that name) is by far the dominant visual model for representing music on computer devices. And it’s not a bad choice. The mechanical logic employed by the piano roll makes for an intuitive spatial, visual picture of what’s happening in the music. Furthermore, because the piano roll is aligned to the piano keyboard, and many musicians learn pitch from playing the piano keyboard, the image corresponds to muscle and visual memory.
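The underlying data model is simple enough to sketch. Here’s a minimal, illustrative Python version of a piano roll and an ASCII rendering of it – the note tuples and the rendering function are my own toy example, not any particular DAW’s format:

```python
# Each note is (MIDI pitch, start beat, length in beats) -- the same
# information a pianola roll encodes as hole position and hole length.
notes = [(60, 0.0, 1.0), (64, 1.0, 1.0), (67, 2.0, 2.0)]  # a C-E-G arpeggio

def render_roll(notes, total_beats=4, cells_per_beat=2):
    """Draw an ASCII piano roll: one row per pitch (high notes on top,
    matching the keyboard), time running left to right, '#' where a
    note sounds."""
    pitches = sorted({p for p, _, _ in notes}, reverse=True)
    width = total_beats * cells_per_beat
    rows = []
    for p in pitches:
        row = ["."] * width
        for pitch, start, length in notes:
            if pitch == p:
                a = int(start * cells_per_beat)
                b = int((start + length) * cells_per_beat)
                for i in range(a, min(b, width)):
                    row[i] = "#"
        rows.append(f"{p:3d} " + "".join(row))
    return "\n".join(rows)

print(render_roll(notes))
```

Even this tiny version shows why the representation stuck: pitch maps to vertical position and duration to horizontal length, exactly as the holes did on paper.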
One other note on the copyright law question – there’s a great quote from a circuit court decision (by one Judge Colt) quoted by the Supreme Court a hundred years ago. You can find it in the full text on FindLaw:
‘I cannot convince myself that these perforated strips of paper are copies of sheet music within the meaning of the copyright law. They are not made to be addressed to the eye as sheet music, but they form part of a machine. They are not designed to be used for such purposes as sheet music, nor do they in any sense occupy the same field as sheet music. They are a mechanical invention made for the sole purpose of performing tunes mechanically upon a musical instrument.’
The original Yamaha player piano – with actual discs. Photo courtesy Yamaha Corporation.
Digital piano interface. The piano and pianola were conceptual predecessors to everything that would happen in digital music. The instruments themselves, though, were a bit late to the party: the mechanical-acoustical world of the piano didn’t immediately interface with the analog or digital control schemes of the synthesizer.
That changed in 1982 when Yamaha released their first Player Piano, now called the Disklavier. (The name refers to floppy discs; it was the 80s.) The instrument interfaces both input and output with the keys. For input, it uses electronic sensors, opening up the ability to record performances or transmit them to another instrument. For output, it uses electromechanical solenoids to move the keys, hammers, and pedals in place of your fingers.
Also worth mentioning is the Bob Moog Piano Bar. Whereas Yamaha’s solution required you buy an entirely new piano (good for Yamaha, possibly less ideal for you), the Piano Bar was non-invasive. Add the titular bar atop the keys, and sensors register as your fingers play. Moog joined in a historic collaboration with fellow synth pioneer Don Buchla to create the instrument. The product itself wasn’t a huge hit – better electronic pianos trumped heavier, pricier acoustic instruments in the market. But while an oddity, the Piano Bar deserves a place in music history for bringing together these minds.
Let’s science the s*** out of this, then. From a 2003 paper by Giordano/Jiang.
Physical modeling. All of these beautiful acoustic characteristics are something that electronic instruments don’t easily reproduce. Now, there are plenty of electronic pianos or “digital pianos” (Yamaha again being arguably the first commercial vendor). But these generally rely on simply sampling recordings of the instrument – and you can do that with any sound, so it doesn’t really count as a piano innovation.
Alternatively, it’s possible to apply concepts of physical modeling to reproducing the very acoustic principles that give the piano its character. (You can read a 2003 paper on the subject to get nerdy. My favorite software piano instrument is Pianoteq, which is built around this principle.)
Physical modeling is a gift of the piano to the electronic world, because once you approximate the physics of an acoustic piano, you can then warp those rules to produce pianos that could never exist (or exist practically) in the real world.
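The classic entry point to this idea is the Karplus-Strong algorithm, a famously simple physical model of a plucked string. (To be clear, this is not how Pianoteq models a piano internally; it’s just the textbook illustration of the physical modeling approach.) A minimal Python sketch:

```python
import numpy as np

def karplus_strong(freq=220.0, sr=44100, dur=0.5, decay=0.996):
    """Karplus-Strong plucked string: fill a delay line with noise (the
    'pluck'), then repeatedly read it back while averaging adjacent
    samples -- a low-pass filter that damps high frequencies first,
    just as a real vibrating string does."""
    n = int(sr / freq)                      # delay length sets the pitch
    buf = np.random.uniform(-1.0, 1.0, n)   # the excitation: a noise burst
    out = np.empty(int(sr * dur))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # feedback with averaging: the physics of damping, in one line
        buf[i % n] = decay * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

tone = karplus_strong()
print(len(tone))  # → 22050 samples (half a second at 44.1 kHz)
```

Because the model is built from physical parameters (delay length, damping), you can push those parameters to values no real string could take – which is exactly the “impossible pianos” point above.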
The multi-touch keyboard. If you want a modern counterpart to the Cristofori fortepiano, I believe the Eaton-Moog Multi-Touch Sensitive Keyboard is a similarly important breakthrough. Now, technically, it’s not a piano – it’s an electronic keyboard controller. But just as Cristofori (and the clavichord) introduced the notion of velocity control, Bob Moog and John Eaton helped pioneer the idea of an electronic keyboard that would use an additional axis for more expression. This idea didn’t come out of the blue – the Martenot, for instance, layers expression atop a keyboard, as does an organ. But what makes the Moog-Eaton project special at this moment is that the idea seems destined to go mainstream. With instruments like the Haken Continuum and ROLI Seaboard taking up the mantle, and an effort to rewrite the MIDI specification to make it easier for hardware and software to communicate in this way, we seem destined to finally see this sort of expression reach a prime time audience.
Nils with Una Corda piano. Photo: Claudia Goedke.
And more innovation to come. You may have noticed that acoustic piano innovations reach a crescendo at the end of the 19th century, and then – they stop. The Steinway Model D, the one you’re likely to hear in a concert hall and one that has served as a template for most other brands, dates to 1884. The Bösendorfer Imperial, a “radical” outlier with 97 keys, is the young upstart — from 1900. That’s why someone like David Klavins is interesting. Klavins is a German piano builder who has rejected piano orthodoxy, creating instruments like a two-story piano accessible by staircase, or the beautiful, delicate Una Corda, with just one string per key.
And with physical models and new expressive interfaces at hand, I expect piano lovers will also find ways of translating inspiration from the piano to electronic and digital creations.
There’s no way to overstate this: digital music is what it is because of the piano.
There’s a reason “mobile music” has become synonymous with iOS. Apple has been unmatched in terms of how appealing they make their mobile platform to developers.
Today’s announcements are likely to be heavily covered by tech and Apple-focused sites, but we can cover the music angle pretty easily. It’s now possible to buy a new phone or tablet very cheaply with high-end performance capable of running demanding music apps. And that means the platform is likely to continue to attract both users and developers, in a continuous cycle.
An entry-level iPhone that’s just as powerful as the 6S flagship – that’s big news for developers. Photo courtesy Apple.
On the phone side, a 16GB iPhone SE starts at US$399, without a contract. (16GB is too small in practice, so figure on US$499 for the 64GB model.) That in itself may not sound impressive, except for this: the SE has the same performance as a 6S. This is stuff that wasn’t even available in the top-of-range iPad until fairly recently, and it now exceeds what was not so long ago laptop performance. That means that putting serious instruments and recording software on your phone is now easy to do even for the “budget” iPhone.
Think about that for a second. The “entry level” Apple iPhone now has exactly the same horsepower as the 6S flagship. And since that’s a phone Apple sells in big quantities, that means the installed user base with that horsepower will increase in a hurry. No Android maker is able to do that – and that’s even before the problems Android has with OS fragmentation and never-arriving OS updates.
Most likely, the people who were waiting to get an iPad Pro after seeing the first one were waiting for this.
The tablet side I expect is equally disruptive. The new 9.7″-display iPad Pro isn’t exactly cheap at US$599, but it is both more portable and more affordable than last year’s iPad Pro in a way that I expect will start to attract users.
I think that’s a big deal, because while the bigger iPad Pro was strictly a niche device (literally the only people I knew who bought one were iOS developers), this looks like a tablet that could be both mainstream and a primary music device.
I expect some would-be iPad Pro customers will wait for the bigger one to come down in price (and/or get the new features on the smaller model), but that’s also an inevitability.
It’s also significant that this hits the middle price range and works with a really nice pencil input device. That’s huge for anyone who works in notation: the combination of iOS with low-latency, accurate stylus input is a huge boon to songwriters, composers, and the like. A lot of us were dreaming of something like this since the very first time we laid eyes on a computer.
In other words: look out, laptop. While the laptop is likely to remain the workhorse machine for DAWs and finishing tracks, the iPad in general is growing in appeal as a powerful synthesizer, recording device, song-starter, and as a writing/theory/practice tool that can actually sit comfortably on a music stand or piano desk.
Oh, and, according to Apple’s promo video (pictured), the new iPad Pro will also let you jam by candlelight while adjusting the screen so you don’t have a blue glare. In fact, maybe solving the “blue glare” problem should itself make these things less unfriendly onstage (just in case you don’t want your face to look like it came out of a dystopian scifi movie).
Light some candles, make some music. Sounds like a nice evening in to us. Photo courtesy Apple.
Now, if you don’t need the Pencil (for scoring, in particular), the whole iPad range has in turn commoditized further. This means the baseline for performance is now higher. I expect this will mean a new generation of apps that push horsepower more than they have in the past, which can be relevant to more CPU-expensive effects and synths. And it also means you can run more apps together on one device, which is the other reason this trend may lead more people to try out production on the iPad.
Finally, plug in USB stuff and keep your iPad Pro powered all at once.
The best news in today’s announcement, though, you could be forgiven for missing. The iPad Pro (both models) gets a new adapter with a typically unwieldy Apple name. Let me translate that into English for you. It’s a
“Lightning to USB audio and MIDI accessory adapter that doesn’t drain your battery while you use it because you can finally plug in a #$*(&ing power supply at the same time”
The “Camera” adapters for iOS are some of the most useful devices on the platform, because all sorts of audio and MIDI gadgets work with them. But since they occupy the Lightning port, until now you had to watch your battery while you used them. This fixes that – and it’s reason enough to buy an iPad Pro already. We’ll have to do some research to see how much power it provides and which accessories work with it, but it’s good news.
The downside today, of course, is that Apple’s aggressive upgrade cycle can always spell trouble for people hanging on to older devices. Apple is quick to talk about how new their customers’ devices are and how they’ve achieved 80% adoption for their latest OS. But there’s a reason their environmental stance is about disposal – that pace does leave a lot of stuff behind. Fortunately, music developers have been a uniquely resourceful bunch and have done more than much of the rest of the App Store to support older devices. (We almost need a guide just to those apps – sounds like homework for CDM.)
One final note: I’m sad that music apps (apart from that fleeting GarageBand shot) are largely left out of the use cases for the iPad Pro. I suspect that there may even be some marketing numbers behind that; Apple leaves little to chance. Are there really more people doing 3D rendering than music? (Music was left out completely from the product pages.)
On the other hand, musicians are such a rabid bunch when it comes to Apple OS loyalty, it may simply be that Apple doesn’t need to do much to make the case to musicians – something people like me are doing right now.
But the bottom line is, free phones on contract and entry-level iPads costing less than $300 now give you high-end performance, run multiple music apps at once, and offer powerful sound generation features previously found only on laptops. And for a little more, you can replace your manuscript notebook. I’d call that news – and I’m still waiting to see enough people purchase any other tablet or phone to make a real competitive play against the Apple mobile juggernaut. (Though I do plan to try to get a Surface for a while to test, and I know some iOS developers who are curious, too.)