
Here’s how to download your own music from SoundCloud, just in case

SoundCloud’s financial turmoil has prompted users to ask: what would happen if the service were switched off? Would you lose some of your own music?

Frankly, we all should have been thinking about that sooner.

The reality is, with any cloud service, you’re trusting someone else with your data, and your ability to get at that data is dependent on a single login. You might well be the failure point, if you lock yourself out of your own account or if someone else compromises it.

There’s almost never a scenario, then, where it makes sense to have something you care about in just one place, no matter how secure that place is. Redundancy neatly saves you from having to plan for every contingency.

Okay, so … yeah, if you’re now nervous about music you care about living only on SoundCloud, and you aren’t sure it’s in fact backed up someplace else, you really should go grab it.

Here’s one open source tool (hosted on GitHub, too) that downloads music.
http://downloader.soundcloud.ruud.ninja/

A more generalized tool, for downloading from any site that offers direct download links:
http://jdownloader.org/

(DownThemAll, the Firefox add-on, also springs to mind.)
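If you’re comfortable with a little scripting, the idea behind these tools is simple enough to sketch yourself. Here’s a minimal, hypothetical Python example that fetches a page and saves every link pointing directly at an audio file – the URL and file extensions are placeholders, and of course only grab material you actually own.

```python
# Hypothetical sketch of what a "download every linked file" tool does.
# PAGE_URL and the extensions are placeholders - only grab what you own.
import os
import urllib.parse

import requests
from bs4 import BeautifulSoup

PAGE_URL = "https://example.com/my-tracks"
AUDIO_EXTENSIONS = (".mp3", ".wav", ".flac", ".aiff")


def download_linked_audio(page_url, dest_dir="downloads"):
    os.makedirs(dest_dir, exist_ok=True)
    soup = BeautifulSoup(requests.get(page_url, timeout=30).text, "html.parser")

    for anchor in soup.find_all("a", href=True):
        href = urllib.parse.urljoin(page_url, anchor["href"])
        if not href.lower().endswith(AUDIO_EXTENSIONS):
            continue
        filename = os.path.join(dest_dir, os.path.basename(urllib.parse.urlparse(href).path))
        # Stream to disk so large files never sit fully in memory.
        with requests.get(href, stream=True, timeout=30) as response:
            response.raise_for_status()
            with open(filename, "wb") as out:
                for chunk in response.iter_content(chunk_size=1 << 16):
                    out.write(chunk)
        print("saved", filename)


if __name__ == "__main__":
    download_linked_audio(PAGE_URL)
```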

This tool migrates your music to a new service – unattended – though I’m still testing that. (I do think backup, rather than migration, may be the better step.)
https://www.orfium.com/

Could someone create a public mirror of the service? Yes – though it wouldn’t be cheap. Jason Scott (of Internet Archive fame) tweets that it could cost up to $2 million, based on the amount of data:

(Anybody want to call Martin Shkreli? No?)

My hope is that SoundCloud does survive independently. Any acquirer would likewise be crazy not to maintain users and content; that’s the whole unique value proposition of the service, and there’s still nothing else quite like it. (The fact that there’s nothing quite like it, though, may give you pause on a number of levels.)

My guess is that the number of CDM readers and creators is far from enough to overload a service built to stream to millions of users, so I feel reasonably safe endorsing this use. That said, of course, SoundClouders also read CDM, so they might choose to limit or slow API access. Let’s see.

My advice, though: do grab the stuff you hold dear. Put it on an easily accessible drive. And make sure the media folders on that drive also have an automated backup – I really like cloud backup services like CrashPlan and Backblaze (or, if you have a server, your own scripts). The best backup plan is one that you set and forget, one you only have to think about when you need it, and one that will be there in that instance.
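For the “your own scripts” route, here’s a minimal “set and forget” sketch in Python – a one-way mirror of a media folder to a backup drive that you could run from a scheduled task. The two paths are placeholders for your own drive layout; a real setup would also want off-site duplication.

```python
# A minimal "set and forget" sketch: mirror new or changed files from a media
# folder to a backup drive. Both paths are placeholders for your own layout;
# schedule it with cron / Task Scheduler and forget about it.
import os
import shutil

SOURCE = "/Volumes/Media/Music"        # hypothetical source folder
DESTINATION = "/Volumes/Backup/Music"  # hypothetical backup drive


def mirror(source, destination):
    for root, _dirs, files in os.walk(source):
        target_dir = os.path.join(destination, os.path.relpath(root, source))
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(target_dir, name)
            # Copy only files that are missing or newer than the backup copy.
            if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                shutil.copy2(src, dst)


if __name__ == "__main__":
    mirror(SOURCE, DESTINATION)
```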

Let us know if you find a better workflow here.

Thanks to Tom Whitwell of Music Thing for raising this and for the above open source tip.

I expect … this may generate some comments. Shoot.


With Japan’s latest Vocaloid characters, another song from the future

It’s a cyber-technological future you can live now: a plug-in that uses sophisticated samples and rules to sing like a Japanese pop star.

Yamaha has announced this week the newest voices for Vocaloid, their virtual singing software. This time, the characters are drawn from a (PS Vita) Sony video game property:

The main characters of the PS Vita games Utagumi 575 and Miracle Girls Festival, as well as the anime Go! Go! 575, Azuki Masaoka (voice actress Yuka Ohtsubo), have finally been made into VOCALOID Voice Banks!

“Finally.”

Here’s what those new characters sound like:

And the announcement:

Announcing the debut of two new female Japanese VOCALOID4 Voice Banks

The packs themselves run about ¥9,000, or roughly US$80.

Perhaps this is an excuse to step back and consider what this is about, again. (Well, I’m taking it as one.)

To the extent that pop music is always about making a human more than real, Japan embraces a hyperreal artificiality in their music culture, so it’s not surprising technology would follow. Even given that, it seems the success of Yamaha’s Vocaloid software caught the developers by surprise, as the tool earned a massive fanbase. And while extreme AutoTune effects have fallen out of favor in the west, it seems Japan hasn’t lost its appetite for this unique sound – nor the cult following of aficionados that has grown outside the country.

Vocaloid isn’t really robotic – it uses extensive, detailed samples of a real human singer – but the software is capable of pulling and stretching those samples in ways that defy the laws of human performance. That is, this is to singing as the drum machine is to drumming.

That said, if you go out and buy a conventional vocal sample library, the identities of the singers are relatively disguised. Not so, a Vocaloid sample bank. The fictional character is detailed down to her height in centimeters, her backstory … even her blood type. (Okay, if you know the blood type of a real pop star, that’s a little creepy – but somehow I can imagine fans of these fictional characters gladly donating blood if called upon to do so.)

Lest this all seem to be fantasy, equal attention is paid to the voice actors and their resumes.

And then there’s the software. Vocaloid is one of the most complex virtual instruments on the market. There’s specific integration with Cubase, obviously owing to Yamaha’s relationship with Steinberg, but also having to do with the level of editing required to get precise control over Vocaloid’s output. And it is uniquely Japanese: while Yamaha has attempted to ship western voices, Japanese users have told me the whole architecture of Vocaloid is tailored to the particular nuances of Japanese inflection and pitch. Vocaloid is musical because the Japanese language is musical in such a particular way.

All of this has given rise to a music subculture built around the software and vocal characters that live atop the platform. That naturally brings us to Hatsune Miku, a fictional singer personality for Vocaloid whose very name is based on the words for “future” and “sound.” She’s one of a number of characters that have grown out of Vocaloid, but has seen the greatest cultural impact both inside and outside Japan.

Of course, ponder that for a second: something that shipped as a sound library product has taken on an imagined life as a pop star. There’s not really any other precedent for that in the history of electronic music … so far. No one has done a spinoff webisode series about the Chorus 1 preset from the KORG M1. (Yet. Please. Make that happen. You know it needs to.)

Hatsune Miku has a fanbase. She’s done packed, projected virtual concerts, via the old Pepper’s Ghost illusion (don’t call it a hologram).

And you get things like this:

Though with Hatsune Miku alone (let alone Vocaloid generally), you can go down a long, long, long rabbit hole of YouTube videos showing extraordinary range of this phenomenon, as character and as instrumentation.

In a western-Japanese collaboration, LaTurbo Avedon, Laurel Halo, Darren Johnston, Mari Matsutoya and Martin Sulzer (and other contributors) built their own operetta/audiovisual performance around Hatsune Miku, premiered as a joint presentation of CTM Festival and Transmediale here in Berlin in 2016. (I had the fortune of sitting next to a cosplaying German math teacher, a grown man who had convincingly made himself a physical manifestation of her illustrated persona – and who sat on the edge of his seat, enraptured by the work.)

I was particularly struck by Laurel Halo’s adept composition for Hatsune Miku – by turns lyrical and angular, informed by singing idiom and riding imagined breath, but subtly exploiting the technology’s potential. Sprechstimme and prosody for robots. Of all the various CTM/Transmediale commissions, this is music I’d want to return to. And that speaks to possibilities yet unrealized in the age of the electronic voice. (Our whole field, indeed, owes its path to the vocoder, to Daisy Bell, to the projected vocal quality of a Theremin or the monophonic song of a Moog.)

“Be Here Now” mixed interviews and documentary footage with spectacle and song; some in the audience failed to appreciate that blend, seen before in works like the Steve Reich/Beryl Korot opera The Cave. And some Hatsune Miku fans on the Internet took offense to their character being used in a way removed from her usual context, even though the license attached to her character provides for reuse. But I think the music holds up – and I personally equally enjoy this pop deconstruction as I do the tunes racking up the YouTube hits. See what you think:

All of this makes me want to revisit the Vocaloid software – perhaps a parallel review with a Japanese colleague. (Let’s see who’s up for it.)

After all, there’s no more human expression than singing – and no more emotional connection to what a machine is than when it sings, too.

More on the software, with an explanation of how it works (and why you’d want it, or not):

https://www.vocaloid.com/en/vocal_synth/


Berghain, by the numbers: data on the relentless Berlin techno club

In the era of fake news and big data for corporations, there’s an obvious antidote: getting actual data for yourself.

So, it’s a given that too many words have been spilt over Berlin’s Berghain. But in trying to portray the club’s hype or mystique, I notice that there’s not often much discussion of its consistency. And to understand how techno – and, more broadly, electronic music and the various fashions around it – is projected into the world, understanding that consistency is key. If a club is repeatedly pushing out long queues every Saturday and Sunday night (yes, Sunday), and if that club is having the influence that Berghain does on bookings elsewhere, on musical aesthetics, and even on how people dress, then part of what you’re actually describing is consistency. These are all measures of repetition.

So, what are the actual numbers? Olle Holmberg, aka Moon Wheel, is a geek and coder as well as a musician. So, curiosity evidently led him to write a JavaScript app to crawl Berghain’s website – from late 2009 to the present.

You can check out that Google Doc. And of course someone could write a better script – or even try to do other analyses on other clubs.
Berghain — all sets 2009-2017 (data from berghain.de events pages) [berghain.de]
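Olle’s crawler was written in JavaScript; for the curious, here’s a hypothetical Python sketch of the same idea – fetch archived event pages, pull out artist names, and tally appearances. The URL list and the CSS selector are stand-ins for illustration, not berghain.de’s actual markup.

```python
# Hypothetical Python sketch of the idea behind the JavaScript crawler:
# fetch event pages, pull out artist names, tally appearances.
# The URLs and the ".artist" selector are assumptions for illustration,
# not berghain.de's actual markup.
from collections import Counter

import requests
from bs4 import BeautifulSoup

EVENT_URLS = [
    "https://www.berghain.de/events/example-event-1/",  # placeholder URLs
    "https://www.berghain.de/events/example-event-2/",
]


def tally_artists(event_urls):
    counts = Counter()
    for url in event_urls:
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        for element in soup.select(".artist"):  # assumed CSS class
            counts[element.get_text(strip=True)] += 1
    return counts


if __name__ == "__main__":
    for artist, plays in tally_artists(EVENT_URLS).most_common(25):
        print(f"{plays:4d}  {artist}")
```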

This isn’t revealing any secrets in the club. Quite the contrary: it’s taking public-facing information, and separating the reality from people’s perception.

Now, I’m not one to just say “hey, let’s post a story on Berghain to see if it works as clickbait.” I actually find the results interesting. One thing that particularly struck me about Berghain regulars was their tendency to swoon “oh my God, the lineup this weekend is amazing” – then go on to describe the residents playing on the program.

More analysis will require more work, but we can at least pull up the artists who play most often (and they do so by such a large margin that even minor bugs in the crawling/scripting won’t make so much difference).

The top 25 (from end of 2009, with some minor glitches possible as the program is crawled as plain text):

1. Boris 99
2. Sammy Dee 88
3. Norman Nodge 86
4. Zip 85
5. Marcel Dettmann 80
6. Fiedel 76
7. Ben Klock 75
8. nd_baumecker 73
9. Marcel Fengler 71
10. Len Faki 70
11. Steffi 68
12. Ryan Elliott 65
13. Tama Sumo 63
14. Nick Höppner 62
15. Margaret Dygas 58
16. Soundstream 49
17. Virginia 49
18. Answer Code Request 45
19. Dinky 42
20. Gerd Janson 41
21. Efdemin 40
22. Function 38
23. Kobosil 37
24. DVS1 35
25. Oliver Deutschmann 35

Major disclaimer: this is incomplete data. The opening years of the club are missing. Artists wanting to share their anniversary dates, more complete data, or stories: you’re welcome to, of course.

Olle tells CDM that at least one or two people who have seen the numbers have already expressed interest in doing analysis on gender and measures of diversity.

I can at least eyeball these 25. In case you’re wondering, five out of those top twenty five are female, so we’re far from any gender parity even in one of the world’s more progressive big venues. The top of the list is also overwhelmingly white, although it’s also fairly German. (That says something about residents versus guests, of course – and about who is settling into Berlin for the long term. It’s not exclusively German. Dinky is from Santiago, Chile. DVS1 was born in Leningrad, USSR, but grew up in the USA. Boris cut his teeth in the scene with none other than Larry Levan in New York’s Paradise Garage.)

They’re also all there for a reason. The strong German representation is also a story about how the music scene in the country has grown up since the 90s, with many of these residents having made their mark in the labels and parties that helped define the scene since the fall of the Wall, whether Sammy Dee and the Perlon label or Ben Klock and Marcel Dettmann and the homegrown Ostgut label. These artists are German, but they tend to come from smaller towns in both the eastern and western parts of the country.

Speaking of consistency and longevity and day jobs, Norman Nodge is even a lawyer.

So if there’s nothing surprising here, what is here is a metric of what is successfully unsurprising. (That also applies to the value many of these names have in booking. See also the Ostgut booking operation, who hilariously warn that they won’t offer table reservations. That’s hilarious because I’m sure someone is regularly writing and asking. I wonder where people imagine the tables are.)

If you scroll through the raw data, you’ll see more of the untold story of Berghain as the larger complex of event spaces and programs. As the website publishes not only the club’s two best-known floors, Panorama Bar and the titular Berghain, but also Laboratory, Halle am Berghain, and Kantine am Berghain (the former canteen of the power station), including various special events, you’ll get all sorts of names. (Mine even pops up a couple of times through those weird loopholes, without my even having involved North Korean hackers.) In recent weeks, that also includes a more leftfield program at the club’s new Säule space.

But there’s a deeper message, and it’s one about consistency and repetition. Part of what lets us in the press get your attention is passing something off as new. But behind the scenes, the other thing that press, bookers, publicists, and clubs are all doing is priming you to see certain ideas and certain people as important. And that’s in fact about repetition – reinforcing name recognition and cementing ideas.

So there’s something to that Sunday ritual. For better or for worse, if you look at the top names here, these are really the foundation of this Berghain effect.

This is, of course, just one club, even if a vital one. I think while numbers don’t tell a whole story, it’s great to have some actual data and do some real research. (And the data can be thought of as a first step, not a last.) So I hope, as with female:pressure‘s analysis of gender on festival lineups, we continue to gather data and use more than just our own limited perception to understand music scenes.

Google Spreadsheet

Oh yeah, and if anyone wants to crowd-source fitness tracker data to see how much you’re dancing, let us know!

Updated: In 2010, the club itself published more accurate statistics.

Of course, this article is completely boring to the resident DJs and anyone working for the club, as they have the numbers.

Berghain also archives their programs – which is uncommon in clubland – filled with art and photos but also extensive curatorial commentary and even, sometimes, poetry and other tidbits.

On 11 December 2010, they shared some of their own (far more accurate) in-house stats – at which point the total number of events (from DJs to concerts) had already reached a whopping 4774.

http://berghain.de/media/flyer/pdf/berghain-flyer-2010-12.pdf

Based on those stats, Boris was again the winner – then having played his 101st set.
Marcel Dettmann: 84.
Ben Klock: 80.
Prosumer: 77.
Cassy: 73.

Those numbers also tell you the missing first years are really significant. (If I read them correctly, it also means Berghain is less about resident frequency than it once was, which would make some sense. But without the actual data set, that’s just a guess.)

Full details from the program (written in the usual, rather charming way – translated here from the German):

As our in-house statistician informs us, there had been a total of 4,774 appearances in the whole building up to and including December 11, counting every DJ gig, live act, and concert. So much for the general tally – but on to tonight. The undisputed front-runner of all the DJs playing at our birthday (and, we assume, overall) is Boris. He plays his 101st set tonight – downstairs, no less. Ta-daa! Hot on his heels are Marcel Dettmann with 84 and Ben Klock with 80 visits to the booth. Already in the passing lane, so to speak, the two of them play a back2back set in the Panorama Bar tonight. Prosumer, also upstairs, comes in at 77 sets, Cassy at 73. From there the numbers drop off steeply. Disko returns for only his 16th set – he really has made himself scarce. Robert Hood rounds off his fifth – with a house set in the Panorama Bar, and number 6 follows right afterwards with a techno set in Berghain. The gigs so far by .tobias, Chez Damier and DVS1 can be counted on two hands, but by this point even the last holdout will have realized that statistics are no way to party. Celebrate the parties as they come. So Art Department are with us for the very first time, and downstairs Shed gives the live premiere of his straight-ahead Equalized alias.


Apple announces that they’re not ready to announce new pro hardware

Apple today summoned a handful of tech reporters to a product lab, essentially to announce that … they were between announcements.

Apple’s unusual PR experiment today was to mix mea culpa and product teaser, in a drawn-out explanation of why their hardware wasn’t shipping. The result of this messaging technique: journalists in the room for the briefing dutifully recorded the agonizing details of how Apple sees its “pro” user base and how it prioritizes desktop functionality:

The Mac Pro is getting a major do-over [Mashable]
Apple pushes the reset button on the Mac Pro [TechCrunch]
The Mac Pro Lives [Daring Fireball, who at least added some more reflection]

Journalists not invited to the same briefing tended to go to an angle more like this:
Apple admits the Mac Pro was a mess [The Verge]

There are two questions here, though, as I see it.

Question one: what’s a pro user, anyway?

It’s easy to dump on Apple here, but one thing I will say is that they’ve historically understood the first question better than any of their competition. Gruber was actually the only writer who seemed to pick up on Apple’s intention there. And, frankly, the results were telling. One big revelation (if an unsurprising one): most Mac users aren’t pro users. Defining pros as users who run apps for serious creation or software development at least once a week, Apple found only 30% of Mac users qualify. For more regular use, that number drops to 15%. And notebook computers (MacBook) dominate both that pro market and the overall Mac user base, at 80% (I think that’s by revenue, not number).

Catering to slivers of that group can’t be easy. When users talk about “pros,” what they really mean is themselves, individually. And that market is full of endless variation.

CDM readers are routinely doing far more specialized things, like virtual reality experiments or live visuals or running 3D game engines onstage or programming robotic drum ensembles. That may sound extreme to even cite as an example, but remember that over the years Apple Computer (under Jobs but also under other CEOs) did sometimes refer to exactly those kinds of weird edge cases in, you know, expensive TV ads. In fact, today, you still see edge cases cited in iOS ads.

Question two: what hardware do you make for that user?

If pro users are by definition an edge case, and desktop a subset of that, and advanced desktop another slice, we’re talking about an ever-smaller sliver. It’s not totally clear what Apple sees as important to that group, actually – and it’s even murkier what they intend to do. Here’s what Apple did clearly say publicly, though it was more about what they aren’t doing than what they are:

What they aren’t doing:
They’re not shipping new iMacs until later this year.
They’re not shipping a new Mac Pro in 2017.
They’re not shipping a new dedicated display in 2017.
They’re not shipping a large-screen dedicated touchscreen or a product like the Surface Studio, and they say the Mac Pro user they’re targeting isn’t interested in that.

What they will be doing in the future:
There will be a new iMac this year, and it will cater to pro specs.
There will be some kind of ground-up redesign of the Mac Pro, and it will be “modular” (which I could interpret from context only as meaning there’s no integrated display).
There will be a display to go with it.

What they didn’t entirely rule out:
Federighi followed up ruling out touch for the Mac Pro user by mentioning a “two-prong desktop strategy with both iMac and Mac Pro.” (I wouldn’t interpret that as a promise of a touch iMac, but it did seem to leave the door open. Then again, he also was responding to the question of the Microsoft Surface Studio, which seems a lot like what a touch iMac would be.)

What they’re shipping right now:
There’s a new Mac Pro configuration. You won’t want it, though, as it only swaps a new CPU and GPU config for the existing model – so you’re still stuck without modern ports (Thunderbolt 3, USB-C). It’s also bloody expensive:

US$2,999 now buys you a 6-core Intel Xeon processor, dual AMD FirePro D500 GPUs and 16GB of memory. That’s £2,999.00 (UK)/ €3,399 (Germany).

US$3,999 gets you an 8-core processor and dual D700 GPUs. £3,899.00 (UK) / € 4,599.00 (Germany).

Each of those has 256GB of internal storage. It does not include a mouse, keyboard, or display. Memory, storage, and graphics are upgradeable options, but they’re expensive — the base model with 32GB of RAM and 1TB of internal storage will run you US$3,999. (Maximum is 64GB of RAM, 1TB of SSD.)

Those are middle-of-the-road CPU and GPU specs, too, given what’s now available in larger desktop form factors.

What did we learn?

Uh… nothing? Well, we learned that Apple isn’t eliminating the iMac or the Mac Pro. We just have no idea what they’ll look like.

Look, I’ll be honest: this is weird. Apple has a decades-long record, under multiple different leadership teams, that demonstrates the importance of letting shipping products do the talking rather than future products, and of focusing on user stories over specs. Today feels a bit like there was a transporter accident and we got a reverse-universe Apple that did the opposite.

The only thing missing was Tim Cook showing up with a beard.

Windows I think has some opportunities here – not least because Apple for some reason decided to make headline news of its own shortcomings rather than its strengths. In theory, the Windows PC ecosystem has always been better positioned to cater to specific edge cases through hardware variety, and things like music and motion qualify. In practice, though, it’s down to whoever delivers the best user experience and overall value.

If Windows continues to improve the OS experience and offer competitive hardware options, I don’t doubt that we’ll see some re-balancing of the OSes used by creative users.

This is nothing new; we’ve seen regular oscillations between platforms for decades. But I think the next months will be revealing; you compete with what you’re shipping, and PC makers keep shipping new stuff while Apple isn’t.


The first generation of CDs is already rotting and dying

Digital media is a double-edged sword. Digital data itself can be duplicated an unlimited number of times without any generational loss – meaning it can theoretically last forever. But digital storage on physical media is subject to failure – and that failure can render the data inaccessible. In other words, archivists (including you) have to transfer data before the media fails.

And we’re already entering an age when one of the most popular formats is reaching the start point for common failures.

A report by Tedium (republished by Motherboard) demonstrates one of the most alarming failures. Some media, evidently using faulty dyes, can fail in under ten years, via something unpleasantly dubbed “disc rot.”

The Hidden Phenomenon That Could Ruin Your Old Discs

At issue is the fact that optical media uses a combination of different chemicals and manufacturing processes. That means that while the data storage and basic manufacturing of a disc are standardized, the particulars of how it was fabricated aren’t. Particular makes and particular batches are subject to different aging characteristics. And with some of these failures occurring in less than ten years, we’re finding out just how susceptible discs are outside of lab test conditions.

In short, these flaws appear to be fairly widespread.

That just deals with a particular early failure, however. In general, CD formats start to fail in significant numbers inside 20 years – on average, not just counting these rot-prone flawed media.

What’s tough about this is that the lifespan can be really unpredictable. Before you dismiss the CD as a flawed storage format, many discs do reach a ridiculously long lifespan. The problem is really the variability.

To get an accurate picture, you need to study a big collection of different discs from a lot of different sources. Enter the United States of America’s Library of Congress, who have just that. In 2009, they did an exhaustive study of disc life in their collection – and found at least some discs will be usable in the 28th Century (seriously). The research is pretty scientific, but here’s an important conclusion:

The mean lifetime for the disc population as a whole was calculated to be 776 years for the discs used in this study. As demonstrated in the histograms in Figures 18 and 19, that lifetime could be less than 25 years for some discs, up to 500 years for others, and even longer.

COMPACT DISC SERVICE LIFE: AN INVESTIGATION OF THE ESTIMATED SERVICE LIFE OF PRERECORDED COMPACT DISCS (CD-ROM) [PDF, Preservation Directorate, Library of Congress]

Other research found failures around 20-25 years. That explains why we’re hearing about this problem round about now – the CD format was unveiled in 1982, and by the 90s we all had a variety of optical disc storage to deal with.

There are two takeaways – one is obviously duplicating vital information on a regular basis. The other, perhaps more important solution, is better storage. The Library of Congress found that even CDs at the low end of life expectancy (like 25 years) could improve that lifespan by twenty five times if stored at 5 degrees C (41 degrees F) and 30% relative humidity. So, better put that vital collectors’ DVD in the fridge, it seems. That means instead of your year-2000 disc failing in 2025, it fails in the 27th Century. (I hear we have warp-capable starships long before then.)
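If you want to check that arithmetic yourself, here’s the back-of-the-envelope version, assuming the twenty-five-fold improvement applies directly to a worst-case 25-year disc pressed in 2000:

```python
# Back-of-the-envelope check on the cold-storage claim: a worst-case 25-year
# disc, stored at 5 degrees C / 30% relative humidity, lasts ~25x longer.
low_end_lifespan_years = 25
improvement_factor = 25
pressed_year = 2000

cold_lifespan = low_end_lifespan_years * improvement_factor  # 625 years
failure_year = pressed_year + cold_lifespan                  # 2625
century = failure_year // 100 + 1                            # 27th century

print(cold_lifespan, failure_year, century)  # -> 625 2625 27
```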

But anyone using discs for backup and storage on their own should take this even more seriously, because numerous studies find that writeable CD media – as we purchased with optical drives in the 90s – are even more susceptible to failure.

There are many other issues around CDs, including scratch and wear. See this nice overview, with some do’s and don’ts:

CD and DVD Lifetime and Maintenance [wow, 2007 Blogger!]

Or more:
CDs Are Not Forever: The Truth About CD/DVD Longevity, “Mold” & “Rot” [makeuseof]

I’ve seen some people comment that this is a reason to use vinyl. But that misses the point. For music, analog storage media are still at a disadvantage. They still suffer from physical degradation, and reasonably quickly. For digital media, hard disc failures are even more frequent than CD failures (think under three years in many cases), and network-based storage with backups more or less eliminates the problems of aging generally, in that data is always kept in at least two places.

The failure of CDs seems to be more of a case of marketing getting divorced from science. We’re never free of the constraints of the physical world. As an archivist will tell you, we simply have to adapt – from duplication to climate control.

But I’d say generally, with network-connected storage and automation, digital preservation is now better than ever. The failure point is humans; if you think about this stuff, you can solve it.


Why KORG Gadget on the Mac is a big deal

Remember when some pundits thought we were all going to dump our laptops and switch to tablets and iPads? So – not so much. But mobile platforms are having a big impact on music software – and KORG Gadget, now making the leap from iOS to Mac, may be most emblematic of that.

Who is KORG Gadget for? Well, sort of for everyone. Beginning users can find it a nice way to play around – and might well try this before desktop software. More advanced users are likely to find it an appealing set of tools, but would want to use it to extend other hardware and software – on the go, or integrated with those tools when they’re at home or in the studio ready to work.

If you haven’t tried it and you’ve got an iPad (or iPhone, even), Gadget is great – fun to play, lots of tools, and lots of great sounds. KORG also have nailed the smart approach of adding modules in a way that’s fun, so that adding additional instruments feels a bit like getting a new cartridge for your Game Boy or adding a stomp box to your pedalboard.

Gadget started on these Apple things.

Now, adding Mac support fills in some gaps – especially because of how KORG has gone about it. This looks like a template for what software development in 2017 should be:

Social. Allihoopa is just emerging as a way of sharing music with other producers, but KORG are embracing it. (The sharing site began its life with Propellerhead before being spun off. So naturally Reason, Figure, and Take all have integration – and KORG Gadget, too.) That seems essential, given the signal-to-noise problems sharing music online.

Synced. Ableton Link support, also quickly becoming a must, means you can sync with Ableton Live, Reason, Maschine, and other apps on desktop, plus loads of apps on iOS – so, easy local sync on your computer between software tools, easy sync between computers, easy sync with mobile, whether you’re playing alone or jamming with other people.

Wireless. There’s Bluetooth MIDI support, too. For new users, this means the possibility of using hardware without thinking about wires and MIDI adapters.

It makes sense on your computer screen. Full-screen apps are a bit silly on the more generous screen real estate on your desktop, so KORG have opted for a four-app split-screen approach that makes loads of sense.

Complete plug-in support, when you want it. AU (for Logic and GarageBand), AAX (for Pro Tools), and VST (for everything else) are all supported. There’s even NKS support, which lets you integrate with Native Instruments hardware and software easily. (For instance, you’ll get physical controls on NI’s Maschine hardware and keyboards.) The upshot of this: all those clever independent instruments and effects from the iPad are now just as modular on the desktop, dropped into whatever your software of choice is.

On the go and back again. The whole point of this, of course, is the ability to complete workflows between desktop and mobile seamlessly. And that’s where a lot of conventional software from Native Instruments, Ableton, Propellerhead, and others are a little uneven (partly because they began their life on desktop). Here, you have essentially the same tools in both places.

Gadget on the Mac also brings some new devices – a 16-pad drum machine, and two new audio recording tools.

But there are two paths here – the beginner and the more advanced user. Beginners may find this a way to start to take steps from mobile to desktop tools (and hardware). Advanced users may come from the opposite direction – trying Gadget with or without an iPad, and integrating on-the-go or casual use with sitting down seriously at a computer and finishing a track.

This gets us out of a cul-de-sac in music making software that we’ve been stuck in for a few years. Desktop software has always tended to be more complex and larger, with fairly monolithic tools that try to appeal to everyone, but then tend to turn off newcomers. Mobile software may seem like a way out of that, except that the low price points users demand on the app stores make it hard to justify development costs. Innovation on both tends to be stymied by those same problems.

So, imagine instead that you combine the benefits of both.

KORG Gadget is then just one small step. And it’s also limited to Apple platforms – just as Windows gets a bunch of interesting hardware. But it could be a nice sign of things to come.

We’ll be watching closely to see how KORG prices Gadget on the Mac versus mobile, what the experience is like on desktop (since we’re judging only by iOS), and who embraces it.

But it’s very nice to see an option like this that looks friendly to beginners, without forcing advanced users to give up their way of working. We’ll be eager to test it.

Also, lest it seem like I’m waxing poetic about Gadget for no reason — I’m very much indebted to other people who have spent loads of time working out how to get the most out of it and making great music. Our friend Jakob Haq has done some nineteen videos so far for Gadget alone, and they’re chock full of tips and musical inspiration.

Have a look – as these videos might be relevant to you for the first time if you’re on the Mac but don’t have an iPad:

http://www.korg.com/us/products/software/korg_gadget/


Sennheiser wants to bring 3D audio recording to the masses

The consumer electronic drive to high definition and virtual reality is having a curious, parallel impact on sound. And so it is that Sennheiser now want to market binaural recording to your average smartphone owner – really.

Now, of course, the normal human perception of reality includes both visual depth perception and the ability to localize sound in a 360-degree sphere around the head. That is, provided only one’s eyes and ears are fully functional and each pair is intact, the human brain adapts to these perceptions.

But “3D” visuals and “3D” sounds aren’t themselves directly connected in terms of technology. First, until we begin connecting directly to the human brain, any of the tech billed as 3D is illusory, aimed only at creating sensations that remind us of our normal perception. (And, remarkably, for years even two-dimensional images and monophonic sound sources have done a pretty reasonable job!)

From a marketing standpoint, though, the connection is more real than ever.

And what I think may be exciting to music and audio enthusiasts is that this means specialist technology we’ve loved for years is suddenly becoming mainstream.


Sennheiser’s AMBEO isn’t itself revolutionary, apart from the fact that it’s marketed to the masses. It’s a binaural microphone recording system that adds mics to conventional in-ear headphones. The personal nature of audio here offers an advantage: because you recorded with your own skull wearing the headphones, you’ll be able to play back the same recording with what I imagine is a sensation of “being there” again. That is, the mics were in your own head, so the sound will seem to you to be natural.

There’s also an accompanying VR microphone with a capsule pictured here, though Sennheiser haven’t provided any other details of that. I’ll try to get hands-on with this hardware soon.

Sennheiser haven’t said much about this mic. Technologically, it’s unrelated – basically, it seems to be a 3-capsule condenser for more precise spatialization. But it also demonstrates that more products are coming under the 3D sound rubric.

Sennheiser makes a really weird claim in the press release – they say that they contributed to the first wave of binaural audio by introducing the first open-ear headphones. Uh – no. But that said, I think Sennheiser are the ideal brand to introduce this tech to the listening public, especially with their combined prowess in mics and headphones and their ability to produce both leading pro and leading consumer solutions.


There is an element missing here. So, these binaural recordings will sound really three-dimensional – to you. But give them to someone else, and because they’re essentially listening to a recording made with your skull, the results won’t be as effective. What’s cool about the AMBEO line, though, is it’s the first step. The next step, I think, will be self-calibration routines in software.

And we’re practically there already. Remember that 3D scanning app Microsoft showed recently? If you can produce a three-dimensional model with your phone, you can adjust sound playback for each listener’s head.

It’s going to look a little weird doing the calibration routine, in that you’ll be waving your phone around your head, magician style. But you would only do that once for each listener – and there’s no special hardware to wear, either. (Take that, VR helmets.)
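To make the “calibrated to your head” idea concrete: binaural rendering ultimately comes down to convolving a mono source with a left and a right head-related impulse response (HRIR) for a specific listener. Here’s a rough sketch in Python – the file names are placeholders standing in for whatever a measurement (or, someday, a head scan) would produce.

```python
# Rough sketch of binaural rendering: convolve a mono signal with left/right
# head-related impulse responses (HRIRs) for a specific listener. All file
# names are placeholders; real HRIRs would come from a measurement or,
# someday, from a head-scan-based estimate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, mono = wavfile.read("source_mono.wav")      # hypothetical mono source
_, hrir_left = wavfile.read("hrir_left.wav")      # hypothetical per-listener HRIRs
_, hrir_right = wavfile.read("hrir_right.wav")

mono = mono.astype(np.float64)
left = fftconvolve(mono, hrir_left.astype(np.float64))
right = fftconvolve(mono, hrir_right.astype(np.float64))

# Pad to equal length, interleave to stereo, normalize, and write out.
n = max(len(left), len(right))
stereo = np.stack([np.pad(left, (0, n - len(left))),
                   np.pad(right, (0, n - len(right)))], axis=1)
peak = np.max(np.abs(stereo))
if peak > 0:
    stereo /= peak
wavfile.write("binaural_out.wav", rate, (stereo * 32767).astype(np.int16))
```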

This is also the latest evidence in why the move to digital headphones and away from headphone jacks isn’t necessarily such a bad thing.

That said, the connector is an issue. Whereas Android vendors are using a standard USB-C port, Apple continue to insist on Lightning. To add insult to injury, they’ve missed the opportunity to add their own proprietary port to their own line of laptops – I think there’s absolutely no rational explanation for why the new MacBook Pro standardizes on USB-C but lacks Lightning, unless Apple are planning to themselves dump Lightning for USB-C.


But let’s not get too hung up on that. The long view is still a positive one.

And it involves not one but two transformations. Not only do you start recording and playing back sound in a way that’s more naturally spatialized than stereo, but you open new possibilities by adding dedicated microphones to headphones.

We’re entering an age that could really change how people listen and record sound. There are applications for deep listening, field recordings, sound walks, for acoustic ecology and sound sensing, for fitness applications that are mindful of exposure to sound and potential hearing damage. Oh, yeah, and … I for one welcome all the mad amounts of bootlegging that will invariably occur. But maybe that’s because I always flake and don’t record my sets.

Mark my words: even if this specific Sennheiser product flops, this stuff is the future. And it’s been a long time coming.

Capture your world in 3D


Surprise, Final Cut Pro could be the MacBook’s killer feature

Here’s an unexpected twist in the plot: Final Cut Pro, the product that perhaps more than any other earned ire from users for not being “pro,” might be the thing that sells you on the Mac.

Why? Final Cut Pro is really, really fast.

After all, paper specs don’t matter. It’s real-world performance in the software you use that counts. And there, Final Cut Pro is a bit of a champ.

Indie tech reporter / filmmaker Jonathan Morrison has a snappy review that gets to the point.

Now, first, you’ll read a lot of reviews complaining the MacBook Pro isn’t really “Pro.” They mean that literally. Apple inexplicably made a clearly differentiated line, with a 13″ with only two USB-C ports and no Touch Bar, and then 13″ models with USB-C ports, a Touch Bar, and faster graphics. But they didn’t give the low-end model a different name, like MacBook, which means there’s no point to having anything called “Pro” anyway.

More on the models in a moment.

But the important thing in Jonathan’s review is the speed of Final Cut Pro versus Premiere, the other popular choice. His benchmark is just an H.264 render, but that’s exactly the kind of thing you’d notice when you’re up against a deadline – and even with a slower CPU inside, the Mac smokes the PC.

Jonathan isn’t the only one pointing this out. In fact, I’d say Final Cut is generally fast enough that you actually feel the difference subjectively. You can toss loads of high-res footage at the software and you’re almost never waiting for a render. For sheer performance, Final Cut and Compressor are a beautiful combination.

And sure enough, liking Final Cut Pro X makes pros feel differently about the Mac. Here’s another example:
One Professional’s Look At The New MacBook Pro [Huffington Post]

That writer is an editor with Trim Editing in London. As he puts it:

First off, It’s really fast. I’ve been using the MacBook Pro with the new version of FCP X and cutting 5k ProRes material all week, it’s buttery smooth. No matter what you think the specs say, the fact is the software and hardware are so well integrated it tears strips off “superior spec’d” Windows counterparts in the real world. This has always been true of Macs.

He also praises Touch Bar support in Final Cut, which is to me definitely a place where it makes lots of sense, since video editing necessarily includes a lot of contextually specific parameters and commands. I can also imagine it’s handy when editing on the go (at least until Apple unveils an external Touch Bar keyboard, which they really ought to do).

This still may not necessarily be a reason to buy the new Macs – but it might be a reason to look at, say, a spec’ed out previous-generation model on sale. And it’s definitely something to consider when comparing Mac and Windows laptops.

Also, what was largely missed in the midst of the hullabaloo over the laptop was a significant update to Final Cut Pro X software.


Meet 10.3

Final Cut Pro X 10.3 is simply the first update in the X series to actually be excited about.

Most noticeably, there’s an all-new look. It’s amazing to me how much of a difference this makes, even if it’s partly psychological. The UI is cleaner, even though structurally it’s the same fundamental UI from the previous Final Cut Pro X. That means more room to work and less of a feeling that the UI is distracting.

Just as important as the extra room the new aesthetics give you: you can finally make custom window layouts, hide the Timeline, or stick the Timeline on another display. You can also use Thunderbolt to drive an external display.

So, it’s fast, and the UI is nice. That’s little comfort if you just don’t like the way you edit in FCP X (especially if you were an FCP 7 devotee or switched over to Premiere).

But Apple has worked on the Magnetic Timeline, too. First off, I think audio handling in Final Cut is now more enjoyable than any program since Vegas, and that one came from an audio developer. You can use audio “roles” to fluidly view, manage, and edit complex project audio. Roles and color coding are generally expanded.

There’s also Wide Color support, which works in conjunction with those new Mac displays.

Deeper down, there are lots of minor improvements that add up to the program feeling more intuitive, including enhancements to the already-terrific multicam support in FCP X.

Parts of the program still feel like iMovie Pro rather than Final Cut, but then Premiere can sometimes go there, too.

I don’t think this will necessarily win you over if you’re more productive editing in Premiere. But I do think that Apple has done a lot to finally address the stuff that annoyed users about Final Cut, and to hammer out a lot of quality issues. If you haven’t used Final Cut Pro lately, you really won’t be aware of this stuff.

I still don’t understand why things like video export in QuickTime are hobbled (requiring a trip to Compressor), but I can say there’s at least some reason to use this program.

Also, I’d love to see that dark UI in Logic. Given the new direction of Final Cut, I’m really curious to see where Logic Pro goes next.

There’s a lot in 10.3; see it here:
Final Cut Pro X release notes

Ouch. It hurts.

So, back to those MacBooks…

It’s not just the product; it’s also how you tell the story of the product. And I think there’s no question that Apple told the story of the new MacBook Pro poorly – at least from the pro perspective. Consumers may well have warmed to the product.

Certainly, it’s selling well:
2016 MacBook Pro Sales Defy Critics: Tops All New Laptops With Shoppers

Mostly what that says to me is that people shopping don’t really worry about reviews here. For one, I think a lot of people still just want a Mac. All they needed to know here was that new models had arrived, at last. Also, the complaint from me and other pro users was not that the MacBook Pro was generally deficient, only that it didn’t match our own expectations and needs.

What I will say about the new Mac line – it’s really expensive, even with Apple discounting its adapters.

The basic 13″ model, without the Touch Bar, starts at US$1499 with an anemic 256GB of storage and 8GB of RAM, plus slightly slower graphics, and only a 2GHz dual-core Intel i5. Now, you could certainly dispense with the Touch Bar and just load that model up with RAM and storage, but then you’re stuck with only two Thunderbolt ports. Since one of those is used for power, that’s probably not going to make you very happy.

So, more likely you start with the US$1999 model, which finally gets you 512GB internal storage. Upgrade it to 1TB internal storage and 16GB of RAM and you’re at $2599 for a 13″ dual-core notebook with no dedicated GPU. That’s… pretty crazy.

For quad-core CPU and dedicated GPU, you really want the 15″ model. Even the basic model, with only a 2GB GPU, is going to run you $2999 for 16GB RAM / 1TB HD. Upgrade the CPU and GPU one step and you’re at $3499.

So, why would you do it? Well, there are certainly some advantages in Apple’s court.

All reviews of the display have been terrific, and it does give you full color.

Those internal SSDs are best-of-breed fast, faster than what you get in competing models – which partly explains Apple’s higher price. (But that means you do really want to upgrade them to more storage space when you purchase, since otherwise you can’t take much advantage of it working with media.)

The Touch Bar, while a gimmick, does appear to offer some useful customizable shortcuts, though I wish it included haptic feedback when you touch it.

I’ll be honest, though. On a budget, I’d be inclined to get a high-end model of the last generation MacBook Pro – especially if I were using a desktop monitor and not so worried about the improvements to brightness and color gamut.

Also, anyone considering the new models would do well to wait a few months while the accessory situation for USB-C and Thunderbolt becomes clearer. I haven’t heard audio manufacturers certifying these machines yet, and anyone spending this much on a notebook computer will want to avoid any potential compatibility issues.

Early compatibility tests are not encouraging. I think it’s better to wait and get some data on what works reliably – and maybe see if there are driver or OS updates, too.

Look, Apple’s products are exceptionally reliable, exceptionally cool and quiet (which matters a lot in audio), and exceptionally high end.

The reason some of us are looking elsewhere is, this is enough of a price/performance difference to shop around. Windows has done a lot of improvement on the audio side. And on the live visual side, having a GPU is a real advantage. My friend Tarik Barri, for instance, was an early adopter of the apparently now-defunct Mac Pro. It meant the ability to drive high-res visuals. And Tarik is a huge macOS fan. Now, I know even Tarik was frustrated with the latest Mac offerings, as is everyone I talk to who does live visuals. This is probably a niche so small we number in the dozens, but – that’s the nature of the general-purpose PC as a product. It serves lots of tiny niches.

For sheer GPU power, laptops like the Razer Blade give you desktop GPUs in a form factor and price that’s similar to Mac laptops that lack even dedicated GPUs.

I’m eager to try one to see if fan noise is distracting.

Meanwhile, the Dell XPS line is a good tradeoff – modern specs, not quite the latest gaming GPU like the Razer’s, but well balanced. One of my colleagues has this in the office, and it’s a really fine machine. It’s quiet, it’s fast, the display and build are great, and … oh yeah, it’s dramatically cheaper than the Apple.

Should Adobe just go and make Premiere faster? Yes, please. Imagine what it could do on this fast hardware if given the chance. But meanwhile, with a diverse range of apps, these specs actually should transfer into real-world performance.

Check the spec sheets on any of these – every Mac user I know who has done so was floored. You get all the new ports (like Thunderbolt 3) without having to give up basic amenities – proof that this isn’t just Apple “looking forward.” And the price is certainly competitive. You can also (cough) go the Hackintosh route with these machines. (Not that I’m supposed to say that, of course.)

Also, I agree with long-time Mac advocates lamenting the loss of the Mac Pro.

So that’s the equation. Apple’s still the high-end option, and still appealing if money is no object. But their offerings are limited to mid-range GPU hardware (charitably), not the latest gear, and the price difference is pretty huge.


How expressive input, immersive 3D might make PCs cooler than Macs

“Pro.” “Creative.” They’re words that are repeated so often in computing it’s easy for some people to forget what they mean.

By definition, though, if a “professional” is getting paid for their work, investing in more power to get their work done has a return on investment. And being “creative” on a machine means pushing it to the limits of expression. This may be the post-PC era after all, but that ought to mean we get computers that focus ever more on those use cases.

Remember Jobs’ infamous quote about trucks? Embedded in his thinking was an answer to what the traditional computer would look like in the era of ever-smarter mobile devices. It would get more specialized – more focused on niches who had more demanding needs. And given Jobs’ own history (including some of his failures, as at NeXT and with Pixar’s abortive hardware entry), he was intensely interested in how to serve those kinds of people.

Last week’s coinciding Apple and Microsoft events made a study in contrast.

Apple wasn’t remarkable so much as it was business as usual. Apple delivers a new generation of its machines. It’s faster, it’s lighter, it’s thinner. It isn’t cheaper. If all you wanted was a new MacBook Pro and for it to be faster, lighter, and thinner, then you probably wound up happy.

The difference last week, though, was that Microsoft was talking about real creative and professional applications. And – surprise! – for once, it had more to say about that than Apple.

Microsoft did have its usual sprawling event. And as is often the case at Microsoft events, some of the interesting things they showed aren’t out yet. (Apple under Cook, as under Jobs, focuses strictly on the products they’re making available.)

But the reason I think Microsoft made a compelling case was that they offered some products that represent new ideas. And they gave pro users some things those customers really want.

There are two trends happening. Microsoft is on them, I think they’re meaningful to our market segment, and Apple is clearly choosing not to pursue them.

And that’s making devices that are more touchable and tangible, and more three-dimensional and immersive.

Desktop touch is finally a thing

One focus is clearly on input.

Surface Studio is a pretty niche product, but it certainly seems to speak to visual artists. I’m also intrigued by how it might work as a studio music machine. As on the Surface Book, Microsoft opts for a 3:2 aspect ratio. But it’s a product you might well expect to come from Apple – a new form factor for desktop computers, and perhaps a new class of computer. It’s expensive as hell – US$2999 is the base model. On the other hand, I think it makes a compelling case for its existence in a way the Mac Pro didn’t. You spend more cash, you get this enormous display with touch and pen input.

Surface Dial is a physical knob, somewhat reminiscent of the Griffin PowerMate, if anyone remembers that. It’s a haptic input device, with clever added functions if you touch it to the Surface models. (Surface Studio only initially, though it seems they’re possibly bringing it to other models later.) And it’s already got app support.

Surface Studio may be more important than it initially appears, too. By giving you such an enormous display, Microsoft also lends justification to adding touch to Windows. It’s tough for a 10″ tablet running Windows 10 to face off against iOS, with interface paradigms built from the ground up for touch. But on a larger display, touch in almost any app becomes more appealing.

And Surface Studio you can think of as literally a canvas for developers. I don’t doubt for a second developers are going to be excitedly buying this thing – I know a few who are. That includes in our music segment. This is why Microsoft’s new hardware strategy ultimately benefits OEMs. It solves the chicken and egg problem of needing new hardware to get new apps to get new hardware.

Google may have an awkward relationship with its OEM phone and tablet makers. But we’re talking Microsoft here – this is the company that invented this ecosystem model. They’ve been building up relationships with hardware makers since the Reagan Administration.

Surface Studio on its own I think really isn’t competitive with the iPad Pro and Pencil. Those are terrific products, Apple Pencil performs beautifully, and because these are mobile devices, some people will get them subsidized by their mobile provider.

But there’s still a story here. Apple’s iOS updates aren’t necessarily in sync with what music developers want. And there’s strong incentive for music developers who sell products for $300, $400, and $500 to stay on desktop operating systems. (Why would Ableton start selling Ableton Live for $19.95?)

What the desktop ecosystem lacked was a flagship. Surface Studio is that flagship. And it’s a prime target for developer expense accounts to start looking at the platform.

It’s still going to be an uphill slog. Windows’ UI is still stuck in the desktop era, and the apps aren’t there yet. But if you want to create a new category seemingly out of thin air, you need an exciting device, which is what Microsoft pulled off. Just ask Apple how important that is.

Take a look at this video, though. While Apple had you tapping emoticons on a function row touchscreen and adjusting sliders and attempting to DJ with the top of the keyboard, Microsoft had a compelling demo with an enormous screen and serious third-party applications. And, oh yeah – actual users and use cases, not just marketing executives showing canned demos.

Entering the third dimension

Knob, touch, and (finally) usable stylus input all represent the expressive, tactile input side of the equation.

The other side of this is the full embrace of three-dimensional graphics paradigms, virtual reality, and augmented reality.

I'm wary of the hype around some of these trends. Consumer electronics makers usually try to create demand for things that allow them to sell more hardware. So they see VR as a cash cow: you've got to buy a new computer, with a new, more powerful graphics card, plus a headset or two. VR seems all too convenient as the sky-high-priced salvation of the sagging PC industry.

Taking the long view historically, though, there’s something there. For decades, almost all computing has been done in two dimensions, outside of games. And it took a long time for even that paradigm to take hold: early graphics in the 60s, XEROX PARC in the 70s, the Mac in the 80s, Windows only going mainstream in the 90s.

Our brains can think in three dimensions, and computing has been about nothing if not feeding our brain with familiar stimuli. Art technique has worked with tricks of virtual perspective for centuries. It seems the computer is due to catch up.

So this isn't just about donning silly-looking goggles. I thought Microsoft had a really compelling view of 3D, end to end. It's capturing 3D data on phones. It's their innovative new paint program. It's full support for 3D information integrated into the Windows update coming early next year. (The sand castle was an elegant example.)

Watching this video brought back memories for me of using the Mac for the first time, and first seeing two-dimensional graphics in Apple's ground-breaking HyperCard and paint apps (thanks, Bill Atkinson). Now, of course, I think Bill's legacy is alive at Microsoft. (Bill is a personal hero of mine; I was fortunate to meet him once at Macworld, where he was touting his exploration of advanced photography of cross sections of rocks – seriously.)

Now – I’m sure Paint 3D isn’t right for every task. But it also makes a compelling case for touch and pen input on Windows, something available on iOS but absent on the Mac. And I’m sure that’s the point.

The capture capability – being able to form 3D models just by pointing your phone at an object – is simply insane.

Really hoping these apps use standard 3D file formats.
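To give a sense of what a standard format buys you: something like Wavefront OBJ is plain text that nearly any 3D tool can read. Here's a minimal sketch of a loader – vertices and triangular faces only, ignoring normals, UVs, and materials – with the filename purely illustrative.

```python
# Minimal sketch of reading a Wavefront OBJ file - one of the long-standing
# plain-text interchange formats for 3D geometry. It handles only 'v' (vertex)
# and 'f' (triangular face) records; normals, UVs, and materials are ignored.

def load_obj(path):
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":      # geometric vertex: v x y z
                vertices.append(tuple(float(c) for c in parts[1:4]))
            elif parts[0] == "f":    # face: f v1 v2 v3 (1-indexed)
                # Each element can look like "12", "12/5", or "12/5/7";
                # the vertex index is always the first field.
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    return vertices, faces


if __name__ == "__main__":
    # "sandcastle.obj" is a placeholder - any OBJ exported from a 3D app works.
    verts, tris = load_obj("sandcastle.obj")
    print(f"{len(verts)} vertices, {len(tris)} triangles")
```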

Virtual reality is what we usually think of first: the solitary experience of a virtual world blocking out the real one around you. But people in fields from architecture to industrial design have long contended with 3D. I think Microsoft's aim to bring this stuff to the masses is admirable. That can be about augmented reality (as with their HoloLens) and 3D information in general.

While Apple was investing in smart watches and TV, Microsoft was making “holographic” technology an entire platform pillar. I hate the misuse of the word “holographic,” but the platform is cool – and while HoloLens is a hugely pricey research project for now, consumer products around both augmented and virtual reality are imminent.

https://www.microsoft.com/microsoft-hololens/en-us

I’m simplifying here, intentionally, because the VR landscape gets … messy. There’s some nice analysis on The Verge.

Microsoft’s other competition is clearly mobile-focused vendors entering this arena. Their 3D capture app was running on a Windows phone, but the implication was that it’d come to iOS.

No matter. Desktop computers are the ones with advanced 3D graphics. And even with mobile catching on, a platform with good 3D support could well become the authoring platform. Just as Apple's iOS App Store drove purchases of Macs for development, so too could this tech establish the PC as a 3D creation tool for people who hadn't even thought of themselves as being in the 3D creation business until now.

And the rest of the PC ecosystem

I don't think you have to be donning a VR helmet or drawing a web comic on a Surface Studio to care. Last week was a week to reconsider Windows regardless. The Microsoft announcements, whether they were relevant to you or not, just added some great theatrics.

And what music and visual creators were discovering was that some of the things we've been putting up with on the Mac just aren't so with Windows. So while Microsoft's Surface line is very premium, in line with Apple's price points, there are other options.

The Surface Book and Surface Studio themselves offer added expressive features missing on the comparable MacBook Pro and iMac, respectively. But after that, it gets more interesting. Pay the same, but get a desktop-class graphics card and loads of ports – no adapters needed. Pay less and get faster graphics and more ports. Get machines with extra power – even at the cost of battery life and heat – if that tradeoff fits your needs. Get matte displays and other options.

Some of this equation really is new. After years of a race to the bottom, PC vendors finally looked at Apple’s offering and their own collapsing profits and reevaluated the industrial design of PC laptops. The post-PC era has had an unexpected side effect: it’s pushed PC makers to make more advanced, high-end laptops, including for the creative segment.

In other words, instead of laptops going away or merging with mobile devices, some have become more like Apple. Only unlike Apple, these devices typically add new buses and connectors (like Thunderbolt and USB-C), rather than take away the old ones (HDMI, legacy USB, SD card, and so on).

Now, don’t get too excited too fast.

I fight for the users

Some of the tradeoffs Apple makes that so frustrate pros also give us stuff we like. So, sure, you get slower GPUs – but you also don’t get fan noise. And you pay more – but you get a machine that’s uncommonly easy to service in a hurry (because of Apple’s network of repair shops). And one with really good design and build.

There are things to like about the new MacBook Pro – yes, the one I was complaining about. The big trackpad holds some potential. The Touch Bar should let you load handy shortcuts in some apps. And if you prefer macOS, this means the older machine is cheaper, and the newer machine is marginally faster.

Still, it’s sad to see the Apple desktop left out of native pen and touch input. It’s frustrating to watch the PC platform embrace the capabilities of 3D when the Mac doesn’t do the same.

To get more specific, it's also disappointing that Mac users can't play along with the powerful new capabilities of NVIDIA graphics chips (even the AMD-equipped MacBook Pro costs $2399 to start, and there's no NVIDIA option at all). It's been frustrating that graphics and audio subsystems have sometimes been unpredictable in recent OS updates.

The ideas from Microsoft aren't perfect. We still have a lot of testing to do. But at least there are new ideas. These are real efforts to explore how you interact with a computer – not a clever (or even useful) gimmick, but genuine thought about how we fundamentally use the machines.

I went back and skimmed some moments from computer unveilings past. Even at the end of Jobs' tenure, Apple's pitch for the Mac was slowly evolving from something centered on users and what they did with the machines to what sounds almost like a description of supply chain and engineering instead.

I don't want to make a Mac versus PC argument – that's not what this is about. In music and visuals, I recall pretty vividly when we were arguing Amiga versus Atari, too. Platform competition is good. Things change.

But I do hope that whoever is playing, the future of the computer is focused on what people can do. And I believe the way to excite that world is to push the capabilities of those computers as far as possible.

The post How expressive input, immersive 3D might make PCs cooler than Macs appeared first on CDM Create Digital Music.

Visualists, here’s the info on the GPU in the new Macs, Surface Books

Audiovisual performance is very much alive as a medium. I'm just coming off two festivals full of inspiring, stunning live visuals (alongside installations and virtual reality artworks). (One was MUTEK Mexico, the other the AV-centric Lunch Meat in Prague.) Live visuals are the definition of an edge case, to be sure – artists appropriating technology developed primarily for gaming – but life is beautiful on the edge.

The big demarcation point in computers for visual work is really the absence or presence of a dedicated GPU. Intel's integrated tech has gotten better on paper, but in my experience it's still fairly useless for even simple video performance in practice. Shared memory and shader incompatibilities often cause massively unpredictable performance. They also tax the CPU, which you'd rather dedicate to audio if you're running A/V shows solo.
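If you want to check which GPU your rendering context actually lands on – integrated or dedicated – here's a quick sketch. It assumes the third-party moderngl package and that its context info exposes the usual OpenGL strings; treat it as illustrative rather than definitive.

```python
# Quick check of which GPU an OpenGL context actually lands on - handy for
# spotting when a laptop silently falls back to the integrated Intel chip.
# Assumes the third-party moderngl package (pip install moderngl).

import moderngl

ctx = moderngl.create_standalone_context()  # headless GL context, no window
renderer = ctx.info["GL_RENDERER"]
vendor = ctx.info["GL_VENDOR"]
version = ctx.info["GL_VERSION"]

print(f"Renderer: {renderer}")
print(f"Vendor:   {vendor}")
print(f"OpenGL:   {version}")

if "Intel" in renderer:
    print("Integrated GPU in use - expect shared memory and less headroom.")
else:
    print("Dedicated GPU in use.")
```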

Now, that said, it may not matter so much which dedicated GPU you have – that depends on whether visual artists are pushing performance into gaming territory. VR applications are more demanding still.

Keeping up with the complicated permutations of graphics chips is a specialization in itself. Fortunately, we have others doing that work for us.

Microsoft’s original Surface Book broke ground by offering the option of a dedicated NVIDIA GPU. Those have been in a multitude of notebook computers under a grand, but they’re still fairly novel in tablets. The dedicated GPU lives in the “dock” component of the machine – the bit with the keyboard – so you lose its functionality if you detach the tablet. But since you can operate the tablet flat in docked mode, that’s just fine.

The original GPU was a bit anemic – roughly equivalent to a 940M, and with only 1GB of memory. That'll be fine for some users, meaning the now steeply-discounted original model is worth a look if you're just doing some light VJing (and I think it still bests the new 13″ MacBook Pro, which has no dedicated GPU at all).

But for more power, Microsoft has updated the GPU in the new Surface Book, which is probably the machine to consider first. As with the original, you’ll pay more for the dGPU model, but I think it’s well worth it.

Microsoft has added a Maxwell NVIDIA 965M. That’s a perfectly good, middle-of-the-road gaming GPU. It’ll cost you, to be sure. Microsoft’s price points here look a lot like Apple’s, in fact, running about $2800 with decent internal storage. But for that price, you get a full tablet touchscreen and pen input in comparison to Apple’s tiny touch strip. (Yes, yes – I know that the Touch Bar isn’t intended to replace multi-touch. But the value comparison remains if you’re doing visual work.)

Plus on Windows, you get an OS that is, frankly, more interesting, with support for Kinect 2 and apps like vvvv and TouchDesigner. If you just want bang-for-your-buck in the GPU category, or a faster GPU, there are substantially better options from MSI, Asus, Dell, Alienware, and the like – this would really be about the Surface Book’s hybrid tablet capabilities. (But that counts for something.)

It's also interesting that Microsoft is naming the GPU in the specs this time. Perhaps a bit embarrassed by the weak chip in the first-generation model, Microsoft didn't even call that one out by name in the technical data. And the 965M isn't a custom chip, either; it's an off-the-shelf NVIDIA GPU you see in other notebooks, so comparison is pretty easy.

In Apple's corner, meanwhile, you get GPUs from AMD.

Ars Technica has some analysis:
AMD reveals Radeon Pro 400 series GPU specs, as used in new MacBook Pro

Apple’s choices, while pricey, are more configurable than Microsoft’s. (That seems fitting, since Microsoft can point to the entire PC ecosystem if you want a different GPU – Apple is the only game in town.)

And this time you get not one but three choices on the MacBook Pro range:
Radeon Pro 450
Radeon Pro 455
Radeon Pro 460

These are current-generation chips, too, whereas Microsoft has opted for a previous-generation part from NVIDIA.

The Radeon Pro 450 is built on a newer architecture, but it's roughly in the same ballpark as the NVIDIA 965M – strictly talking pipelines, bus width, and so on. Actually, it's new enough that it's possible this graphics chipset, and not the development of the Touch Bar, was what determined the release date of this Mac generation.

If you’re willing to pay more, the Radeon Pro 460 starts to give you more serious 3D performance. But Apple simply doesn’t offer current desktop-generation performance. NVIDIA has started to cram that kind of power into laptop-ready chips.

I think it's worth being critical of Apple for this – and this is why I have been saying repeatedly that Apple is ceding a big chunk of the pro market to Windows. At the very least, Apple used to offer desktop-class graphics performance in its desktop range. But with the iMac languishing and the Mac Pro not having seen an update of any kind since launch, your only shot at a new GPU is actually this Radeon Pro 460 – spending something like three grand to get it baked into a notebook computer.

There's another problem, too. By exclusively choosing AMD as the vendor and not NVIDIA, you miss out on the interesting computational work being done with NVIDIA's GPU-native processing platform, CUDA. That includes things like the crazy algorithmic work happening in machine learning.
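For a taste of what that means in practice, here's a minimal sketch using CuPy, a NumPy-compatible array library that runs its math as CUDA kernels – which is exactly why it needs NVIDIA hardware, the option the new MacBook Pro doesn't give you. The matrix sizes here are arbitrary.

```python
# Minimal sketch of GPU-native computation via CUDA, using the CuPy library
# (a NumPy-compatible array package). This only runs on NVIDIA hardware -
# exactly the vendor Apple leaves out of the new MacBook Pro line.

import cupy as cp

# Allocate two large matrices directly in GPU memory (sizes are arbitrary).
a = cp.random.rand(2048, 2048).astype(cp.float32)
b = cp.random.rand(2048, 2048).astype(cp.float32)

# The matrix multiply runs as CUDA kernels on the GPU, not on the CPU.
c = a @ b

# Pull a summary statistic back to host memory only when you need it.
print("mean of result:", float(cp.asnumpy(c.mean())))
```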

It makes sense here to compare the Surface Book and MacBook Pro, in that they're the flagship design showpieces of the two principal rivals in the market. Even making the comparison that narrow, though, I think there's not much contest. The Surface Book is the more creative machine. Its GPU isn't terribly impressive, but the choice of NVIDIA still opens up a lot of interesting experimentation. And you get touch and pen input right away. (The fact that the iMac Pro is an option doesn't really encourage me, either – that suggests maybe you just buy a cheap PC laptop running Windows plus an iMac Pro, in place of the Surface Book!)

Don't get me wrong: the AMD choice makes sense for doing what Apple clearly set out to do. Drive the main graphics apps, push lots of pixels to an external display, and draw as little power and generate as little heat as possible.

The problem is, that middle-of-the-road choice knocks out a lot of more creative possibilities. And it’s expensive.

From there, you have a multitude of PC choices if you don’t crave the convertible form factor of the Microsoft offering. Oh yeah – and you get ports and physical function keys, too.

If it seems like I’m being hard on Apple, I’ll say this to conclude: it’s a very, very strange thing when suddenly artists and developers who have been loyal to macOS since the start are all telling you they’re shopping for a PC. I’m not being hyperbolic.

If you can get a significantly more powerful machine and pocket as much as $1000, well, that’s fairly compelling.

References:

AMD’s Polaris Architecture [MacBook Pro]

AMD Radeon Pro 450 [MacBook Pro]

Reddit discussion of the original Surface Book

Surface Book Tip: Manage the Discrete GPU [Thurrott.com on the original model, though still slightly relevant]

NVIDIA GeForce GTX 965M [the chip just added to discrete GPU models of the Surface Book]

The post Visualists, here’s the info on the GPU in the new Macs, Surface Books appeared first on CDM Create Digital Music.