Reason 10.3 will improve VST performance – here’s how

VST brings more choice to Reason, but more support demands, too. Here’s an update on how Propellerhead are optimizing Reason to bring plug-in performance in line with what users expect.

For years, Reason was a walled-off garden. Propellerhead resisted supporting third-party plug-ins, and when they did, introduced their own native Rack Extensions technology for supporting them. That enables more integrated workflows, better user experience, greater stability, and easier installation and updates than a format like VST or AU allows.

But hey, we have a lot of VSTs we want to run inside Reason, engineering arguments be damned. And so Propellerhead finally listened to users, delivering support for VST effects and instruments on Mac and Windows in Reason 9.5. (Currently only VST2 plug-ins are supported, not VST3.)

Propellerhead have been working on improving stability and performance continuously since then. Reason 10.3 is a much-anticipated update, because it addresses a significant performance issue with VST plug-ins – without disrupting one of the things that makes Reason’s native devices work well.

The bad news is, 10.3 is delayed.

The good news is, it works really well. It puts Reason on par with other DAWs as far as VST performance goes. That’s a big deal to Reason users, precisely because in so many other ways Reason is unlike other DAWs.

I met with Propellerhead engineers yesterday in Stockholm, including Mattias Häggström Gerdt (product manager for Reason). We got to discuss the issue, their whole development effort, and get hands-on with their alpha version.

Why this took a while

Okay, first, some technical discussion. “Real time” is actually not a thing in digital hardware and software. The illusion of a system working in real time is created by buffering – using very small windows of time to pass audio information, so small that the results seem instantaneous to the user.

There’s a buffer size you set for your audio interface – this one you may already know about. But software also has internal buffers for processing, hidden from the user. In a modular environment, you really want this buffer to be as small as possible, so that patching and processing feels responsive – just as it would if you were using analog hardware. Reason accordingly has an internal buffer of 64 frames to do just that. That means without any interruptions to your audio stream, you can patch and repatch and tweak and play to your heart’s content.

Here’s the catch: some plug-in developers prefer larger buffers (higher latency) by design, in order to reduce CPU consumption – even though their plug-ins technically work in Reason’s small-buffer environment. This is common in plug-ins where ultra-low-latency internal processing isn’t as important. But running inside Reason, that approach adds strain to your CPU. Some users won’t notice anything, because they don’t use these plug-ins or use fewer instances of them. But some will see their machine run out of CPU resources faster in Reason than in other DAWs. The result: the same plug-in setup you used in another DAW will make Reason sputter, which is of course not what you want.

Another catch: adjusting the audio buffer size on your interface – the usual trick for taming CPU usage – won’t help in this case. So users encountering this issue are left frustrated.

This is a fixable problem. You give those plug-ins larger buffers when they demand them, while Reason and its devices continue to work as they always have. It’s just that there’s a lot of work going back through all the rest of Reason’s code to adjust for the change. And like a lot of coding work, that takes time. Adding more people doesn’t necessarily even speed this up, either. (Ever tried adding more people to a kitchen to “speed up” cooking dinner? Like that.)
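To picture the technique in code – a minimal, hypothetical sketch of host-side buffer adaptation, not Propellerhead’s actual implementation – imagine a plug-in that prefers 256-frame blocks running inside a 64-frame host:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical buffer adapter: the host keeps running in small 64-frame blocks,
// while a plug-in that prefers bigger buffers is called once per 256 frames.
// Assumes the plug-in block size is a multiple of the host block size.
class BufferAdapter {
public:
    BufferAdapter(std::size_t hostBlock, std::size_t pluginBlock)
        : hostBlock_(hostBlock), pluginBlock_(pluginBlock),
          inFifo_(pluginBlock, 0.0f), outFifo_(pluginBlock, 0.0f) {}

    // Called by the host every 64 frames.
    template <typename ProcessFn>
    void process(const float* in, float* out, ProcessFn pluginProcess) {
        std::copy(in, in + hostBlock_, inFifo_.begin() + filled_);           // stash this small block
        std::copy(outFifo_.begin() + filled_,
                  outFifo_.begin() + filled_ + hostBlock_, out);             // hand back older output
        filled_ += hostBlock_;
        if (filled_ == pluginBlock_) {
            pluginProcess(inFifo_.data(), outFifo_.data(), pluginBlock_);    // one big, cheaper call
            filled_ = 0;
        }
    }

    // The price: this plug-in reports extra latency, which the host can compensate for.
    std::size_t latencyFrames() const { return pluginBlock_; }

private:
    std::size_t hostBlock_, pluginBlock_, filled_ = 0;
    std::vector<float> inFifo_, outFifo_;
};
```

Native Reason devices keep their 64-frame responsiveness; only the plug-ins that ask for bigger buffers pay the added latency, and they get it back as lower CPU load.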

When it’s done, existing Reason users won’t notice anything. But users of the affected plug-ins will see big performance gains.

What to expect when it ships

I sat with the engineers looking at an alpha and we measured CPU usage. The results by plug-in are what you might expect.

We worked with three plug-ins by way of example – charts are here. With iZotope Ozone 7, there’s a massive gain in the new build. That makes sense – a mastering plug-in isn’t so concerned about low-latency performance. With Xfer Records Serum, there’s almost none. Native Instruments’ Massive is somewhere in between. These are just typical examples – many other plug-ins will also fall along this range.

Native Instruments’ Massive gets a modest but measurable performance boost. Left: before. Right: after.

iZotope’s Ozone is a more dramatic example. Stack some instances of this mastering-focused plug-in, and you can max out the CPU quickly in Reason (left). But in the Reason 10.3 alpha, you can see the “big batch” approach resolves that performance issue (right).

Those graphs are from the Mac, but the OS in this case won’t really matter.

The fix is coming to the public. The alpha is not something you want to run; it’s already in the hands of testers who don’t mind working with prerelease software. A public beta won’t happen in the couple of weeks we have left in 2018, but it is coming soon – as soon as it’s done. And of course 10.3 will be a free upgrade for Reason 10 users.

When it ships, Reason 10.3 will give you performance on par with other DAWs. That is, your performance will depend on your CPU and which plug-ins you’re using, but Reason will be more or less the same as other hosts beyond that.

So this isn’t really exciting stuff, but it will make your life easier. We’ll let you know how it goes, and we’ll try to test the final version.

Official announcement:

Update on Reason and VST performance

For more on Reason and VST support, see their support section:

Propellerhead Software: Rack Extensions, ReFills and VSTs


Cherry Audio Voltage Modular: a full synth platform, open to developers

Hey, hardware modular – the computer is back. Cherry Audio’s Voltage Modular is another software modular platform. Its angle: be better for users — and now, easier and more open to developers, with a new free tool.

Voltage Modular was shown at the beginning of the year, but its official release came in September – and now is when it’s really hitting its stride. Cherry Audio’s take certainly isn’t alone; see also, in particular, Softube Modular, the open source VCV Rack, and Reason’s Rack Extensions. Each of these supports live patching of audio and control signals and hardware-style interfaces, and each has rich third-party support for modules with a store for add-ons. But they’re all also finding their own particular take on the category. That means now is suddenly a really nice time for people interested in modular on computers, whether for the computer’s flexibility, as a supplement to hardware modular, or even just because physical modular is bulky and/or out of budget.

So, what’s special about Voltage Modular?

Easy patching. Audio and control signals can be freely mixed, and there’s even a six-way pop-up multi on every jack, so each jack has tons of routing options. (This is a computer, after all.)

Each jack can pop up to reveal a multi.

It’s polyphonic. This one’s huge – you get true polyphony via patch cables and poly-equipped modules. Again, you know, like a computer.

It’s open to development. There’s now a free Module Designer app (commercial licenses available), and it’s impressively easy to code for. You write DSP in Java, and Cherry Audio say they’ve made it easy to port existing code. The app also looks like it reduces a lot of friction in this regard.

There’s an online store for modules – and already some strong early contenders. You can buy modules, bundles, and presets right inside the app. The mighty PSP Audioware, as well as Vult (who make some of my favorite VCV stuff) are already available in the store.

There’s an online store for free and paid add-ons – modules and presets. But right now, a hundred bucks gets you started with a bunch of stuff right out of the gate.

Voltage Modular is a VST/AU/AAX plug-in and runs standalone. And it supports 64-bit double-precision math with zero-latency module processing – but, impressively in our tests, it isn’t as hard on your CPU as some of its rivals.

Right now, Voltage Modular Core + Electro Drums are on sale for just US$99.

Real knobs and patch cords are fun, but … let’s be honest, this is a hell of a lot of fun, too.

For developers

So what about that development side, if that interests you? Well, Apple-style, there’s a 70/30 split in developers’ favor. And it looks really easy to develop on their platform:

Java may be something of a bad word to developers these days, but I talked to Cherry Audio about why they chose it, and it definitely makes some sense here. Apart from being a reasonably friendly language, and having unparalleled support (particularly on the Internet connectivity side), Java solves some of the pitfalls that might make a modular environment full of third-party code unstable. You don’t have to worry about memory management, for one. I can also imagine some wackier, creative applications using Java libraries. (Want to code a MetaSynth-style image-to-sound module, and even pull those images from online APIs? Java makes it easy.)

Just don’t think of “Java” as in legacy Java applications. Here, DSP code runs on a HotSpot virtual machine, so your DSP is actually running as machine code by the time it’s in an end user patch. It seems Cherry have also thought through the GUI: the UI is coded natively in C++, while you can create custom graphics like oscilloscopes (again, using just Java on your side). This is similar to the models chosen by VCV and Propellerhead for their own environments, and it suggests a direction for plug-ins that involves far less extra work and greater portability. It’s no stretch to imagine experienced developers porting for multiple modular platforms reasonably easily. Vult of course is already in that category … and their stuff is so good I might almost buy it twice.

Or to put that in fewer words: the VM can match or even best native environments, while saving developers time and trouble.

Cherry also tell us that iOS, Linux, and Android could theoretically be supported in the future using their architecture.

Of course, the big question here is installed user base and whether it’ll justify effort by developers, but at least by reducing friction and work and getting things rolling fairly aggressively, Cherry Audio have a shot at bypassing the chicken-and-egg dangers of trying to launch your own module store. Plus, while this may sound counterintuitive, I actually think that having multiple players in the market may call more attention to the idea of computers as modular tools. And since porting between platforms isn’t so hard (in comparison to VST and AU plug-in architectures), some interested developers may jump on board.

Well, that and there’s the simple matter that in music, we synth nerds love to toy around with this stuff both as end users and as developers. It’s fun and stuff. On that note:

Modulars gone soft

Stay tuned; I’ve got this for testing and will let you know how it goes.

https://cherryaudio.com/voltage-modular

https://cherryaudio.com/voltage-module-designer


Inside Cypher2, and what could be a more expressive future for synths

For all the great sounds they can make, software synths eventually fit a repetitive mold: lots of knobs onscreen, simplistic keyboard controls when you actually play. ROLI’s Cypher2 could change that. Lead developer Angus chats with us about why.

Angus Hewlett has been in the plug-in synth game a while, having founded his own FXpansion, maker of various wonderful software instruments and drums. That London company is now part of another London company, fast-paced ROLI, and thus has a unique charge to make instruments that can exploit the additional control potential of ROLI’s controllers. The old MIDI model – note on, note off, and wheels and aftertouch that impact all notes at once – gives way to something that maps more of the synth’s sounds to the gestures you make with your hands.

So let’s nerd out with Angus a bit about what they’ve done with Cypher2, the new instrument. Background:

A soft synth that’s made to be played with futuristic, expressive control

Peter: Okay, Cypher2 is sounding terrific! Who made the demos and so on?

Angus: Demos – Rafael Szaban, Heen-Wah Wai, Rory Dow. Sound Design – Rory Dow, Mayur Maha, Lawrence King & Rafael Szaban

Can you tell us a little bit about what architecture lies under the hood here?

Sure – think of it as a multi-oscillator subtractive synth. Three oscillators with audio-rate intermodulation (FM, S&H, waveshape modulation and ring mod), each switchable between Saw and Sin cores. Then you’ve got two waveshapers (each with a selection of analogue circuit models and tone controls, and a couple of digital wavefolders), and two filters, each with a choice of five different analogue filter circuit models – two variations on the diode ladder type, OTA ladder, state variable, Sallen-Key – and a digital comb filter. Finally, you’ve got a polyphonic, twin stereo output amp stage which gives you a lot of control over how the signal hits the effects chain – for example, you can send just the attack of every note to the “A” chain and the sustain/release phase to the “B” chain, all manner of possibilities there.

Controlling all of that, you’ve got our most powerful TransMod yet. 16 assignable modulation slots, each with over a hundred possible sources to choose from, everything from basics like Velocity and LFO through to function processors, step sequencers, paraphonic mod sources and other exotics. Then there’s eight fixed-function mod slots to support the five dimensions of MPE control and the three performance macros. So 24 TransMods in total, three times as many as v1.

Okay, so Cypher2 is built around MPE, or MIDI Polyphonic Expression. For those readers just joining us, this is a development of the existing MIDI specification that standardizes additional control around polyphonic inputs – that is, instead of adding expression to the whole sound all at once, you can get control under each finger, which makes way more sense and is more fun to play. What does it mean to build a synth around MPE control? How did you think about that in designing it?

It’s all about giving the sound designers maximum possibility to create expressive sound, and to manage how their sound behaves across the instrument’s range. When you’re patching for a conventional synth, you really only need to think about pitch and velocity: does the sound play nicely across the keyboard? With 5D MPE sounds, sound designers start having to think more like a software engineer or a game world designer – there are so many possibilities for how the player might interact with the sound, and they’ve got to have the tools to make it sound musical and believable across the whole range.

What this translates to in the specific case of Cypher2 is adapting our TransMod system (which is, at its heart, a sophisticated modulation matrix) to make it easy for sound designers to map the various MPE control inputs, via dynamically controllable transfer function curves, on to any and every parameter on the synth.
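To make that concrete – a hedged, generic sketch of one modulation slot, not FXpansion’s actual TransMod code – the idea is that a per-note MPE source passes through an adjustable transfer curve before scaling a parameter:

```cpp
#include <cmath>

// Hypothetical modulation slot: maps a normalized per-note MPE source
// (pressure, slide, etc., in 0..1) through an adjustable curve onto a
// depth-scaled parameter offset. A synth would hold an array of these per voice.
struct ModSlot {
    float depth = 1.0f;   // how far the source can move the target parameter
    float curve = 0.5f;   // 0.5 = linear; lower bows the response down, higher bows it up

    float shape(float x) const {
        float exponent = std::pow(4.0f, 1.0f - 2.0f * curve);  // maps curve to the range 0.25..4
        return std::pow(x, exponent);
    }

    float apply(float source) const { return depth * shape(source); }
};

// Example use: per-note pressure opening a filter, gently at first:
//   cutoff = baseCutoff + pressureSlot.apply(notePressure) * cutoffRange;
```

Sound designers then tweak depth and curve per parameter, per patch – which is exactly the work Angus describes: making a sound behave believably across the whole gestural range.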

How does this relate to your past line of instruments?

Clearly, Cypher2 is a successor to the original Cypher which was one of the DCAM Synth Squad synths; it inherits many of the same functional upgrades that Strobe 2 gained over its predecessor a couple of years ago – the extended TransMod system, the effects engine, the Retina-friendly, scalable, skinnable GUI – but goes further, and builds on a lot of user and sound-designer feedback we had from Strobe2. So the modulation system is friendlier, the effects engine is more powerful, and it’s got a brand new and much more powerful step-sequencer and arpeggiator. In terms of its relationship to the original Cypher – the overall layout is similar, but the oscillator section has been upgraded with the sine cores and additional FM paths; the shaper section gains wavefolders and tone controls; the filters have six circuits to choose from, up from two in the original, so there’s a much wider range of tones available there; the envelopes give you more choice of curve responses; the LFOs each have a sub oscillator and quadrature outputs; and obviously there’s MPE as described above.

Of course, ROLI hope that folks will use this with their hardware, naturally. But since part of the beauty is that this is open via MPE, are there any interesting applications working with other MPE hardware – have you tried it out on non-ROLI stuff (or with testers, etc.)?

Yes, we’ve tried it (with LinnStrument, mainly), and yes, it very much works – although with one caveat. Namely, MPE, as with MIDI, is a protocol which specifies how devices should talk to one another – but it doesn’t specify, at a higher level, what the interaction between the musician and their sound should feel like.

That’s a problem that I actually first encountered during the development of BFD2 in the mid-2000s: “MIDI Velocity 0-127” is adequate to specify the interaction between a basic keyboard and a sound module, and some of the more sophisticated stage controller boards (Kurzweil, etc.) have had velocity curves at least since the 90s. But as you increase the realism and resolution of the sounds – and BFD2 was the first time we really did so in software to the extent that it became a problem – it becomes apparent that MIDI doesn’t specify how velocity should map on to dB, or foot-pounds-per-second force equivalent, or any real-world units.

That’s tolerable for a keyboard, where a discerning user can set one range for the whole instrument, but when you’re dealing with a V-Drums kit with, potentially, ten or twelve pads, of different types, to set up, and little in the way of a standard curve to aim for, the process becomes cumbersome and off-putting for the end-user. What does “Velocity 72” actually mean from Manufacturer A’s snare drum controller, at a sensitivity setting B, via drum brain C triggering sample D?
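To put numbers on that ambiguity – an illustrative sketch with made-up curves, not values from any real product – two equally defensible velocity maps land “Velocity 72” at very different loudness:

```cpp
#include <cmath>
#include <cstdio>

// Two plausible velocity-to-amplitude curves. MIDI itself doesn't pick one.
float linearAmplitude(int velocity) {                 // amplitude proportional to velocity
    return velocity / 127.0f;
}
float dbRangeAmplitude(int velocity, float rangeDb) { // e.g. a 40 dB dynamic-range mapping
    float db = -rangeDb * (1.0f - velocity / 127.0f);
    return std::pow(10.0f, db / 20.0f);
}

int main() {
    int v = 72;
    std::printf("linear map: %.2f (%.1f dB)\n", linearAmplitude(v),
                20.0f * std::log10(linearAmplitude(v)));
    std::printf("40 dB map:  %.2f (%.1f dB)\n", dbRangeAmplitude(v, 40.0f),
                20.0f * std::log10(dbRangeAmplitude(v, 40.0f)));
    // Roughly -5 dB versus -17 dB for the same incoming note - over a 12 dB gap,
    // before you even factor in the controller's own sensitivity curve.
}
```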

Essentially, you run into something of an Uncanny Valley effect (a term from the world of movies / games where, as computer generated graphics moved from obviously artificial 8-bit pixel art to today’s motion-captured, super-sampled cinematic epics, paradoxically audiences would in some cases be less satisfied with the result). So it’s certainly a necessary step to get expressive hardware and software talking to one another – and MPE accomplishes that very nicely indeed – but it’s not sufficient to guarantee that a patch will result in a satisfactory, believable playing experience OOTB.

Some sound-synth-controller-player combinations will be fine, others may not quite live up to expectations, but right now I think it’s natural to expect that it may be a bit hit-and-miss. Feedback on this is something I’d like to actively encourage; we have a great dialogue with the other hardware vendors and are keen to achieve a high standard of interoperation, but it’s a learning process for all involved.

Thanks, Angus! I’ll be playing with Cypher2 and seeing what I can do with it – but fascinating to hear this take on synths and control mapping. More food for thought.

https://fxpansion.com/products/cypher2/

http://roli.com/


These fanciful new apps weave virtual music worlds in VR and AR

Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).

Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing some goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)

And indeed, we’ve seen this stuff highlighted a lot recently, from game and PC companies talking VR (including via Steam), Facebook showing off Oculus (the Kickstarter-funded project it acquired), and this week Apple making augmented reality a major selling point of its coming iOS releases and developer tools.

But what is this stuff actually for?

That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.

They’ve got two apps now, one for VR, and one for AR.

Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:

Unlike the sound toys we saw just after the release of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning it into a (mobile) venue. So in addition to Matmos, you get creations by the likes of a Ryuichi Sakamoto collaborator, or Robert Lippok (of Raster Media, née Raster-Noton).

But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.

The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re also helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper with the music, and take the experience home.

The results can be totally crazy. Here’s one example:

Pitchfork go into some detail as to how this app came about:

Fields Wants to Be The Augmented Reality App for Experimental Music Fans and Creators Alike

More on the app, including a download, on its site:

http://fields.planeta.cc/

And then there’s Drops – a “rhythm garden.”

We’ve seen some clumsy attempts at VR for music before. Generally, they involve rethinking an interface that already works perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” them in a way that … makes them slightly stupid to use.

It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.

And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!

And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:

One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.

VR Visionaries: Planeta

Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.

Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)

Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking about music as an isolated element, and connecting it to architecture and memory.)

We were talking about imagining sound. Sounds from memories, sound from everyday life, and unheard sounds. Later we started to create sonic events just with words, which we translated into some tracks. “Drawing from Memory” is a sonic interpretation of one of those sound/word pieces. FIELDS now makes it possible to unfold the individual parts of this composition and at the same time frees it from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.

Check out that whole article, as it’s also a great read:

Launch: Planeta, addressing the future of interface-sound composition

Find the apps:

http://fields.planeta.cc
http://drops.garden

And let us know if you have any questions or comments for the developers, or on this topic in general – or if you’ve got a creation of your own using these technologies.


Apple to open source, cross-platform GPU tech: drop dead?

Apple’s decision to shift to its own proprietary tech for accessing modern GPUs could hurt research, education, and pro applications on their platform.

OpenGL and OpenCL are the industry-standard specifications for writing code that runs on graphics architectures, for graphics and general-purpose computation, including everything from video and 3D to machine learning.

This is relevant to an ongoing interest on this site – those technologies also enable live visuals (including for music), creative coding, immersive audiovisual performance, and “AI”-powered machine learning experiments in music and art.

OpenGL and OpenCL, while sometimes arcane technologies, enable a wide range of advanced, cross-platform software. They’re also joined by a new industry standard, Vulkan. Cross-platform code is growing, not shrinking, as artists, researchers, creative professionals, experimental coders, and other communities contribute new generations of software that work more seamlessly across operating systems.

And Apple has just quietly blown off all those groups. From the announcement to developers regarding macOS 10.14:

Deprecation of OpenGL and OpenCL

Apps built using OpenGL and OpenCL will continue to run in macOS 10.14, but these legacy technologies are deprecated in macOS 10.14. Games and graphics-intensive apps that use OpenGL should now adopt Metal. Similarly, apps that use OpenCL for computational tasks should now adopt Metal and Metal Performance Shaders.

They’re also deprecating OpenGL ES on iOS, with the same logic.
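In practice, at least as of the macOS 10.14 SDK, “deprecated” means existing GL code still compiles and runs – it just gets flagged with compiler warnings, which Apple’s own opt-out macro can silence. A minimal illustration (the silencing macros are Apple’s; the rest is just a generic example):

```cpp
// Define before including the headers to quiet the 10.14 deprecation warnings.
// (iOS has an equivalent, GLES_SILENCE_DEPRECATION, for OpenGL ES.)
#define GL_SILENCE_DEPRECATION
#include <OpenGL/gl3.h>

void clearFrame() {
    // Still works today - "deprecated" is a warning about the future,
    // not a removal in 10.14 itself.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
}
```

The open question is how long that remains true.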

Metal is fine technology, but it’s specific to iOS and Mac OS. It’s not open, and it won’t run on other platforms.

Describing OpenGL and OpenCL as “legacy” is indeed fine. But as usual, the issue with Apple is an absence of information, and that’s what’s problematic. Questions:

Does this mean OpenGL apps will stop working? This is actually the big question. “Deprecation” in the case of QuickTime did eventually mean Apple pulled support. But we don’t know if it means that here.

(One interesting angle for this is, it could be a sign of more Apple-made graphics hardware. On the other hand, OpenGL implementations were clearly a time suck – and Apple often lagged major OpenGL releases.)

What about support for Vulkan? Apple are a partner in the Khronos Group, which develops this industry-wide standard. It isn’t in fact “legacy,” and it’s designed to solve the same problems as Metal does. Is Metal being chosen over Vulkan?

Cook’s 2018 Apple seems to be far more interested in showcasing proprietary developer APIs. Compare the early Jobs era, which emphasized cross-platform standards (OpenGL included). Apple has an opportunity to put some weight behind Vulkan – if not at WWDC, fair enough, but at some other venue?

What happens on the Web? Cross-platform here is even more essential, since your 3D or machine learning code for a browser needs to work in multiple scenarios.

Transparency and information might well solve this, but for now we’re a bit short on both.

Metal support in Unity. Frameworks like Unity may be able to smooth out platform differences for developers (including artists).

A case for Apple pushing Metal

First off, there is some sense to Apple’s move here. Metal – like DirectX on Windows or Mantle from AMD – is a lower-level language for addressing the graphics hardware. That means less overhead, higher performance, and extra features. It suggests Apple is pushing their mobile platforms in particular as an option for higher-end games. We’ve seen gaming companies Razer and Asus create Android phones that have high-end specs on paper, but without a low-level API for graphics hardware or a significant installed base, those are more proof of concept than they are useful as game platforms.

And Apple does love to deprecate APIs to force developers onto the newest stuff. That’s why so often your older OS versions are so quickly unsupported, even when developers don’t want to abandon you.

On mobile, Apple never implemented OpenCL in the first place. And there’s arguably a more significant gap between OpenGL ES and something like Metal for performance.

Another business case: Apple may be trying to drive a wedge in development between iOS and Android, to ensure more iOS-only games and the like. Since they can’t make platform exclusives the way something like a PlayStation or Nintendo Switch or Xbox can, this is one way to do it.

And it seems Apple is moving away from third-party hardware vendors, meaning they control both the spec here and the chips inside their devices.

But that doesn’t automatically make any of this more useful to end users and developers, who reap benefits from cross-platform support. It significantly increases the workload on Apple to develop APIs and graphics hardware – and to encourage enough development to keep up with competing ecosystems. So there’s a reason for standards to exist.

Vulkan offers some of the low-level advantages of Metal (or DirectX) … but it works cross-platform, even including Web contexts.

Pulling out of an industry standard group

The significant factor here about OpenGL generally is, it’s not software. It’s a specification for an API. And for the moment, it remains the industry standard specification for interfacing with the GPU. Unlike their move to embrace new variations of USB and Thunderbolt over the years, or indeed the company’s own efforts in the past to advance OpenGL, Apple isn’t proposing an alternative standard. They’re just pulling out of a standard the entire industry supports, without any replacement.

And this impacts a range of cross-platform software, open source software, and the ability to share code and research across operating systems, including but not limited to:

Video editing
Post production
Generative graphics
Digital art
VJing and live visual software
Creative coding
Machine learning and neural network tools

Cross-platform portability for those use cases meets a significant set of needs. Educators wanting to teach shader writing now face students on Apple hardware having to use a different language, for example. Gamers wanting access to the largest possible library – as on services like Steam – will now likely see more platform-exclusive titles instead on Apple hardware. And pros wanting access to specific open source, high-end video tools… well, here’s yet another reason to switch to Windows or Linux.

This doesn’t so much impact developers who rely on existing libraries that target Metal specifically. So, for instance, developing in the Unity Game Engine means your creation can use Metal on Apple platforms and OpenGL elsewhere. But because of the size of the ecosystem here, that won’t be the case for a lot of other use cases.

And yeah, I’m serious about Linux as a player here. As Microsoft and Apple continue to emphasize consumers over pros, cramming huge updates over networks and trying to foist them on users, desktop Linux has quietly gotten a lot more stable. For pro video production, post production, 3D, rendering, machine learning, research, and – even a growing niche of people working in audio and music – Linux can simply out-perform its proprietary relatives and save money and time.

So what happened to Vulkan?

Apple could have joined with the rest of the industry in supporting a new low-level API for computation and graphics. That standard is now doubly important as machine learning technology drives new ideas across art, technology, and society.

https://www.khronos.org/vulkan/

And apart from the value of it being a standard, Apple would break with important hardware partners here at their own peril. Yes, Apple makes a lot of their own hardware under the hood – but not all of it. Will they also make a move to proprietary graphics chips on the Mac, and will those keep up with PC offerings? (There is currently a Vulkan SDK for Mac. It’s unclear exactly how it will evolve in the wake of this decision.)

ExtremeTech have a scathing review of the situation. It’s a must-read, as it clearly breaks down the different pipelines and specs and how they work. But it also points out that Apple have tended to lag not just in hardware adoption but in their in-house support efforts. That suggests you get an advantage from being on Windows or Linux, generally:

Apple brings its Metal API to OS X 10.11, kicks Vulkan to the curb

Updated: Yes, of course you can run Vulkan atop Metal, via the MoltenVK layer. In fact, here’s a demo from 2016. (Thanks, George Toledo!)

https://moltengl.com/moltenvk/

https://github.com/KhronosGroup/MoltenVK

That’s little comfort for broader backwards compatibility with “legacy” OpenGL, but it does bode reasonably well for the future. And, you know … fish tornadoes.

Side note: that’s not just any fish tornado. The credit goes to Robert Hodgin, the creative coding artist aka flight404, responsible for many, many generative demos over the years – including a classic iTunes visualizer.

Fragmentation or standards

Let’s be clear – even with OpenGL and OpenCL, there’s loads of fragmentation in the fields I mention, from hardware to firmware to drivers to SDKs. Making stuff work everywhere is messy.

But users, researchers, and developers do reap real benefits from cross-platform standards and development. And Metal alone clearly doesn’t provide that.

Here’s my hope: I hope that while deprecating OpenGL/CL, Apple does invest in Vulkan and its existing membership in Khronos Group (the industry consortium that supports that API as well as OpenGL). Apple following up this announcement with some news on Vulkan and cross-platform support – and how the transition to that and Metal would work – could turn the mood around entirely.

Apple’s reputation may be proprietary, but this is also the company that pushed USB and Thunderbolt, POSIX and WebKit, that used a browser to sell its first phone, and that was a leading advocate (ironically) for OpenGL and OpenCL.

As game directors and artists and scientists and thinkers all explore the possibilities of new graphics hardware, from virtual reality to artificial intelligence, we have some real potential ahead. The platforms that will win I think will be the ones that maximize capabilities and minimize duplication of effort.

And today, at least, Apple are leaving a lot of those users in the dark about just how that future will work.

I’d love your feedback. I’m ranting here partly because I know a lot of the most interesting folks working on this are readers, so do please get in touch. You know more than I do, and I appreciate your insights.

More:

https://developer.apple.com/macos/whats-new/

https://www.khronos.org/opengl/wiki/FAQ

https://www.khronos.org/vulkan/

https://developer.apple.com/documentation/metalperformanceshaders

… and what this headline is referencing


Unreal game engine’s modular sound features explained: video

Unreal Engine may be built for games, but under the hood, it’s got a powerful audio, music, and modular synthesis engine. Its lead audio programmer explained it all this afternoon in a livestream from HQ.

Now a little history: back when I first met Aaron McLeran, he was at EA and working with Brian Eno and company on Spore. Generative music in games and dreams of real interactive audio engines to drive it have some history. As it happens, those conversations indirectly led us to create libpd. But that’s another story.

Aaron has led an effort to build real synthesis capabilities into Unreal. That could open a new generation of music and sound for games, enabling scores that are more responsive to action and scale better to immersive environments (including VR and AR). And it could mean that Unreal itself becomes a tool for art, even without a game per se, by giving creators access to a set of tools that handle a range of 3D visual and sound capabilities, plus live, responsive sound and music structures, on the cheap. (Getting started with Unreal is free.)

I’ll write about this more soon, but here’s what they cover in the video:

  • Submix graph and source rendering (that’s how your audio bits get mixed together)
  • Effects processing
  • Realtime synthesis (which is itself a modular environment)
  • Plugin extensions

Aaron is joined by Community Managers Tim Slager and Amanda Bott.

I’m just going to put this out there —

— and let you ask CDM some questions. (Or let us know if you’re using Unreal in your own work, as an artist, or as a sound designer or composer for games!)

Forum topic with the stream:

Unreal Engine Livestream – Unreal Audio: Features and Architecture – May 24 – Live from Epic HQ


KORG are about to unveil their DIY Prologue boards for synth hacking

KORG’s analog flagship synth, introduced earlier this year, hinted at a tantalizing feature – open programmability. It seems we’re about to learn what that’s about.

Amidst some other teasers floating around in advance of Berlin’s Superbooth synth conference this week, the newly-birthed “KORG Analogue” account on Instagram showed us what the SDK looks like. It’s an actual dev board, which KORG seem to be just releasing to interested DIYers.

This should also mean we get to find out more about what KORG are actually offering. The open SDK promises the ability to program your own oscillators and modulation effects, taking advantage of the Prologue’s wavetable capabilities and deep modulation architecture, respectively. Here’s a look:

Now, whether that appeals to you or not, this also will mean a library of community-contributed hacks that any Prologue owners can enjoy.

I can’t think of anything quite like this in synth hardware. There have certainly been software-based solutions for making sounds and community libraries of mods and sounds before. But it’s pretty wild that one of the biggest synth manufacturers is taking what would normally be a developer board for internal use only, and pitching it to the synth community at large. It shows just how much the synth world has embraced its nerdier side. And presumably the notion here is, that nerdy side is palatable, not frightening, to musicians at large.

And why not? If this means the average Prologue owner can go to a website and download some new sounds, bring it on.

Curious if KORG will have anything else this week in Berlin. Looking forward to seeing them – stay tuned.

Korg Analogue Team [Instagram]


This low-latency OS could change how music gear is made

You want the flexibility of PC software, but the performance of standalone gear? A new music OS is the latest effort to promise the best of both worlds.

Sure, analog gear is enjoying a happy renaissance – and that’s great. But a lot of the experimentation with sound production occurs in software (iOS or Windows or Mac) simply because it’s easier (and cheaper) to try things out on an Intel or ARM chip. (ARM is the architecture found in your iPhone or iPad or Android phone, among others; Intel you know.) Some manufacturers are already making the move to standalone hardware based on these architectures – at AES last year, I saw Eventide’s massive coming flagship, which is totally ARM-based. But they’re typically rolling their own operating system, which requires some serious expertise.

MIND Music Labs this month unveiled what they called ELK – a Linux-based operating system they say is optimized for musical applications and high performance.

That means they’re boldly going where… a lot of players have tried to go before. But this time, it’s different – really. First, there’s more demand on the developer side, as more makers have grown intrigued by off-the-shelf CPUs. And developer tools for these options are better than they’ve been. And hardware is cheaper, lower-power, and more accessible than ever, particularly as mobile devices have driven massive scale. (The whole world, sadly, may not really feel it needs an effects processor or guitar pedal, but a whole lot of the world now has smartphones.)

ELK promises insanely low latencies, so that you can add digital effects without delaying the returning signal (which for anything other than a huge reverb is an important factor). And there are other benefits, too, that make music gadgets made with the OS more connected to the world. According to the developers, you get:

Ultra-low latency (1ms round-trip)
Linux-based, using single Intel & ARM CPUs
Support for JUCE and VST 2.x and 3.x plugins
Natively connected (USB, WiFi, BT, 4G)

That connectivity opens up possibilities like sharing music, grabbing updates and new sounds, and connecting to wireless instruments like the ROLI line. There’s full MIDI support, too, though – and, well, lots of other things you can do with Linux.

(JUCE is a popular framework for cross-platform development, meaning you could make one really awesome granular synth and then run it on desktop, mobile, and this platform easily.)

Now, having done this for a while, I’ve seen a lot of claims like this come and go. But at least ELK last week was demonstrated with some actual gear as partners – DVMark, MarkBass, and Overloud (TH-U).

1ms latency claims don’t just involve the OS. Here, ELK delivers a complete hardware platform, so that’s the actual performance including their (high-quality, they say) audio converters and chip. That’s what stops you from just grabbing something like a Raspberry Pi and turning it into a great guitar pedal – you’re constrained by the audio fidelity and real-time performance of the chipset, whether the USB connection or onboard audio. Here, that promises to be solved for you out of the box.
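Some quick arithmetic – my back-of-the-envelope numbers, not ELK’s published internals – shows how aggressive that figure is:

```cpp
#include <cstdio>

// What a 1 ms round trip implies at a common sample rate. The round trip has
// to cover input buffering, processing, output buffering, and converter delay.
int main() {
    const double sampleRate  = 48000.0;  // assumed rate
    const double roundTripMs = 1.0;      // ELK's headline figure
    const double totalFrames = sampleRate * roundTripMs / 1000.0;
    std::printf("1 ms at 48 kHz = %.0f frames for the entire round trip\n", totalFrames);
    // Split naively between an input and an output buffer, that's ~24 frames
    // each, before converter latency - versus the 128 or 256 frames per
    // direction typical of a desktop interface setting.
    std::printf("naive per-direction budget: %.0f frames\n", totalFrames / 2.0);
}
```

That’s why the converters and drivers have to come as part of the package – a general-purpose OS on a generic USB interface usually can’t hit those numbers.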

DVMark’s “Smart Multiamp” was the first real product to show off the platform. Plugin Alliance and Brainworx have signed on, too, so don’t be surprised if you’re soon looking at a dedicated box that can replace your laptop – but also run all your plug-ins.

And that’s the larger vision here – eventually ELK will have its own plug-in format, and you should be able to move your favorite plug-ins around to connected devices and access those gadgets from Android and iOS. But unlike using a computer or iPad on its own, you don’t have to sweat software upgrades or poor audio performance, or try to imagine a laptop or tablet is a good music interface live.

This leaves of course lots of questions about how they’ll realize this vision and more questions if you’re an interested developer or manufacturer. I’m hopeful that they take the Eurorack market as a model – or even look at independent plug-in and app developers – and embrace a model that supports imaginative one-person developers, too. (A whole lot of the best music software and module ideas alike have come from one- and two-person shops.)

I at least like their vision – and I’m sure they won’t be alone. Best line: “Whether your idea of music is to be shut in a studio that looks like the bridge of a Klingon cruiser or you are a minimalist that wants everything to sound exactly like in 1958, we think you will be surprised at just how much smartness is going to affect us as musicians.”

I’ll throw this out here for now and let you ask away, and then we can do a follow-up soon. Loads more info at their site:

https://www.mindmusiclabs.com/elk/


This 16-yo and her team built a $100 Oculus VR clone and free SDK

One sixteen year-old, a couple of her friends, and their math teacher have taken on Oculus with an open SDK – and you can build the VR headset for $100.

France’s Maxime Coutté writes up the project, which features her coding alongside optics and code from her best friends and algorithmic assistance from their math teacher (really). And there are a few advantages of their open approach, even if the hardware doesn’t look quite as svelte as commercial options.

1. There’s an open SDK, which for now gets you up and running quickly in Unity Game Engine.
https://github.com/relativty/fastVR-sdk

2. There’s an open API for communications between Unity and the VR headset – which also allows low-latency communication between the game engine and Arduino.
https://github.com/relativty/wrmhl

3. There’s a headset that’ll run you somewhere around $100, instead of several times that for similar options. And of course you’ll get the fun of building it. And it’s open.

That WRMHL creation could be a great option for anyone adding real-time interfaces for Unity, including musical and audiovisual applications. And wow, does this ever beat fighting over the cool table at the cafeteria – Maxime writes:

I started programming when I was 13, thanks to my math teacher. Every Monday and Tuesday, my friends and I used to go to his classroom to learn and practice instead of having a meal at the cafeteria.

WRMHL already looks useful, but if you want to build the headset, here you go:
How you can build your own VR headset for $100

You might even get parts for less. The basic ingredients: Arduino DUE, a display, an accelerometer/gyro, and a housing. Part of the cheapness is thanks to sourcing inexpensive displays from China directly (instead of buying a built product with its associated profit margin).

GitHub is your best source:

https://github.com/relativty/Relativ

Via T3n [German only]; h/t Martin Backes.


MusicMakers Hacklab Berlin to take on artificial minds as theme

AI is the buzzword on everyone’s lips these days. But how might musicians respond to themes of machine intelligence? That’s our topic in Berlin, 2018.

We’re calling this year’s theme “The Hacked Mind.” Inspired by AI and machine learning, we’re inviting artists to respond in the latest edition of our MusicMakers Hacklab, hosted with CTM Festival in Berlin. In that collaborative environment, participants will have a chance to answer these questions however they like. They might harness machine learning to transform sound or create new instruments – or respond to ideas around machines and algorithms in other ways, through performance and composition.

As always, the essential challenge isn’t just hacking code or circuits or art: it’s collaboration. By bringing together teams from diverse backgrounds and skill sets, we hope to exchange ideas and knowledge and build something new, together, on the spot.

The end result: a live performance at HAU2, capping off a dense week-plus festival of adventurous electronic music, art, and new ideas.

Hacklab application deadline: 05.12.2017
Hacklab runs: 29.1 – 4.2.2018 in Berlin (Friday opening, Monday – Saturday lab participation, Sunday presentation)

Apply online:
MusicMakers Hacklab – The Hacked Mind – Call for works

We’re not just looking for coders or hackers. We want artists from a range of backgrounds. We absolutely want people to wrestle with machine learning tools – some are specifically designed to be trained to recognize sounds and gestures and to work with musical instruments. But we also hope for unorthodox artistic reactions to the topic and its larger social implications.

To spur you on, we’ll have a packed lineup of guests, including Gene Kogan, who runs the amazing resource ml4a – machine learning for artists – and has done AV works like these:

And there’s Wesley Goatley, whose work delves into the hidden methods and biases behind machine learning techniques and what their implications might be.

Of course, machine learning and training on big data sets open up new possibilities for musicians, too. Accusonus recently explained that to us in terms of new audio processing techniques. And tools like Wekinator now use machine training as a way of recognizing gestures more intelligently, so you can transform electronic instruments and how they’re played by humans.

Dog training. No, not like that – training your computer on dogs. From ml4a.

Meet Ioann Maria

We have as always a special guest facilitator joining me. This time, it’s Ioann Maria, whose AV / visual background will be familiar to CDM readers, but who has since entered a realm of specialization that fits perfectly with this year’s theme.

Ioann wrote a personal statement about her involvement, so you can get to know where she’s come from:

My trip into the digital started with real-time audiovisual performance. From there, I went on to study Computer Science and AI, and quickly got into fundamentals of Robotics. The main interest and focus of my studies was all that concerns human-machine interaction.

While I was learning about CS and AI, I was co-directing LPM [Live Performers Meeting], the world’s largest annual meeting dedicated to live video performance and new creative technologies. In that time I started attending Dorkbot Alba meet-ups – “people doing strange things with electricity.” From our regular gatherings arose an idea of opening the first Scottish hackerspace, Edinburgh Hacklab (in 2010 – still prospering today).

I grew up in the spirit of open source.

For the past couple of years, I’ve been working at the Sussex Humanities Lab at the University of Sussex, England, as a Research Technician, Programmer, and Technologist in Digital Humanities. SHL is dedicated to developing and expanding research into how digital technologies are shaping our culture and society.

I provide technical expertise to researchers at the Lab and University.

At the SHL, I do software and hardware development for content-specific events and projects. I’ve been working on long-term jobs involving big data analysis and visualization, where my main focus for example was to develop data visualization tools looking for speech patterns and analyzing anomalies in criminal proceedings in the UK over the centuries.

I also touched on the technical possibilities and limitations of today’s conversational interfaces, learning more about natural language processing, speech recognition and machine learning.

There’s a lot going on in our Digital Humanities Lab at Sussex and I’m feeling lucky to have a chance to work with super brains I got to meet there.

In the past years, I dedicated my time speaking about the issues of digital privacy, computer security and promoting hacktivism. That too found its way to exist within the academic environment – in 2016 we started the Sussex Surveillance Group, a cross-university network that explores critical approaches to understanding the role and impact of surveillance techniques, their legislative oversight and systems of accountability in the countries that make up what are known as the ‘Five Eyes’ intelligence alliance.

With my background in new media arts and performance, and some knowledge in computing, I’m awfully curious about what will happen during the MusicMakers Hacklab 2018.

What fascinating and sorrowful times we happen to live in. How will AI manifest and substantiate our potential, and how will we translate this whole weight and meaning into music, into performing art? Is it going to be us for, or against, the machine? I can’t wait to meet our to-be-chosen Hacklab participants and link our brains and forces into something creative-tech-new – entirely IRL!

MusicMakers Hacklab – The Hacked Mind – Call for works

In collaboration with CTM Festival, CDM, and the SHAPE Platform.
With support from Native Instruments.
