Apple to open source, cross-platform GPU tech: drop dead?

Apple’s decision to shift to its own proprietary tech for accessing modern GPUs could hurt research, education, and pro applications on their platform.

OpenGL and OpenCL are the industry-standard specifications for writing code that runs on graphics architectures, for graphics and general-purpose computation, including everything from video and 3D to machine learning.

This is relevant to an ongoing interest on this site – those technologies also enable live visuals (including for music), creative coding, immersive audiovisual performance, and “AI”-powered machine learning experiments in music and art.

OpenGL and OpenCL, while sometimes arcane technologies, enable a wide range of advanced, cross-platform software. They’re also joined by a new industry standard, Vulkan. Cross-platform code is growing, not shrinking, as artists, researchers, creative professionals, experimental coders, and other communities contribute new generations of software that work more seamlessly across operating systems.

And Apple has just quietly blown off all those groups. From the announcement to developers regarding macOS 10.14:

Deprecation of OpenGL and OpenCL

Apps built using OpenGL and OpenCL will continue to run in macOS 10.14, but these legacy technologies are deprecated in macOS 10.14. Games and graphics-intensive apps that use OpenGL should now adopt Metal. Similarly, apps that use OpenCL for computational tasks should now adopt Metal and Metal Performance Shaders.

They’re also deprecating OpenGL ES on iOS, with the same logic.

Metal is fine technology, but it’s specific to iOS and macOS. It’s not open, and it won’t run on other platforms.

Describing OpenGL and OpenCL as “legacy” is fair enough. But as usual with Apple, the real problem is an absence of information. Questions:

Does this mean OpenGL apps will stop working? This is actually the big question. “Deprecation” in the case of QuickTime did eventually mean Apple pulled support. But we don’t know if it means that here.

(One interesting angle for this is, it could be a sign of more Apple-made graphics hardware. On the other hand, OpenGL implementations were clearly a time suck – and Apple often lagged major OpenGL releases.)

What about support for Vulkan? Apple are a partner in the Khronos Group, which develops this industry-wide standard. It isn’t in fact “legacy,” and it’s designed to solve the same problems as Metal does. Is Metal being chosen over Vulkan?

Cook’s 2018 Apple seems to be far more interested in showcasing proprietary developer APIs. Compare the early Jobs era, which emphasized cross-platform standards (OpenGL included). Apple has an opportunity to put some weight behind Vulkan – if not at WWDC, fair enough, then at some other venue?

What happens on the Web? Cross-platform here is even more essential, since your 3D or machine learning code for a browser needs to work in multiple scenarios.

Transparency and information might well solve this, but for now we’re a bit short on both.

Metal support in Unity. Frameworks like Unity may be able to smooth out platform differences for developers (including artists).

A case for Apple pushing Metal

First off, there is some sense to Apple’s move here. Metal – like DirectX on Windows or Mantle from AMD – is a lower-level API for addressing the graphics hardware. That means less overhead, higher performance, and extra features. It suggests Apple is pushing their mobile platforms in particular as an option for higher-end games. We’ve seen gaming companies Razer and Asus create Android phones with high-end specs on paper, but without a low-level API for the graphics hardware or a significant installed base, those are more proof of concept than they are useful as game platforms.

And Apple does love to deprecate APIs to force developers onto the newest stuff. That’s why older OS versions so quickly become unsupported, even when developers don’t want to abandon you.

On mobile, Apple never implemented OpenCL in the first place. And there’s arguably a more significant performance gap between OpenGL ES and something like Metal.

Another business case: Apple may be trying to drive a wedge in development between iOS and Android, to ensure more iOS-only games and the like. Since they can’t make platform exclusives the way something like a PlayStation or Nintendo Switch or Xbox can, this is one way to do it.

And it seems Apple is moving away from third-party hardware vendors, meaning they control both the spec here and the chips inside their devices.

But that doesn’t automatically make any of this more useful to end users and developers, who reap benefits from cross-platform support. It significantly increases the workload on Apple to develop APIs and graphics hardware – and to encourage enough development to keep up with competing ecosystems. So there’s a reason for standards to exist.

Vulkan offers some of the low-level advantages of Metal (or DirectX) … but it works cross-platform, even including Web contexts.

Pulling out of an industry standard group

The significant factor here about OpenGL generally is, it’s not software. It’s a specification for an API. And for the moment, it remains the industry standard specification for interfacing with the GPU. Unlike their move to embrace new variations of USB and Thunderbolt over the years, or indeed the company’s own efforts in the past to advance OpenGL, Apple isn’t proposing an alternative standard. They’re just pulling out of a standard the entire industry supports, without any replacement.

And this impacts a range of cross-platform software, open source software, and the ability to share code and research across operating systems, including but not limited to:

Video editing
Post production
Generative graphics
Digital art
VJing and live visual software
Creative coding
Machine learning and neural network tools

Cross-platform portability for those use cases meets a significant set of needs. Educators teaching shader writing, for example, now face students on Apple hardware having to use a different language. Gamers wanting access to the largest possible library – as on services like Steam – will now likely see more platform-exclusive titles on Apple hardware instead. And pros wanting access to specific open source, high-end video tools… well, here’s yet another reason to switch to Windows or Linux.

This doesn’t so much impact developers who rely on existing frameworks that target Metal for them. Developing in the Unity game engine, for instance, means your creation can use Metal on Apple platforms and OpenGL elsewhere. But given the size of the ecosystem here, that won’t cover a lot of other use cases.

And yeah, I’m serious about Linux as a player here. As Microsoft and Apple continue to emphasize consumers over pros, cramming huge updates over networks and trying to foist them on users, desktop Linux has quietly gotten a lot more stable. For pro video production, post production, 3D, rendering, machine learning, research – and even a growing niche of people working in audio and music – Linux can simply out-perform its proprietary relatives and save money and time.

So what happened to Vulkan?

Apple could have joined with the rest of the industry in supporting a new low-level API for computation and graphics. That standard is now doubly important as machine learning technology drives new ideas across art, technology, and society.

https://www.khronos.org/vulkan/

And apart from the value of it being a standard, Apple would break with important hardware partners here at their own peril. Yes, Apple makes a lot of their own hardware under the hood – but not all of it. Will they also make a move to proprietary graphics chips on the Mac, and will those keep up with PC offerings? (There is currently a Vulkan SDK for Mac. It’s unclear exactly how it will evolve in the wake of this decision.)

ExtremeTech have a scathing review of the situation. It’s a must-read, as it clearly breaks down the different pipelines and specs and how they work. But it also points out that Apple have tended to lag not just in hardware adoption but in their in-house support efforts. That suggests you get an advantage from being on Windows or Linux, generally:

Apple brings its Metal API to OS X 10.11, kicks Vulkan to the curb

Updated: Yes, of course you can run Vulkan atop Metal, via MoltenVK. In fact, here’s a demo from 2016. (Thanks, George Toledo!)

https://moltengl.com/moltenvk/

https://github.com/KhronosGroup/MoltenVK

That’s little comfort as far as broader backwards compatibility with “legacy” OpenGL, but it does bode reasonably well for the future. And, you know … fish tornadoes.

(Side note: that’s not just any fish tornado. The credit is to Robert Hodgin, the creative coding artist aka flight404 responsible for many, many generative demos over the years – including a classic iTunes visualizer.)

Fragmentation or standards

Let’s be clear – even with OpenGL and OpenCL, there’s loads of fragmentation in the fields I mention, from hardware to firmware to drivers to SDKs. Making stuff work everywhere is messy.

But users, researchers, and developers do reap real benefits from cross-platform standards and development. And Metal alone clearly doesn’t provide that.

Here’s my hope: that while deprecating OpenGL/CL, Apple invests in Vulkan and its existing membership in the Khronos Group (the industry consortium that supports that API as well as OpenGL). Following up this announcement with some news on Vulkan and cross-platform support – and how the transition to that and Metal would work – could turn the mood around entirely.

Apple’s reputation may be proprietary, but this is also the company that pushed USB and Thunderbolt, POSIX and WebKit, that used a browser to sell its first phone, and that was a leading advocate (ironically) for OpenGL and OpenCL.

As game directors and artists and scientists and thinkers all explore the possibilities of new graphics hardware, from virtual reality to artificial intelligence, we have some real potential ahead. The platforms that win, I think, will be the ones that maximize capabilities and minimize duplication of effort.

And today, at least, Apple are leaving a lot of those users in the dark about just how that future will work.

I’d love your feedback. I’m ranting here partly because I know a lot of the most interesting folks working on this are readers, so do please get in touch. You know more than I do, and I appreciate your insights.

More:

https://developer.apple.com/macos/whats-new/

https://www.khronos.org/opengl/wiki/FAQ

https://www.khronos.org/vulkan/

https://developer.apple.com/documentation/metalperformanceshaders

… and what this headline is referencing


Here’s what to learn to get a jump start on the new monome thing

SuperCollider? Lua? Huh? The latest creation from the makers of monome, norns, looks great. Here’s where to start learning the powerful sound engine underneath – which you can use on your PC or Mac right now, for free.

So far, recommendations from the thread introducing norns (https://llllllll.co/t/approaching-norns/13236/):

Supercollider tips, Q/A [thread on the monome forum]

The SuperCollider Book [a massive treeware tome from MIT Press – LinuxJournal have even done a review]

Learn Lua in 15 Minutes [the scripting engine that powers norns – but also a solid way to script SuperCollider in general]

Recommended tutorials for SuperCollider [from the source – and multiple languages]

Nick Collins’ tutorial

You may also want to check out simpler entry points into SuperCollider:

TidalCycles live coding environment [actually, this should also run on norns]

Sonic Pi

I’m sure there are other resources, so I’m just going to leave it there. Sound off if you’ve found a resource that helped you teach or learn.


Bela Mini gives you 1ms sound anywhere, to turn into anything, for £120

Make anything you want, with free music software of your choice, and <1ms latency. Bela is back, smaller than ever – a pocket-sized £120 computer for sound.

Embedded mobile tech has in recent years brought us pocket-sized, low-power boards that can match the performance of what not so many years ago we actually called a desktop computer. And that’s led to high-profile boards like the cheap Raspberry Pi. The problem has been, many of the cheapest of these machines were limited in computational power, and more importantly, had audio performance that ranged from middling to disastrously awful, both in audio quality and reliability/responsiveness.

But you shouldn’t settle for that. The whole point of building an embedded audio system dedicated to the task of music making – like a DIY effects pedal or synth or sound installation – ought to be that audio performance is better than on your PC. You’ve got a pocket-sized board that isn’t running weird file indexing, OS updates, buggy Facebook code open in twenty tabs, and the like. It ought to just do the number crunching you need for the granular delay you want to sing along with, and do it really well.

A few audio engineers have decided to brave the challenge. It’s not an easy thing to do: these little boards are so cheap that there’s not a whole lot of money to be made on them.

But one of the better projects has been Bela, first introduced in 2016. And today, its makers are taking advantage of a new PocketBeagle board from beagleboard.org. It’s more powerful than that much-hyped Raspberry Pi, but runs on a battery and is absurdly small – the Bela Mini measures just 55x35x21mm. (Please do not eat your Bela Mini, or Tide Pods, or anything that isn’t food.)

It’s not just a small computer, though – there’s more.

Low latency. 1ms round-trip for audio, or a minuscule 100μs round-trip via the analog and digital I/Os. (There’s some quick math on what those numbers mean just after this feature list.)

Run your favorite free audio software. Support for the graphical patching environment Pure Data (Pd), the crazy-powerful code world of SuperCollider, plus C and C++, and community support for FAUST, Python, etc.

An IDE in your browser. Fire up your browser and use a built-in IDE with oscilloscope and spectral analysis and documentation and more.

Sensors! High-resolution sensor inputs onboard open up interesting interfacing with the real world, whether you’ve got a wearable technology idea, an interactive installation, or a unique custom interface.
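
Why do those block sizes translate to millisecond latency? Roughly, latency is buffer size divided by sample rate. Here’s a quick back-of-envelope sketch in Python – a simplified two-buffer model that ignores converter delay, with illustrative block sizes rather than Bela’s actual scheduler settings:

    # Back-of-envelope: how frames-per-block maps to round-trip latency.
    # Simple two-buffer model (one block buffered on input, one on output);
    # real figures also include ADC/DAC converter delay, which this ignores.
    SAMPLE_RATE = 44100  # Hz, per the Bela Mini audio spec

    def round_trip_ms(frames_per_block, n_buffers=2):
        """Estimated round-trip latency in milliseconds."""
        return 1000.0 * frames_per_block * n_buffers / SAMPLE_RATE

    for frames in (2, 8, 16, 64, 128, 256):
        print(f"{frames:4d} frames/block -> ~{round_trip_ms(frames):.2f} ms")

At 16 frames per block, this model lands around 0.7ms – the same ballpark as the quoted 1ms audio round trip – while a typical desktop buffer of 256 frames is already past 11ms before driver overhead.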

The applications should be clear here. You could ditch your laptop and run a granular looper on a pocket-sized box. You could hook up some sensors and invent your own weird instrument. You could make a custom vocoder and bring this with a mic and croon along at “robot lounge night.” You could produce a runway show of electronically singing couture. You could devise a series of installations and turn into the next Nam June Paik and someday have a solo show at the Guggen– well, possibly at least some hipster gallery somewhere. You get the idea.

For now, that unique focus on audio makes this possibly the best game in town. There is one rival – the Pisound, a board that hops atop the Raspberry Pi, and couples with a custom case. The Pisound does have the advantage of onboard MIDI – both USB MIDI and MIDI DIN – but for computational power with audio, the Beagle looks stronger. (I could imagine doing an audio/MIDI application with Pisound and coupling it with an audio/sensor creation with Bela.)

https://blokas.io/

Bela winds up pricing out pretty nicely, too. The smart buy is a £120 all-in-one kit (£110 intro price through March 9). That gets you cables, the Bela, the PocketBeagle base board, and a pre-flashed SD card. If you prefer to source your own parts, you can get just the Bela Mini for £60 (£55 intro).

Here’s what’s in the kit.

It’s bigger, but the original Bela has basically the same specs – and it ships now, in case I’ve just made you impatient to own one right away rather than wait until May.

Basically, what’s new on the Bela Mini is really the tiny size. That opens up projects where small size matters. (The Pisound above is really just about music projects, more than wearable tech and the like, by contrast – but of course, being larger, it affords more space for full-sized ports!) The original Bela will remain available, with “capelets” for adding additional features.

Either way, if you’re quick, you can get out of the studio and have your battery-powered box to make weird experimental music for your friends at the beach all summer long. (Or, southern hemisphere readers, let’s say keeping your friends warm with your July beatbox busking.)

And all for the price of one basic Eurorack module. Who said electronic music was just for the rich kids?

Full specs:

Based on the PocketBeagle (http://www.beagleboard.org/pocket) with a custom hardware cape and low-latency operating system
1GHz ARM Cortex-A8 processor, 512MB RAM (based on Octavo Systems OSD335x system-in-package)
Stereo audio I/O with integrated headphone amplifier (16 bit, 44.1kHz)
8x 16-bit analog inputs for sensors (DC-coupled; up to 44.1kHz for 4 inputs or 22.05kHz for 8 inputs)
16x digital I/Os (3.3V level)
USB host and device ports
Dimensions 55 x 35 x 21mm (including PocketBeagle)

Software:
Latency as low as 0.5ms (analog/digital input to audio output) or 1.0ms (audio input to audio output)
Browser-based IDE including oscilloscope, spectrum analyser, interactive pin diagram and onboard documentation
Support for C, C++, Pd and SuperCollider languages. Community-contributed support for FAUST, Python and others

Bela Mini launch + FAQ

Buy it:
https://shop.bela.io

Sample projects:
http://blog.bela.io/

Resources:
http://github.com/BelaPlatform
http://github.com/BelaPlatform/bela/wiki
http://forum.bela.io


Kickblast makes kick drum sounds for you, free, powered by Csound

There’s a classic fairy tale in which elves make shoes during the night for a shoemaker. Imagine that, but with kick drum sounds.

The last time we caught up with Micah Frank, he was sharing free software that generates rhythms for you:

Leave this free software running, and it’ll come up with rhythms for you

It’s all built using a classic free and open source software tool called Csound – a tool so rooted in digital music history, it has a direct lineage to the very first real computer music synthesis software created by Max Mathews back in 1957. That may seem archaic, but Csound remains simple, direct, and musical – which is how it has endured.

With Micah’s tool, you can set the software in motion and use your ears to choose what you like – going as deep (or not) as you want in the mechanics of those sounds. He writes:

Kickblast is a little tool I built for quickly generating electronic kick drums. It will create a variety of sounds from classic 909-esque sustained basses, to modular and even acoustic sounding kick drums. You can define the parameters and how many kicks you wish to generate. It also has offline rendering capabilities so you can instantly populate a folder full of 17 billion* kick drums if you like.

* if you attempt this quantity, please let me know how it works out

Here’s what you can expect as far as sounds:

How to get going:

Kickblast is a Csound program that populates a folder full of computer (Csound) generated bass drums.

github.com/chronopolis5k/Kickblast

1) All you need is Csound: csound.com/download.html. CsoundQt comes with Csound and will enable you to run Kickblast.

2) Once installed, open the Kickblast.csd file and hit “Render” for offline file generation or “Run” for real-time.

3) You can define a number of parameters up in the top section, including how many kick drums you wish to generate.

4) The folder which contains the Kickblast.csd will become populated with your kick drums.
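
If you’re curious what’s going on under the hood, the classic electronic kick recipe – which Kickblast automates and randomizes – is a sine oscillator whose pitch sweeps exponentially downward under a fast amplitude decay. Here’s a minimal Python sketch of that idea (this is not Micah’s Csound code; all the numbers are illustrative):

    import numpy as np
    from scipy.io import wavfile

    SR = 44100
    dur = 0.5                          # seconds
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)

    # Exponential pitch sweep: start high (~150 Hz), fall toward ~45 Hz.
    f0, f1, tau = 150.0, 45.0, 0.07    # tau = sweep time constant, seconds
    freq = f1 + (f0 - f1) * np.exp(-t / tau)

    # Integrate frequency to get phase, then apply a fast amplitude decay.
    phase = 2 * np.pi * np.cumsum(freq) / SR
    kick = np.sin(phase) * np.exp(-t / 0.15)

    wavfile.write("kick.wav", SR, (kick * 32767).astype(np.int16))

Randomize f0, f1, and the two decay constants, render in a loop, and you have the gist of a folder full of generated kicks.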

There’s no need to get bored with kick drums. Billions of possibilities await. Let us know if you make something you love.


Learning To Program With Python & A Video Synthesizer

Kirk Kaiser, author of Make Art With Python, shared this playlist of videos, looking at learning to program with Python and the Critter & Guitari ETC video synthesizer. The ETC is an open source video synthesizer that runs Python and Pygame, and generates visuals meant to accompany a musical performance.
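
For a taste of the territory the videos cover, here’s a minimal Pygame sketch in the spirit of an ETC mode – a shape that reacts to a control value. (On the actual ETC, modes get hooks with real knob and audio-input values; this standalone version fakes the input so it runs on any laptop with pygame installed.)

    import math
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    t = 0

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        # Stand-in for an audio level, 0..1 (the ETC supplies the real thing).
        level = (math.sin(t * 0.1) + 1) / 2
        screen.fill((0, 0, 0))
        pygame.draw.circle(screen, (255, int(255 * level), 64),
                           (320, 240), int(40 + 200 * level), 4)
        pygame.display.flip()
        clock.tick(30)
        t += 1

    pygame.quit()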

Roland and MIT want to use music to teach kids programming

Millions of children worldwide use Scratch to enter the world of programming. Now there’s a new way to connect to music, as Roland teams up with MIT.

There’s a long, amazing history of teaching programming and creativity to kids. A lot of this legacy traces back to Cambridge and Wally Feurzeig, Seymour Papert, and Cynthia Solomon, with their late 60s introduction of the Logo programming language and accompanying Turtle Graphics, alongside a physical turtle robot. (Cynthia Solomon by the way has had an ongoing career contributing to this work and was one of the people instrumental in seeing this tool introduced to Apple’s 80s computer initiatives, which is how I grew up with it.)

If you understand topics like programming, logic – and machine learning, artificial intelligence, and related fields – as an extension of how we think, then this is more than simply vocational prep. It’s not just making sure we have a generation of cheap coders, in other words. Learning programming, creativity, and media in this way can shape how we think – so it’s really important.

Scratch is one of the latest to follow in these footsteps. It’s a free visual programming environment available on all operating systems and in 70+ (human) languages, built in its latest iteration with Web technologies. You can use it in a browser, and it has some surprisingly sophisticated interactive sprite and behavior capabilities, merging some of the best of past tools like Smalltalk, HyperCard, Director/Lingo, ActionScript, and others.

You know – for kids.

The GO:KEYS keyboard from Roland. Its price is a bit above entry level (around $300). The main thought here is to reach new musicians by offering different ways of playing with loops and discovering music.

So, here’s where Roland comes in: there’s an extension that lets you plug in a Roland GO:KEYS keyboard and use it both as controller and sound source. Roland tell us “the SCRATCH X Extension combined with new firmware on the Roland GO:Keys allows for bi-directional communication via USB.”

You can program the GO:KEYS – and its musical capabilities – from Scratch. And you can control Scratch interactively using the keyboard’s notes and velocity, without any manual setup. So you can trigger animations or interactions from the keyboard, and Scratch can rely on the GO:KEYS’ unique looping and sound generation facilities to add musical elements. Roland explains: “The GO:Keys Extension for SCRATCH X includes “blocks” which can select Loop Sets, play back specific patterns, determine the musical key, and so on.”

The SCRATCH X extension is the work of Roland; Scratch itself comes from the Lifelong Kindergarten Group at the MIT Media Lab.

Scratch programming interface with the new Roland module.

There’s some really cool potential here. HyperCard allowed kids (and adults) to create interactive storybooks and the like; with Scratch and GO:KEYS, you can imagine using keys to trigger story events, program logic creating musical events, and live control of music both from Scratch and the keyboard. Creative kids could turn this into a wild new instrument, complete with physical controls.

Now, of course, whether you specifically need the GO:KEYS for this or not is another matter. But it’s nice to see Roland even interested in this area. (And there’s an opportunity for the company to follow up with hardware loans and the like, and to work with other partners.) It’s also an excuse to look at this theme and where it could go.

Creative coding and teaching have long been a passion for me and this site, so I’ll be sure we follow up on this one!

GO:KEYS

scratch.mit.edu


Have yourself some very procedural holidays in your browser

Holiday greeting cards: you can buy them in a supermarket. You can draw them in crayon. Or you can custom-code a generative interactive Web greeting.

Continuing something of a tradition on CDM, of course, we get to share the latter.

Robert Thomas writes in with his creation. The musical component is probably the most interesting – and it shows off what can be done with free tools, including a Web sound engine that can read patches made with Pure Data.

Robert explains:

For a bit of holiday fun, here is a little interactive web based procedural audio collaboration I did with visual artist Matt Nish-Lapidus.

The audio is built in Pd and deployed through heavy into C then emscripten to JS and running in browser – fun!

Try it online:
http://emenel.ca/holiday2017/

And your family just sends a picture or a Web card. Wow.

If you want to have a go at creating something like this yourself, check:

Pure Data
Heavy (runs on a crazy number of platforms now; worth a second look!)
P5JS

Both audio and image represent open source projects that began life in conventional desktop environments, then were ported to versions that run in browsers. On the sound side, that means Pd, the 90s-vintage cousin of Max/MSP, which saw an API-compatible(-ish) engine coded from scratch as Heavy. For visuals, you get p5.js, a port of the Java-based Processing to JavaScript and the Web. The p5.js project began with the extraordinary LA artist/engineer Lauren McCarthy, but has since become an official target of the Processing API.

So, you have visual patching for musicians in Pd/Heavy for sound, and easy creative coding for artists in Processing/p5.js for picture.

Heavy has an interesting pricing model, too. You can use it commercially with the open license; you pay more for commercial support, closed-license projects, and the like. Processing relies on the Processing Foundation and users like you.

You know you’re a nerd when this is what you do on your holidays. Wouldn’t have it any other way. We’ll keep coming at you more or less right through the New Year’s.


$30 programmable, open Arduino ArduTouch synth is here

It’s $30. It can teach you how to code – or it can just be a fun, open synth. The ArduTouch by Mitch Altman is now shipping.

I wrote about ArduTouch earlier, with loads more on the instrument’s creator:
ArduTouch is an all-in-one Arduino synthesizer learning kit for $30

It’s a simple digital instrument based on the open source Arduino prototyping and coding platform, meaning it connects to an environment widely used by artists, hobbyists, and educators. Now Mitch shares that the product is available and shipping – and because this is an open source project, there’s a dump of new code, too.

And, I just uploaded the latest version of the ArduTouch Arduino sketches, including more way cool synthesizers, and a new Arduino library including more example synths (that also act as tutorials on how to create your own synthesizers).
https://github.com/maltman23/ArduTouch

Arduino-based synth projects have appeared in some form going back to the early days of Arduino. And of course Arduino as a platform is often a starting point into hardware development, even for students who have never written a line of code in their lives.

What’s cool about this is, you get a reliable platform on which to upload that code, and a touch interface and speaker so you can hear results. Plus, one of Mitch’s special superpowers has long been his ability to get others involved and to teach in an accessible way – so working through his code examples is a great experience.

This being Arduino, you can program it over USB.

There are some really nice, musical ideas in there – like this is something that will make sense to musicians, not just to people who like mucking about with hardware. And since the code is out there, it could inspire other such projects, even on other platforms.

Proof that it makes noises – though, of course, you’re welcome to try and make noises you like!

I’m hoping to have one for my mini-winter-holiday break (uh, whichever winter holiday I manage to wrap that around… let’s hope not St. Patrick’s Day, but sooner!)

Have at it:

http://cornfieldelectronics.com/cfe/products/buy.php?productId=synth


Accusonus explain how they’re using AI to make tools for musicians

First, there was DSP (digital signal processing). Now, there’s AI. But what does that mean? Let’s find out from the people developing it.

We spoke to Accusonus, the developers of loop unmixer/remixer Regroover, to try to better understand what artificial intelligence will do for music making – beyond just the buzzwords. It’s a topic they presented recently at the Audio Engineering Society conference, alongside some other developers exploring machine learning.

At a time when a lot of music software retreads existing ground, machine learning is a relatively fresh frontier. One important distinction to make: machine learning involves training the software in advance, then applying those algorithms on your computer. But that already opens up some new sound capabilities, as I wrote about in our preview of Regroover, and can change how you work as a producer.

And the timing is great, too, as we take on the topic of AI and art with CTM Festival and our 2018 edition of our MusicMakers Hacklab. (That call is still open!)

CDM spoke with Accusonus’ co-founders, Alex Tsilfidis (CEO) and Elias Kokkinis (CTO). Elias explains the story from a behind-the-scenes perspective – but in a way that I think remains accessible to us non-mathematicians!

Elias (left) and Alex (right). As Elias is the CTO, he filled us in on the technical inside track.

How do you wind up getting into machine learning in the first place? What led this team to that place; what research background do they have?

Elias: Alex and I started out our academic work with audio enhancement, combining DSP with the study of human hearing. Toward the end of our studies, we realized that the convergence of machine learning and signal processing was the way to actually solve problems in real life. After the release of drumatom, the team started growing, and we brought people on board who had diverse backgrounds, from audio effect design to image processing. For me, audio is hard because it’s one of the most interdisciplinary fields out there, and we believe a successful team must reflect that.

It seems like there’s been movement in audio software from what had been pure electrical engineering or signal processing to, additionally, understanding how machines learn. Has that shifted somehow?

I think of this more as a convergence than a “shift.” Electrical engineering (EE) and signal processing (SP) are always at the heart of what we do, but when combined with machine learning (ML), it can lead to powerful solutions. We are far from understanding how machines learn. What we can actually do today is “teach” machines to perform specific tasks with very good accuracy and performance. In the case of audio, these tasks are always related to some underlying electrical engineering or signal processing concept. The convergence of these principles (EE, SP and ML) is what allows us to develop products that help people make music in new or better ways.

What does it mean when you can approach software with that background in machine learning. Does it change how you solve problems?

Machine learning is just another tool in our toolbox. It’s easy to get carried away, especially with all the hype surrounding it now, and use ML to solve any kind of problem, but sometimes it’s like using a bazooka to kill a mosquito. We approach our software products from various perspectives and use the best tools for the job.

What do we mean when we talk about machine learning? What is it, for someone who isn’t a researcher/developer?

The term “machine learning” describes a set of methods and principles engineers and scientists use to teach a computer to perform a specific task. An example would be the identification of the music genre of a given song. Let’s say we’d like to know if a song we’re currently listening is an EDM song or not. The “traditional” approach would be to create a set of rules that say EDM songs are in this BPM range and have that tonal balance, etc. Then we’d have to implement specific algorithms that detect a song’s BPM value, a song’s tonal balance, etc. Then we’d have to analyze the results according to the rules we specified and decide if the song is EDM or not. You can see how this gets time-consuming and complicated, even for relatively simple tasks. The machine learning approach is to show the computer thousands of EDM songs and thousands of songs from other genres and train the computer to distinguish between EDM and other genres.

Computers can get very good at this sort of very specific task. But they don’t learn like humans do. Humans also learn by example, but don’t need thousands of examples. Sometimes a few or just one example can be enough. This is because humans can truly learn, reason and abstract information and create knowledge that helps them perform the same task in the future and also get better. If a computer could do this, it would be truly intelligent, and it would make sense to talk about Artificial Intelligence (A.I.), but we’re still far away from that. Ed.: lest the use of that term seem disingenuous, machine learning is still seen as a subset of AI. -PK

If a reader would like to read more into the subject, a great blog post by NVIDIA and a slightly more technical blog post by F. Chollet will shed more light into what machine learning actually is.
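
To make the EDM example concrete, here’s a minimal sketch of the “train by example” approach in Python with scikit-learn. Everything here is fabricated purely for illustration – two made-up summary features per song (BPM, plus a spectral centroid standing in for tonal balance) and random placeholder data – not anything from Accusonus:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Pretend each song is summarized as [bpm, spectral_centroid_hz].
    # These distributions are invented placeholders for real analyzed songs.
    edm = np.column_stack([rng.normal(128, 6, 500), rng.normal(3000, 400, 500)])
    other = np.column_stack([rng.normal(100, 25, 500), rng.normal(1800, 600, 500)])
    X = np.vstack([edm, other])
    y = np.array([1] * 500 + [0] * 500)       # 1 = EDM, 0 = other genres

    # No hand-written BPM rules: the classifier finds the boundary itself.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    print("is EDM?", bool(clf.predict([[126.0, 2900.0]])[0]))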

We talked a little bit on background about the math behind this. But in terms of what the effect of doing that number crunching is, how would you describe how the machine hears? What is it actually analyzing, in terms of rhythm, timbre?

I don’t think machines “hear,” at least not now, and not as we might think. I understand the need we all have to explain what’s going on and find some reference that makes sense, but what actually goes behind the scenes is more mundane. For now, there’s no way for a machine to understand what it’s listening to, and hence start hearing in the sense a human does.

Inside Accusonus products, we have to choose what part of the audio file/data to “feed” the machine. We might send an audio track’s rhythm or pitch, along with instructions on what to look for in that data. The data we send are “representations” and are limited by our understanding of, for instance, rhythm or pitch. For example, Regroover analyses the energy of the audio loop across time and frequency. It then tries to identify patterns that are musically meaningful and extract them as individual layers.
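
That time-and-frequency energy picture is, at its simplest, a magnitude spectrogram. Here’s a short Python sketch of that starting representation via the short-time Fourier transform – just the standard STFT, not Accusonus’ proprietary layer extraction, and the filename is a placeholder:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import stft

    sr, audio = wavfile.read("loop.wav")      # placeholder path to a drum loop
    audio = audio.astype(np.float64)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)            # fold stereo down to mono

    freqs, times, Z = stft(audio, fs=sr, nperseg=2048, noverlap=1536)
    energy = np.abs(Z) ** 2                   # energy across time and frequency

    print(f"{len(freqs)} frequency bins x {len(times)} time frames")
    peak_bin = np.unravel_index(energy.argmax(), energy.shape)[0]
    print("loudest frequency bin:", freqs[peak_bin], "Hz")

Pattern-finding algorithms like Regroover’s then look for musically meaningful structure in a grid like this, rather than in the raw waveform.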

Is all that analysis done in advance, or does it also learn as I use it?

Most of the time, the analysis is done in advance, or just when the audio files are loaded. But it is possible to have products that get better with time – i.e., “learn” as you use them. There are several technical challenges for our products to learn by using, including significant processing load and having to run inside old-school DAW and plug-in platforms that were primarily developed for more “traditional” applications. As plug-in creators, we are forced to constantly fight our way around obstacles, and this comes at a cost for the user.


What’s different about this versus another approach – what does this let me do that maybe I wasn’t able to do before?

Sampled loops and beats have been around for many years, and people have many ways to edit, slice and repurpose them. Before Regroover, everything happened in one dimension: time. Now people can edit and reshape loops and beats in both time and frequency. They can also go beyond the traditional multi-band approach by using our tech to extract musical layers and original sounds. The possibilities for unique beat production and sound design are practically endless. A simple loop can be a starting point for many musical ideas.

How would you compare this to other tools on the market – those performing these kind of analyses or solving these problems? (How particular is what you’re doing?)

The most important thing to keep in mind when developing products that rely on advanced technologies and machine learning is what the user wants to achieve. We try to “hide” as much of complexity as possible from the user and provide a familiar and intuitive user interface that allows them to focus on the music and not the science. Our single knob noise and reverb removal plug-ins are very good examples of this. The amount of parameters and options of the algorithms would be too confusing to expose to the end user, so we created a simple UI to deliver a quick result to the user.

If you take something as simple as being able to re-pitch samples, each time there’s some new audio process, various uses and abuses follow. Is there a chance to make new kinds of sounds here? Do you expect people to also abuse this to come up with creative uses? (Or has that happened already?)

Users are always the best “hackers” of our products. They come up with really interesting applications that push the boundaries of what we originally had in mind. And that’s the beauty of developing products that expand the sound processing horizons for music. Regroover is the best example of this. Stavros Gasparatos has used Regroover in an installation where he split industrial recordings, routing the layers to six speakers inside a big venue. He tried to push the algorithm to create all kinds of crazy splits and extract inspiring layers. The effect was that in the middle of the room you could hear the whole sound, and when you approached one of the speakers, crazy things happened. We even had some users who extracted inspiring layers from washing machine recordings! I’m sure the CDM audience can think of even more uses and abuses!

Regroover gets used in Gasparatos’ expanded piano project:

Looking at the larger scene, do you think machine learning techniques and other analyses will expand what digital software can do in music? Does it mean we get away from just modeling analog components and things like that?

I believe machine learning can be the driving force for a much-needed paradigm shift in our industry. The computational resources available today not only on our desktop computers but also on the cloud are tremendous and machine learning is a great way to utilize them to expand what software can do in music and audio. Essentially, the only limit is our imagination. And if we keep being haunted by the analog sounds of the past, we can never imagine the sound of the future. We hope accusonus can play its part and change this.

Where do you fit into that larger scene? Obviously, your particular work here is proprietary – but then, what’s shared? Is there larger AI and machine learning knowledge (inside or outside music) that’s advancing? Do you see other music developers going this direction? (Well, starting with those you shared an AES panel on?)

I think we fit among the forward-thinking companies that try to bring this paradigm shift by actually solving problems and providing new ways of processing audio and creating music. Think of iZotope with their newest Neutron release, Adobe Audition’s Sound Remover, and Apple Logic’s Drummer. What we need to share between us (and we already do with some of those companies) is the vision of moving things forward, beyond the analog world, and our experiences on designing great products using machine learning (here’s our CEO’s keynote in a recent workshop for this).

Can you talk a little bit about your respective backgrounds in music – not just in software, but your experiences as a musician?

Elias: I started out as a drummer in my teens. I played with several bands during high school and as a student at the university. At the same time, I started getting into sound engineering, where my studies really helped. I ended up working a lot of gigs, from small venues to stadiums, from cabling and PA setup to mixing the show and monitors. During this time I got interested in signal processing and acoustics and focused my studies on those fields. Towards the end of university I spent a couple of years in a small recording studio, where I did some acoustic design for the control room, recording and mixing local bands. After graduating I started working on my PhD thesis on microphone bleed reduction and general audio enhancement. Funnily enough, Alex was the one who built the first version of the studio; he was the supervisor of my undergraduate thesis, and we spent most of our PhDs working together in the same research group. It was almost meant to be that we would start Accusonus together!

Alex: I studied classical piano and music composition as a kid, and turned to synthesizers and electronic music later. As many students do, I formed a band with some friends, and that band happened to be one of the few abstract electronic/trip hop bands in Greece. We started making music around an old Atari computer and an early MIDI-only version of Cubase that triggered some cheap synthesizers, and recorded our first demo on a crappy 4-channel tape recorder in a friend’s bedroom. Fun days!

We then bought a PC and more fancy equipment and started making our living writing soundtracks for theater and dance shows. At that period I practically lived as a professional musician/producer and had quit my studies. But after a couple of years, I realized that I was more and more fascinated by the technology aspect of music, so I returned to the university and focused on audio signal processing. After graduating from the Electrical and Computer Engineering Department, I studied acoustics in France and then started my PhD in de-reverberation and room acoustics at the same lab as Elias. We became friends, worked together as researchers for many years, and realized that we share the same vision of how we want to create innovative products to help everyone make great music! That’s why we founded Accusonus!

So much of software development is just modeling what analog circuits or acoustic instruments do. Is there a chance for software based on machine learning to sound different, to go in different directions?

Yes, I think machine learning can help us create new inspiring sounds and lead us to different directions. Google Magenta’s NSynth is a great example of this, I think. While still mostly a research prototype, it shows the new directions that can be opened by these new techniques.

Can you recommend some resources showing the larger picture with machine learning? Where might people find more on this larger topic?

https://openai.com/

Siraj Raval’s YouTube channel:

Google Magenta’s blog for audio/music applications https://magenta.tensorflow.org/blog/

Machine learning for artists https://ml4a.github.io/

Thanks, Accusonus! Readers, if you have more questions for the developers – or the machine learning field in general, in music industry developments and in art – do sound out. For more:

Regroover is the AI-powered loop unmixer, now with drag-and-drop clips

http://accusonus.com
