KORG’s Nutekt NTS-1 is a fun little kit – and open to ‘logue developers

KORG has already shown that opening up oscillators and effects to developers can expand what their minilogue and prologue keyboards can do. Now they’re doing the same for the Nutekt NTS-1 – a cute little volca-ish kit for synths and effects. Build it, make wild sounds, and … run future stuff on it, too.

Okay, first – even before you get to any of that, the NTS-1 is stupidly cool. It’s a little DIY kit you can snap together without any soldering. And it’s got a fun analog/digital architecture with oscillators, filter, envelope, arpeggiator, and effects.

Basically, if you imagine having a palm-sized, battery-powered synthesis studio, this is that.

Japan has already had access to the Nutekt brand from KORG, a DIY kit line. (Yeah, the rest of the world gets to be jealous of Japan again.) This is the first – and hopefully not the last – time KORG has opened up that brand name to the international scene.

And the NTS-1 is one we’re all going to want to get our hands on, I’ll bet. It’s full of features:

– 4 fixed oscillators (saw, triangle, and square, loosely modeled on their analog counterparts in the minilogue/prologue, plus VPM, a simplified version of the multi-engine VPM oscillator)
– Multimode analog modeled filter with 2/4 pole modes (LP, BP, HP)
– Analog modeled amp. EG with ADSR (fixed DS), AHR, AR and looping AR
– modulation, delay, and reverb effects on par with the minilogue xd/prologue (a subset of that engine)
– arpeggiator with various modes: up, down, up-down, down-up, converge, diverge, conv-div, div-conv, random, stochastic (volca modular style). Chord selection: octaves, major triad, suspended triad, augmented triad, minor triad, diminished triad (since the sensor only allows one note at a time). Pattern length: 1-24
– Also: pitch/shape LFO, cutoff sweeps, tremolo
– MIDI IN via 2.5mm adapter, USB-MIDI, SYNC in/out
– Audio input with multiple routing options and trim
– Internal speaker and headphone out

That would be fun enough, and we could stop here. But the NTS-1 is also built on the same developer platform as the KORG minilogue and prologue keyboards. That SDK lets developers make their own oscillators, effects, and other ideas for KORG hardware. And it’s a big deal that the cute little NTS-1 is now part of that picture, not just the (very nice) larger keyboards. I’d see it this way:

NTS-1 buyers can get access to the same custom effects and synths as if they bought the minilogue or prologue.

minilogue and prologue owners get another toy they can use – all three of them supporting new stuff.

Developers can use this inexpensive kit to start developing, and don’t have to buy a prologue or minilogue. (Hey, we’ve got to earn some cash first so we can go buy the other keyboard! Oh yeah, I guess I also have rent and food and things to think about, too.)

And maybe most of all –

Developers have an even bigger market for the stuff they create.

This is still a prototype, so we’ll have to wait – there are no definite details yet on pricing and availability.

Waiting.

Yep, still waiting.

Wow, I really want this thing, actually. Hope this wait isn’t long.

I’m in touch with KORG and the analog team’s extraordinary Etienne about the project, so stay tuned. For an understanding of the dev board itself (back when it was much less fun – just a board and no case or fun features):

KORG are about to unveil their DIY Prologue boards for synth hacking

Videos:

Sounds and stuff –

Interviews and demos –

And if you wondered what the Japanese kits are like – here you go:

Oh, and I’ll also say – the dev platform is working. Sinevibes’ Artemiy Pavlov was on hand to show off the amazing stuff he’s doing with oscillators for the KORG ‘logues. They sound the business, covering a rich range of wavetable and modeling goodness – and they quickly made me want a ‘logue, which of course is the whole point. And he seems happy with this as a business, which demonstrates that we really are entering a new era of collaboration and creativity in hardware instruments. And that’s great. Artemiy, since I had almost zero time this month, I’d better just come hang out in Ukraine for extended nerd time, minus distractions.

Artemiy is happily making sounds as colorful as that jacket. Check sinevibes.com.


Now ‘AI’ takes on writing death metal, country music hits, more

Machine learning is synthesizing death metal. It might make your death metal radio DJ nervous – but it could also mean music software works with timbre and time in new ways. That news – plus some comical abuse of neural networks for writing lyrics in genres like country – next.

Okay, first, whether this makes you urgently want to hear machine learning death metal or drives you into a rage, either way you’ll want the death metal stream. And yes, it’s a totally live stream – you know, generative style. Tune in, bot out:

First, it’s important to say: the whole point of this is, you need data sets to train on. That is, machines aren’t composing music so much as creatively regurgitating existing samples based on fairly clever predictive mathematical models. In the case of the death metal example, this is SampleRNN – a recurrent neural network that works directly on sample material, repurposed from its originally intended application of working with speech. (Check the original project, though it’s been forked for the results here.)

This is a big, big point, actually – if this sounds a lot like existing music, it’s partly because it is actually sampling that content. The particular death metal example is nice in that the creators have published an academic article. But they’re open about saying they actually intend “overfitting” – that is, little bits of samples are actually playing back. Machines aren’t learning to generate this content from scratch; they’re actually piecing together those samples in interesting ways.

That’s relevant on two levels. One, because once you understand that’s what’s happening, you’ll recognize that machines aren’t magically replacing humans. (This works well for death metal partly because, to non-connoisseurs of the genre, the way angry guitar riffs and indecipherable shouting are plugged together already sounds quite random.)

But two, the fact that sample content is being re-stitched in time like this suggests a very different kind of future sampler. Instead of playing the same 3-second clip on repeat or loop, for instance, you might pour hours or days of singing bowls into your sampler and then adjust dials that recreate those sounds in more organic ways. It might make for new instruments and production software.
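To make the “regurgitation” point concrete, here’s a toy sketch in Python (numpy only) of the core idea – predict the next audio sample from the previous few, over and over. This is emphatically not SampleRNN (there’s no neural network here, just a lookup table), and everything in it is made up for illustration, but it shows why an overfit model of this kind ends up replaying stretches of its training audio:

```python
import numpy as np
from collections import defaultdict

def quantize(audio, levels=256):
    """Crudely quantize audio in [-1, 1] to integer levels (a stand-in for SampleRNN's 8-bit samples)."""
    return np.clip(((audio + 1.0) / 2.0 * (levels - 1)).astype(int), 0, levels - 1)

def train_table(samples, order=16):
    """Record, for every run of `order` samples, which sample came next in the training audio."""
    table = defaultdict(list)
    for i in range(order, len(samples)):
        table[tuple(samples[i - order:i])].append(samples[i])
    return table

def generate(table, seed, length=4000, order=16, rng=np.random.default_rng(0)):
    """Autoregressive generation: repeatedly predict the next sample from the last `order` samples."""
    out = list(seed[:order])
    for _ in range(length):
        choices = table.get(tuple(out[-order:]), [out[-1]])  # unseen context: just hold the last sample
        out.append(rng.choice(choices))  # an overfit table mostly replays runs of the training audio
    return np.array(out)

# Toy "training data": one second of a 110 Hz sawtooth standing in for a riff.
sr = 8000
t = np.arange(sr) / sr
riff = 2.0 * (t * 110.0 % 1.0) - 1.0
q = quantize(riff)
table = train_table(q)
print(generate(table, q, length=100)[:20])
```

SampleRNN swaps that lookup table for a hierarchy of recurrent networks, which is what lets it blur and recombine material rather than only quoting it verbatim.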

Here’s what the creators say:

Thus, we want the output to overfit short timescale patterns (timbres, instruments, singers, percussion) and underfit long timescale patterns (rhythms, riffs, sections, transitions, compositions) so that it sounds like a recording of the original musicians playing new musical compositions in their style.

Sure enough, you can go check their code:

https://github.com/ZVK/sampleRNNICLR2017

Or read the full article:

Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands

The reason I’m belaboring this is simple. Big corporations like Spotify might use this sort of research to develop, well, crappy, mediocre channels of background music that make vaguely coherent workout soundtracks, or faux Brian Eno, or something that sounds like Erik Satie got caught in an opium den and re-composed his piano repertoire in a half daze. And that would, well, sort of suck.

Alternatively, though, you could make something like a sampler or DAW more human and less conventionally predictable. You know, instead of applying a sample slice to a pad and then having the same snippet repeat every eighth note. (Guilty as charged, your honor.)

It should also be understood that, perversely, this may all be raising the value of music rather than lowering it. Given the amount of recorded music currently available, and given that it can already often be licensed or played for mere cents, the machine learning re-generation of these same genres actually requires more machine computation and more human intervention – because of the amount of human work required to even select datasets and set parameters and choose results.

DADABOTS, for their part, have made an entire channel of this stuff. The funny thing is, even when they’re training on The Beatles, what you get sounds like … well, some of the sort of experimental sound you might expect on your low-power college radio station. You know, in a good way – weird, digital drones, of exactly the sort we enjoy. I think there’s a layperson impression that these processes will magically improve. That may misunderstand the nature of the mathematics involved – on the contrary, it may be that these sorts of predictive models always produce these sorts of aesthetic results. (The same team use Markov Chains to generate track names for their Bandcamp label. Markov Chains work as well as they did a century ago; they didn’t just start working better.)

I enjoy listening to The Beatles as though an alien civilization has had to digitally reconstruct their oeuvre from some fallout-shrouded, nuclear-singed remains of the number-one hits box set post apocalypse. (“Help! I need somebody! Help! The human race is dead!” You know, like that.)

As it moves to black metal and death metal, their Bandcamp label progresses in surreal coherence:

This album gets especially interesting, as you get weird rhythmic patterns in the samples. And there’s nothing saying this couldn’t in turn inspire new human efforts. (I once met Stewart Copeland, who talked about how surreal it was hearing human drummers learn to play the rhythms, unplugged, that he could only achieve with The Police using delay pedals.)

I’m really digging this one:

So, digital sample RNN processes mostly generate angry and angular experimental sounds – in a good way. That’s certainly true now, and could be true in the future.

What’s up in other genres?

SONGULARITY is making a pop album. They’re focusing on lyrics (and a very funny faux generated Coachella poster). In this case, though, the work is constrained to text – far easier to produce convincingly than sound. Even a Markov Chain can give you interesting or amusing results; with machine learning applied character-by-character to text, what you get is a hilarious sort of futuristic Mad Libs. (It’s also clear humans are cherry-picking the best results, so these are really humans working with the algorithms much as you might use chance operations in music or poetry.)
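For reference, the low-tech end of this really is a few lines of Python. Here’s a word-level Markov chain sketch – the two-line “corpus” is made up purely for the example, and the real projects obviously train on far more text with fancier models:

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each `order`-word context to the words that followed it in the corpus."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, n_words=25, seed=42):
    """Walk the chain: start from a random context, then repeatedly sample a next word."""
    rng = random.Random(seed)
    context = list(rng.choice(list(chain.keys())))
    out = context[:]
    for _ in range(n_words):
        followers = chain.get(tuple(context))
        if not followers:                      # dead end: restart from a random context
            context = list(rng.choice(list(chain.keys())))
            out.extend(context)
            continue
        word = rng.choice(followers)
        out.append(word)
        context = context[1:] + [word]
    return " ".join(out)

# Two tiny, made-up corpora standing in for "country lyrics" and "product reviews".
country = "my truck is gone and my heart is gone and the whiskey is gone down the road"
reviews = "this workout changed my life but the delivery was late and my knees are gone"
chain = build_chain((country + " " + reviews).split(), order=2)
print(babble(chain))
```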

Whether or not this says anything about the future of machines, the dadaist results are genuinely funny parody.

And that gives us results like You Can’t Take My Door:

Barbed whiskey good and whiskey straight.

These projects work because lyrics are already slightly surreal and nonsensical. Machines chart directly into the uncanny valley instead of away from it, creating the element of surprise and exaggerated un-realness that is fundamental to why we laugh at a lot of humor in the first place.

The same approach also produced this Morrissey number, “Bored With This Desire To Get Ripped” – thanks to the ingenious idea of training not just on Morrissey lyrics, but also on Amazon customer reviews of the P90X home workout DVD system. (Like I said – human genius wins, every time.)

Or there’s Dylan mixed with negative Yelp reviews from Manhattan:

And maybe in this limited sense, the machines are telling us something about how we learn. Part of the poetic flow is about drawing on all our wetware neural connections between everything we’ve heard before – as in the half-awake state of creative vibrations. That is, we follow our own predictive logic without doing the usual censoring that keeps our language rational. Thinking this way, it’s not that we would use machine learning to replace the lyricist. Rather, just as with chance operations in the past, we can use this surreal nonsense to free ourselves from the constraints that normal behavior requires.

We shouldn’t underestimate, though, the human intervention in using these lyrics. The neural nets are good at stringing together short bits of words, but the normal act of composition – deciding the larger-scale structure, choosing funnier bits over weaker ones, recognizing patterns – remains human.

Recurrent neural networks probably won’t be playing Coachella any time soon, but if you need a band name, they’re your go-to. More funny text mangling from the Botnik crew.

My guess is, once the hype dies down, these particular approaches will wind up joining the pantheon of drunken walks and Markov Chains and fractals and other pseudo-random or generative algorithmic techniques. I sincerely hope that we don’t wait for that to happen, but use the hype to seize the opportunity to better educate ourselves about the math underneath (or collaborate with mathematicians), and see these more hardware-intensive processes in the context of some of these older ideas.

If you want to know why there’s so much hype and popular interest, though, the human brain may itself hold the answer. We are all of us hard-wired to delight in patterns, which means arguably there’s nothing more human than being endlessly entertained by what these algorithms produce.

But you know, I’m a marathon runner in my sorry way.


A free, shared visual playground in the browser: Olivia Jack talks Hydra

Reimagine pixels and color, melt your screen live into glitches and textures, and do it all for free on the Web – as you play with others. We talk to Olivia Jack about her invention, live coding visual environment Hydra.

Inspired by analog video synths and vintage image processors, Hydra is open, free, collaborative, and all runs as code in the browser. It’s the creation of US-born, Colombia-based artist Olivia Jack. Olivia joined our MusicMakers Hacklab at CTM Festival earlier this winter, where she presented her creation and its inspirations, and jumped in as a participant – spreading Hydra along the way.

Olivia’s Hydra performances are explosions of color and texture, where even the code becomes part of the aesthetic. And it’s helped take Olivia’s ideas across borders, both in the Americas and Europe. It’s part of a growing interest in the live coding scene, even as that scene enters its second or third decade (depending on how you count), but Hydra also represents an exploration of what visuals can mean and what it means for them to be shared between participants. Olivia has rooted those concepts in the legacy of cybernetic thought.

Oh, and this isn’t just for nerd gatherings – her work has also lit up one of Bogota’s hotter queer parties. (Not that such things need be thought of as a binary, anyway, but in case you had a particular expectation about that.) And yes, that also means you might catch Olivia at a JavaScript conference; I last saw her back from making Hydra run off solar power in Hawaii.

Following her CTM appearance in Berlin, I wanted to find out more about how Olivia’s tool has evolved and its relation to DIY culture and self-fashioned tools for expression.

Olivia with Alexandra Cardenas in Madrid. Photo: Tatiana Soshenina.

CDM: Can you tell us a little about your background? Did you come from some experience in programming?

Olivia: I have been programming now for ten years. Since 2011, I’ve worked freelance — doing audiovisual installations and data visualization, interactive visuals for dance performances, teaching video games to kids, and teaching programming to art students at a university, and all of these things have involved programming.

Had you worked with any existing VJ tools before you started creating your own?

Very few; almost all of my visual experience has been through creating my own software in Processing, openFrameworks, or JavaScript rather than using software. I have used Resolume in one or two projects. I don’t even really know how to edit video, but I sometimes use [Adobe] After Effects. I had no intention of making software for visuals, but started an investigative process related to streaming on the internet and also trying to learn about analog video synthesis without having access to modular synth hardware.

Alexandra Cárdenas and Olivia Jack @ ICLC 2019:

In your presentation in Berlin, you walked us through some of the origins of this project. Can you share a bit about how this germinated, what some of the precursors to Hydra were and why you made them?

It’s based on an ongoing investigation of:

  • Collaboration in the creation of live visuals
  • Possibilities of peer-to-peer [P2P] technology on the web
  • Feedback loops

Precursors:

A significant moment came as I was doing a residency at Platohedro in Medellin in May of 2017. I was teaching beginning programming, but also wanted to have larger conversations about the internet and talk about some possibilities of peer-to-peer protocols. So I taught programming using p5.js (the JavaScript version of Processing). I developed a library so that the participants of the workshop could share in real time what they were doing, and the other participants could use what they were doing as part of the visuals they were developing in their own code. I created a class/library in JavaScript called pixel parche to make this sharing possible. “Parche” is a very Colombian word in Spanish for a group of friends; this reflected the community I felt while at Platohedro, the idea of just hanging out and jamming and bouncing ideas off of each other. The tool clogged the network and I tried to cram too much information into a very short amount of time, but I learned a lot.

I was also questioning some of the metaphors we use to understand and interact with the web. “Visiting” a website is exchanging a bunch of bytes with a faraway place, routed through other faraway places. Rather than think about a webpage as a “page”, “site”, or “place” that you can “go” to, what if we think about it as a flow of information where you can configure connections in real time? I like the browser as a place to share creative ideas – anyone can load it without having to go to a gallery or install something.

And I was interested in using the idea of a modular synthesizer as a way to understand the web. Each window can receive video streams from and send video to other windows, and you can configure them in real time using WebRTC (real-time web streaming).

Here’s one of the early tests I did:

https://vimeo.com/218574728

I really liked this philosophical idea you introduced of putting yourself in a feedback loop. What does that mean to you? Did you discover any new reflections of that during our hacklab, for that matter, or in other community environments?

It’s processes of creation, not having a specific idea of where it will end up – trying something, seeing what happens, and then trying something else.

Code tries to define the world using a specific set of rules, but at the end of the day ends up chaotic. Maybe the world is chaotic. It’s important to be self-reflective.

How did you come to developing Hydra itself? I love that it has this analog synth model – and these multiple frame buffers. What was some of the inspiration?

I had no intention of creating a “tool”… I gave a workshop at the International Conference on Live Coding in December 2017 about collaborative visuals on the web, and made an editor to make the workshop easier. Then afterwards people kept using it.

I didn’t think too much about the name but [had in mind] something about multiplicity. Hydra organisms have no central nervous system; their nervous system is distributed. There’s no hierarchy of one thing controlling everything else, but rather interconnections between pieces.

Ed.: Okay, Olivia asked me to look this up and – wow, check out nerve nets. There’s nothing like a head, let alone a central brain. Instead, the aquatic creatures in the genus Hydra have senses and neurons essentially as one interconnected network, with cells that detect light and touch forming a distributed sensory awareness.

Most graphics abstractions are based on the idea of a 2d canvas or 3d rendering, but the computer graphics card actually knows nothing about this; it’s just concerned with pixel colors. I wanted to make it easy to play with the idea of routing and transforming a signal rather than drawing on a canvas or creating a 3d scene.

This also contrasts with directly programming a shader (one of the other common ways that people make visuals using live coding), where you generally only have access to one frame buffer for rendering things to. In Hydra, you have multiple frame buffers that you can dynamically route and feed into each other.

MusicMakers Hacklab in Berlin. Photo: Malitzin Cortes.

Livecoding is of course what a lot of people focus on in your work. But what’s the significance of code as the interface here? How important is it that it’s functional coding?

It’s inspired by [Alex McLean’s sound/music pattern environment] TidalCycles — the idea of taking a simple concept and working from there. In Tidal, the base element is a pattern in time, and everything is a transformation of that pattern. In Hydra, the base element is a transformation from coordinates to color. All of the other functions either transform coordinates or transform colors. This directly corresponds to how fragment shaders and low-level graphics programming work — the GPU runs a program simultaneously on each pixel, which receives the coordinates of that pixel and outputs a single color.

I think immutability in functional (and declarative) coding paradigms is helpful in live coding; you don’t have to worry about mentally keeping track of a variable and what its value is or the ways you’ve changed it leading up to this moment. Functional paradigms are really helpful in describing analog synthesis – each module is a function that always does the same thing when it receives the same input. (Parameters are like knobs.) I’m very inspired by the modular idea of defining the pieces to maximize the amount that they can be rearranged with each other. The code describes the composition of those functions with each other. The main logic is functional, but things like setting up external sources from a webcam or live stream are not at all; JavaScript allows mixing these things as needed. I’m not super opinionated about it, just interested in the ways that the code is legible and makes it easy to describe what is happening.
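Ed.: to make that concrete for non-shader people, here’s the model Olivia is describing boiled down to a few lines of Python – not Hydra’s actual JavaScript API, just the “pure function from coordinates to color, plus composable transforms” idea:

```python
import math

# The base element: a function from (x, y) coordinates in 0..1 to an (r, g, b) color.
def osc(freq=10.0):
    def source(x, y):
        v = 0.5 + 0.5 * math.sin(x * freq * 2 * math.pi)
        return (v, v, v)
    return source

# A coordinate transform: wraps a source, remapping (x, y) before sampling it.
def rotate(source, angle):
    def transformed(x, y):
        cx, cy = x - 0.5, y - 0.5
        c, s = math.cos(angle), math.sin(angle)
        return source(cx * c - cy * s + 0.5, cx * s + cy * c + 0.5)
    return transformed

# A color transform: wraps a source, remapping the color it returns.
def invert(source):
    def transformed(x, y):
        r, g, b = source(x, y)
        return (1 - r, 1 - g, 1 - b)
    return transformed

# The "patch" is just a composition of pure functions, evaluated once per pixel.
patch = invert(rotate(osc(10.0), 0.5))
print(patch(0.25, 0.75))
```

In Hydra proper, a chain like this gets compiled into a fragment shader, and the GPU evaluates it for every pixel of every frame.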

What’s the experience you have of the code being onscreen? Are some people actually reading it / learning from it? I mean, in your work it also seems like a texture.

I am interested in it being somewhat understandable even if you don’t know what it is doing or that much about coding.

Code is often a visual element in a live coding performance, but I am not always sure how to integrate it in a way that feels intentional. I like using my screen itself as a video texture within the visuals, because then everything I do — like highlighting, scrolling, moving the mouse, or changing the size of the text — becomes part of the performance. It is really fun! Recently I learned about prepared desktop performances, and, related to the live-coding mantra of “show your screens,” I like the idea that everything I’m doing is a part of the performance. And that’s also why I directly mirror the screen from my laptop to the projector. You can contrast that to just seeing the output of an AV set, and having no idea how it was created or what the performer is doing. I don’t think it’s necessary all the time, but it feels like using the computer as an instrument and exploring different ways that it is an interface.

The algorave thing is now getting a lot of attention, but you’re taking this tool into other contexts. Can you talk about some of the other parties you’ve played in Colombia, or when you turned the live code display off?

Most of my inspiration and references for what I’ve been researching and creating have been outside of live coding — analog video synthesis, net art, graphics programming, peer-to-peer technology.

Having just said I like showing the screen, I think it can sometimes be distracting and isn’t always necessary. I did visuals for Putivuelta, a queer collective and party focused on diasporic Latin club music and wanted to just focus on the visuals. Also I am just getting started with this and I like to experiment each time; I usually develop a new function or try something new every time I do visuals.

Community is such an interesting element of this whole scene. So I know with Hydra so far there haven’t been a lot of outside contributions to the codebase – though this is a typical experience of open source projects. But how has it been significant to your work to both use this as an artist, and teach and spread the tool? And what does it mean to do that in this larger livecoding scene?

I’m interested in how technical details of Hydra foster community — as soon as you log in, you see something that someone has made. It’s easy to share via twitter bot, see and edit the code live of what someone has made, and make your own. It acts as a gallery of shareable things that people have made:

https://twitter.com/hydra_patterns

Although I’ve developed this tool, I’m still learning how to use it myself. Seeing how other people use it has also helped me learn how to use it.

I’m inspired by work that Alex McLean and Alexandra Cardenas and many others in live coding have done on this — just the idea that you’re showing your screen and sharing your code with other people to me opens a conversation about what is going on, that as a community we learn and share knowledge about what we are doing. Also I like online communities such as talk.lurk.org and streaming events where you can participate no matter where you are.

I’m also really amazed at how this is spreading through Latin America. Do you feel like there’s some reason the region has been so fertile with these tools?

It’s definitely influenced me rather than the other way around, getting to know Alexandra [Cardenas’] work, Esteban [Betancur, author of live coding visual environment Cine Vivo], rggtrn, and Mexican live coders.

Madrid performance. Photo: Tatiana Soshenina.

What has the scene been like there for you – especially now living in Bogota, having grown up in California?

I think people are more critical about technology and so that makes the art involving technology more interesting to me. (I grew up in San Francisco.) I’m impressed by the amount of interest in art and technology spaces such as Plataforma Bogota that provide funding and opportunities at the intersection of art, science, and technology.

The press lately has fixated on live coding or algorave but maybe not seen connections to other open source / DIY / shared music technologies. But – maybe now especially after the hacklab – do you see some potential there to make other connections?

To me it is all really related, about creating and hacking your own tools, learning, and sharing knowledge with other people.

Oh, and lastly – want to tell us a little about where Hydra itself is at now, and what comes next?

Right now, it’s improving documentation and making it easier for others to contribute.

Personally, I’m interested in performing more and developing my own performance process.

Thanks, Olivia!

Check out Hydra for yourself, right now:

https://hydra-editor.glitch.me/

Previously:

Inside the livecoding algorave movement, and what it says about music

Magical 3D visuals, patched together with wires in browser: Cables.gl


Magenta Studio lets you use AI tools for inspiration in Ableton Live

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Electron (a popular cross-platform JavaScript tool), though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, applying even GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.
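If “tensor” still sounds mystical: in this context it’s basically a multi-dimensional array of numbers, and a neural network layer is mostly just multiplying and adding those arrays. A quick sketch in plain numpy – nothing Magenta-specific about it:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 8))          # a rank-2 tensor: one input with 8 features
W = rng.normal(size=(8, 4))          # the layer's weights, another tensor
b = np.zeros(4)                      # bias vector

hidden = np.maximum(0.0, x @ W + b)  # one dense layer: multiply, add, nonlinearity (ReLU)
print(hidden.shape)                  # -> (1, 4)
```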

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

That also has its own Ableton Live device, from a couple years back.

https://magenta.tensorflow.org/nsynth-instrument

NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, as they’re something a bit unique – and you can again play around in Ableton Live.

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that algorithmically loops over and over. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly means that you can predict sequences more and more effectively given a particular data set.

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.

One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence they chose this new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and the length in bars.
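Temperature, at least as it’s usually implemented, is easy to see in code: the model outputs a score for each possible next event, and temperature rescales those scores before they become probabilities. Here’s a generic sketch in Python – an illustration of the standard technique, not Magenta Studio’s actual internals:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng=np.random.default_rng(0)):
    """Turn raw model scores into probabilities, then draw one choice.

    Low temperature: the top-scoring option nearly always wins (predictable).
    High temperature: the distribution flattens out (more surprises).
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # softmax, shifted for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

scores = [2.0, 1.0, 0.2, 0.1]               # made-up scores for four candidate notes
for temp in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(scores, temp) for _ in range(1000)]
    print(temp, np.bincount(picks, minlength=4) / 1000.0)
```

Roughly: at 0.2 nearly every draw picks the top-scoring note, while at 2.0 the choices spread out – which matches the predictable-to-unpredictable feel of the slider.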

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is set up with certain expectations about what a drum kit is, and with melodies around a 12-tone equal tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them. (There’s a sketch of the basic idea just after this list.)

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
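About that Interpolate sketch: models in the MusicVAE family encode each clip into a point in a “latent” space, and interpolation just means decoding points along the line between two clips’ encodings. Here’s the shape of the operation in Python – `encode` and `decode` are hypothetical stand-ins (the real thing goes through Magenta’s own models), so this only illustrates the idea:

```python
import numpy as np

def encode(clip):
    """Hypothetical stand-in: the real model maps a MIDI clip to a latent vector."""
    rng = np.random.default_rng(abs(hash(clip)) % (2**32))
    return rng.normal(size=16)

def decode(z):
    """Hypothetical stand-in: the real model maps a latent vector back to a MIDI clip."""
    return f"clip(latent norm={np.linalg.norm(z):.2f})"

def interpolate(clip_a, clip_b, steps=5):
    """Decode evenly spaced points on the line between the two clips' latent codes."""
    z_a, z_b = encode(clip_a), encode(clip_b)
    return [decode((1 - t) * z_a + t * z_b) for t in np.linspace(0.0, 1.0, steps)]

for step in interpolate("drum groove A", "drum groove B"):
    print(step)
```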

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

Where this could go next

There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with the history of pure technical inquiries past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.

As Jesse Engel tells CDM:

We’re a research group (not a Google product group), which means that Magenta Studio is not static, and a lot more interesting models are probably on the way.

Things like more impressive MIDI generation (https://magenta.tensorflow.org/music-transformer – check out “Score Conditioned”)

And state of the art transcription: (https://magenta.tensorflow.org/onsets-frames)

And new controller paradigms:
(https://magenta.tensorflow.org/pianogenie)

Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.

So okay, music makers – have at it:

g.co/magenta
g.co/magenta/studio


Arturia’s new experimental synth – and Mutable Instruments’ role

It was only a matter of time before some of the craziness of the modular world came to desktop synths, too. Arturia’s new MicroFreak is a budget keyboard with a weird streak.

It’s also been the source of some confusion, because it in fact makes use of oscillators from open source hardware maker Mutable Instruments, but hang tight for an explanation there. (It’s not exactly the focus of this synth, but it is significant – and an interesting illustration of overlapping capabilities in the age of open source.)

$349 (299 EUR) – coming this spring.

Experimental features are making their way into the mainstream. Let’s count – and yeah, that product name MicroFreak fits:

A flat-panel metal touch keyboard (Buchla style), with poly aftertouch. (Doesn’t look like there’s MPE support, though, just poly aftertouch support?)

A matrix for modulation (something associated with synths like the ARP 2500).

Randomization features in the step sequencer – various functions along the top “spice” and “dice” and otherwise rearrange your patterns.

Oscillator features from Mutable Instruments’ open source Plaits engine – plus added modes like Karplus-Strong (physically modeled strings/plucks), harmonic oscillators, and more exotic wavetables.

It’s still an Arturia design, no doubt – the digital oscillators get fed through an analog filter (this time the Oberheim SEM), and the preset storage and control knobs all look Arturia-like and more conventional. But it’s a blend between that and more leftfield hardware, in one very low-cost unit – $349 (299 EUR) this spring.

The resulting design looks a little like it was pieced together from different bits – an ornate keyboard versus a more staid gray body, plus four glaring traffic cone orange knob caps. But that price is terrific, especially considering a lot of modular cases start at that price – let alone what you’d need to even begin to approach these possibilities here.

And – the thinness is fantastic. It seems 2019 is a year of touch keyboards. Don Buchla would’ve been proud of us.

So let’s get back to the Mutable Instruments oscillators, which are one of the more interesting features here. We’ve confirmed that Mutable Instruments and founder/designer Émilie were not directly involved in the design, though she did sign off on the mention of the company name.

Mutable Instruments’ Plaits module code is available open source under an MIT license, so any manufacturer can pick it up and use it – even without asking, actually. That’s by design; Émilie tells us she intended widespread use. (An alternative for open source developers is to use “copyleft” licensing, which requires anyone reusing your stuff to release their source as well. That would’ve been interesting – theoretically it would have meant Arturia would need to open source their additional oscillators and firmware. The GPLv3 license we’ve used on MeeBlip has this function, for example.)

Some of Arturia’s original copy was perhaps a bit overzealous and caused some confusion about whether Mutable Instruments was a partner on the design. They’ve since clarified that. For further clarification, read the statement on the Mutable forums:

So while it’s not a collaboration, it does show off the power of open source. As Émilie writes:

You can find Mutable Instruments’ DSP code in the Korg Prologue, the Axoloti, the Organelle, VCV Rack, and plenty of other bits of software or hardware. This is not stealing. Plaits’ code is a summary of everything I’ve learnt about making rich and balanced sound sources controlled by a few parameters, it’s for everyone to enjoy.

The important thing here is to differentiate between the open source Plaits modules, some new additions from Arturia, and then the Plaits sounds you get from Mutable’s updated modules. Let’s break it down:

Plaits oscillator modes:

  • VA, classic virtual analog
  • Waveshaper, triangle wave with waveshaping / wavefolding
  • FM (2-operator FM oscillator)
  • Grain, granular synthesis
  • Chords, fixed paraphonic harmonies (hello, trance music, then)
  • Speech synthesis
  • Modal (inharmonic physical model)

Those of us who have been playing with this on hardware or in the authorized versions inside VCV Rack will definitely appreciate seeing these elsewhere. (Really – can’t get enough.)

Arturia did add some pretty significant modes to those:

  • “Superwave” – detuned saw, square, sine, triangle waves, somewhat Roland-ish sound
  • “Harmo” – 32 sine waves for additive synthesis
  • Karplus-Strong – physical string modeling
  • Wavetable – scan through wavetable modes

To me, those Arturia additions really anchor this offering, with some pretty fundamental ideas on offer. Put them together, and you should have something really versatile.

But okay, since Mutable Instruments doesn’t get any of your money when you buy the Arturia MicroFreak, did Mutable just give away the store by using an open source license? Well, no, not really – Plaits gives you a full 16 modes, an internal low pass gate, and does all its 32-bit floating point math in hardware that you can bolt into a modular case and interconnect via control voltage. Plus, you can get Plaits in software if you like – see the Audible Instruments Preview for VCV Rack, regularly updated.

Heck, that could compel us Mutable superfans into happily buying these same features multiple times – in Arturia’s hardware, in the pack for VCV Rack (proceeds from which Mutable has elected to send to charity), and in Mutable’s own hardware. Hmmm… a MicroFreak, a little skiff with some Mutable modules, a nice connection to the laptop, maybe again a Raspberry Pi. Okay, I’ll stop. Guess I’ll have to buy the White Album again…

See:
https://mutable-instruments.net/modules/plaits/

And as for MicroFreak:

https://www.arturia.com/products/hardware-synths/microfreak/overview


TidalCycles, free live coding environment for music, turns 1.0

Live coding environments are free, run on the cheapest hardware as well as the latest laptops, and offer new ways of thinking about music and sound that are leading a global movement. And one of the leading tools of that movement just hit a big milestone.

This isn’t just about a nerdy way of making music. TidalCycles is free, and tribes of people form around using it. Just as important as how impressive the tool may be, the results are spectacular and varied.

There are some people who take on live coding as their primary instrument – some who haven’t had experience using even computers or electronic music production tools before, let alone whole coding environments. But I think these tools are worth a look even if you don’t envision yourself projecting code onstage as you type live. TidalCycles in particular had its origins not in computer science, but in creator Alex McLean’s research into rhythm and cycle. It’s a way of experiencing a musical idea as much as it is a particular tool.

TidalCycles has been one of the more popular tools, because it’s really easy to learn and musical. The one downside is a slightly convoluted install process, since it’s built on SuperCollider, as opposed to tools that now run in a Web browser. On the other hand, the payoff for that added work is that you’ll never outgrow TidalCycles itself – because you can move to SuperCollider’s wider array of tools if you choose.

New in version 1.0 is a whole bunch of architectural improvements that really make the environment feel mature. And there’s one major addition: controller input means you can play TidalCycles like an instrument, even without coding as you perform:

  • New functions
  • Updated innards
  • New ways of combining patterns
  • Input from live controllers
  • The ability to set tempo with patterns

Maybe just as important as the plumbing improvements, you also get expanded documentation and an all-new website.

Check out the full list of changes:

https://tidalcycles.org/index.php/Changes_in_Tidal_1.0.0

You’ll need to update some of your code as there’s been some renaming and so on.

But the ability to input OSC and MIDI is especially cool, not least because you can now “play” all the musical, rhythmic stuff TidalCycles does with patterns.

There’s enough musicality and sonic power in TidalCycles that it’s easy to imagine some people will take advantage of the live coding feedback as they create a patch, but play more in a conventional sense with controllers. I’ll be honest; I couldn’t quite wrap my head around typing code as the performance element in front of an audience. And that makes some sense; some people who aren’t comfortable playing actually find themselves more comfortable coding – and those people aren’t always programmers. Sometimes they’re non-programmers who find this an easier way to express themselves musically. Now, you can choose, or even combine the two approaches.
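If you want to poke at the controller input without wiring up hardware, you can send values to Tidal over OSC from pretty much anything. Here’s a sketch using the python-osc library – I’m assuming the defaults described in the Tidal 1.0 docs (messages to /ctrl on port 6010, a name followed by a value), so double-check the docs for your setup; on the Tidal side you’d pick the value up with the cF/cN family of controller functions:

```python
import math
import time

from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

# Assumption: Tidal 1.0 listens for controller input as OSC messages to /ctrl on port 6010,
# with a name followed by a value. Check the TidalCycles docs if your setup differs.
client = SimpleUDPClient("127.0.0.1", 6010)

# Sweep a made-up control called "wobble" between 0 and 1, ten times a second, for ten seconds.
start = time.time()
while time.time() - start < 10:
    value = 0.5 + 0.5 * math.sin(time.time() * 2.0)
    client.send_message("/ctrl", ["wobble", float(value)])
    time.sleep(0.1)
```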

Also worth saying – TidalCycles has happened partly because of community contributions, but it’s also the work primarily of Alex himself. You can keep him doing this by “sending a coffee” – TidalCycles works on the old donationware model, even as the code itself is licensed free and open source. Do that here:

http://ko-fi.com/yaxulive#

While we’ve got your attention, let’s look at what you can actually do with TidalCycles. Here’s our friend Miri Kat with her new single out this week, the sounds developed in that environment. It’s an ethereal, organic trip (the single is also on Bandcamp):

We put out Miri’s album Pursuit last year, not really having anything to do with it being made in a livecoding environment so much as I was in love with the music – and a lot of listeners responded the same way:

For an extended live set, here’s Alex himself playing in November in Tokyo:

And Alexandra Cardenas, one of the more active members of the TidalCycles scene, played what looked like a mind-blowing set in Bogota recently. On visuals is Olivia Jack, who created vibrant, eye-searing goodness in the live coding visual environment of her own invention, Hydra. (Hydra works in the browser, so you can try it right now.)

Unfortunately there are only clips – you had to be there – but here’s a taste of what we’re all missing out on:

See also the longer history of Tidal

It’ll be great to see where people go next. If you haven’t tried it yet, you can dive in now:

https://tidalcycles.org/

Image at top: Alex, performing as part of our workshop/party Encoded in Berlin in June.


Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws

Musicians don’t just endure technology when it breaks. They embrace the broken. So it’s fitting that Holly Herndon’s team have produced a demonic spawn of machine learning algorithms – and that the results are wonderful.

The new music video for the Holly Herndon + Jlin collaboration has been making the rounds online, so you may have seen it already:


But let’s talk about what’s going on here. Holly is continuing a long-running collaboration with producer Jlin, here joined by technologist Mat Dryhurst and coder Jules LaPlace. (The music video itself is directed by Daniel Costa Neves with software developer Leif Ryge, employing still more machine learning technique to merge the two artists’ faces.)

Machine learning processes are being explored in different media in parallel – characters and text, images, sound, voice, and music. But the results can be all over the place. And ultimately, humans are the last stage. We judge the results of the algorithms, project our own desires and fears on what they produce, and imagine anthropomorphic intents and characteristics.

Sometimes errors like over-fitting then take on a personality all their own – even as mathematically sophisticated results fail to inspire.

But that’s not to say these reactions aren’t just as real. And part of what may make the video “Godmother” compelling is not just the buzzword of AI, but the fact that it genuinely sounds different.

The software ‘Spawn,’ developed by Ryge working with the team, is a machine learning-powered encoder. Herndon and company have anthropomorphized that code in their description, but that itself is also fair – not least because the track is composed in such a way to suggest a distinct vocalist.

I love Holly’s poetic description below, but I think it’s also important to be precise about what we’re hearing. That is, we can talk about the evocative qualities of an oboe, but we should definitely still call an oboe an oboe.

So in this case, I confirmed with Dryhurst what I was hearing. The analysis stage employs neural network style transfers – some links on that below, though LaPlace and the artists here did make their own special code brew. And then they merged that with a vocoder – the high-quality WORLD vocoder. That is, they feed a bunch of sounds into the encoder, and get some really wild results.
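You can get a feel for the WORLD half of that chain yourself – the Python wrapper linked at the bottom of this post decomposes a recording into pitch, spectral envelope, and aperiodicity, and resynthesizes audio from those parameters. A minimal sketch (this is the stock vocoder only, not the team’s custom encoder; “voice.wav” stands in for whatever mono recording you have around):

```python
import numpy as np
import pyworld as pw       # pip install pyworld
import soundfile as sf     # pip install soundfile

x, fs = sf.read("voice.wav")            # any short recording of a voice
if x.ndim > 1:
    x = x.mean(axis=1)                  # WORLD wants mono
x = np.ascontiguousarray(x, dtype=np.float64)

# Analysis: fundamental frequency, smoothed spectral envelope, aperiodicity
f0, sp, ap = pw.wav2world(x, fs)

# Resynthesis from the unmodified parameters -- should sound close to the original
sf.write("resynth.wav", pw.synthesize(f0, sp, ap, fs), fs)

# Mangle the parameters before resynthesis and the characteristically 'digital' artifacts appear
sf.write("shifted.wav", pw.synthesize(f0 * 1.5, sp, ap, fs), fs)
```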

And all of that in turn makes heavy use of the unique qualities of Jlin’s voice, Holly’s own particular compositional approach and the arresting percussive take on these fragmented sounds, Mat’s technological sensibilities, LaPlace’s code, a whole lot of time spent on parameters and training and adaptation…

Forget automation in this instance. All of this involves more human input and more combined human effort than any conventionally produced track would.

Is it worth it? Well, aesthetically, you could make comparisons to artists like Autechre, but then you could do that with anything with mangled sample content in it. And on a literal level, the result is the equivalent of a mangled sample. The results retain recognizable spectral components of the original samples, and they add a whole bunch of sonic artifacts which sound (correctly, really) ‘digital’ and computer-based to our ears.

But it’s also worth noting that what you hear is particular to this vocoder technique and especially to audio texture synthesis and neural network-based style transfer of sound. It’s a commentary on 2018 machine learning not just conceptually, but because what you hear sounds the way it does because of the state of that tech.

And that’s always been the spirit of music. The peculiar sound and behavior of a Theremin says a lot about how radios and circuits respond to a human presence. Vocoders have ultimately proven culturally significant for their aesthetic peculiarities even if their original intention was encoding speech. We respond to broken circuits and broken code on an emotional and cultural level, just as we do acoustic instruments.

In a blog post that’s now a couple of years old – ancient history in machine learning terms, perhaps – Dmitry Ulyanov and Vadim Lebedev acknowledged that some of the techniques they used for “audio texture synthesis and style transfer” repurposed a technique intended for something else. And they implied that the results didn’t work – that they had “stylistic” interest more than functional value.

Dmitry even calls this a partial failure: “I see a slow but consistent interest increase in music/audio by the community, for sure amazing things are just yet to come. I bet in 2017 already we will find a way to make WaveNet practical but my attempts failed so far :)”

Spoiler – that hasn’t really happened in 2017 or 2018. But “failure” to be practical isn’t necessarily a failure. The rising interest has been partly in producing strange results – again, recalling that the vocoder, Theremin, FM synthesis, and many other techniques evolved largely because musicians thought the sounds were cool.

But this also suggests that musicians may uniquely be able to cut through the hype around so-called AI techniques. And that’s important, because these techniques are assigned mystical powers, Wizard of Oz-style.

Big corporations can only hype machine learning when it seems to be magical. But musicians can hype up machine learning even when it breaks – and knowing how and when it breaks is more important than ever. Here’s Holly’s official statement on the release:

For the past two years, we have been building an ensemble in Berlin.

One member is a nascent machine intelligence we have named Spawn. She is being raised by listening to and learning from her parents, and those people close to us who come through our home or participate at our performances.

Spawn can already do quite a few wonderful things. ‘Godmother’ was generated from her listening to the artworks of her godmother Jlin, and attempting to reimagine them in her mother’s voice.

This piece of music was generated from silence with no samples, edits, or overdubs, and trained with the guidance of Spawn’s godfather Jules LaPlace.

In nurturing collaboration with the enhanced capacities of Spawn, I am able to create music with my voice that far surpass the physical limitations of my body.

Going through this process has brought about interesting questions about the future of music. The advent of sampling raised many concerns about the ethical use of material created by others, but the era of machine legible culture accelerates and abstracts that conversation. Simply through witnessing music, Spawn is already pretty good at learning to recreate signature composition styles or vocal characters, and will only get better, sufficient that anyone collaborating with her might be able to mimic the work of, or communicate through the voice of, another.

Are we to recoil from these developments, and place limitations on the ability for non-human entities like Spawn to witness things that we want to protect? Is permission-less mimicry the logical end point of a data-driven new musical ecosystem surgically tailored to give people more of what they like, with less and less emphasis on the provenance, or identity, of an idea? Or is there a more beautiful, symbiotic, path of machine/human collaboration, owing to the legacies of pioneers like George Lewis, that view these developments as an opportunity to reconsider who we are, and dream up new ways of creating and organizing accordingly.

I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster.

– Holly Herndon

Some interesting code:
https://github.com/DmitryUlyanov/neural-style-audio-tf

https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder

Go hear the music:

http://smarturl.it/Godmother

Previously, from the hacklab program I direct, talks and a performance lab with CTM Festival:

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

A look at AI’s strange and dystopian future for art, music, and society

I also wrote about machine learning:

Minds, machines, and centralization: AI and music

The post Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws appeared first on CDM Create Digital Music.

The guts of Tracktion are now open source for devs to make new stuff

Game developers have Unreal Engine and Unity Engine. Well, now it’s audio’s turn. Tracktion Engine is an open source engine based on the guts of a major DAW, but created as a building block developers can use for all sorts of new music and audio tools.

You can build new music apps not only for Windows, Mac, and Linux (including embedded platforms like Raspberry Pi), but for iOS and Android, too. And while developers might go create their own DAW, they might also build other creative tools for performance and production.

The tutorials section already includes examples for simple playback, independent manipulation of pitch and time (meaning you could conceivably turn this into your own DJ deck), and a step sequencer.

We’ve had an open source DAW for years – Ardour. But this is something different – it’s clear the developers have created this with the intention of producing a reusable engine for other things, rather than just dumping the whole codebase for an entire DAW.

Okay, my Unreal and Unity examples are a little optimistic – those are friendly to hobbyists and first-time game designers. Tracktion Engine definitely needs you to be a competent C++ programmer.

But the entire engine is delivered as a JUCE module, meaning you can drop it into an existing project. JUCE has rapidly become the go-to for reasonably painless C++ development of audio tools across plug-in formats, desktop operating systems, and mobile devices. It’s huge that this is available in JUCE.

Even if you’re not a developer, you should still care about this news. It could be a sign that we’ll see more rapid development, letting music-loving developers try out new ideas both in software and in hardware with JUCE-powered software under the hood. And even if this particular engine doesn’t deliver, having the idea out there may spur someone else to try the same notion.

I’ll be really interested to hear if developers find this is practical in use, but here’s what they’re promising developers will be able to use from their engine:

A wide range of supported platforms (Windows, macOS, Linux, Raspberry Pi, iOS and Android)
Tempo, key and time-signature curves
Fast audio file playback via memory mapping
Audio editing including time-stretching and pitch shifting
MIDI with quantisation, groove, MPE and pattern generation
Built-in and external plugin support for all the major formats
Parameter adjustments with automation curves or algorithmic modifiers
Modular plugin patching Racks
Recording with punch, overdub and loop modes along with comp editing
External control surface support
Fully customizable rendering of arrangements

The licensing is also stunningly generous. The code is under a GPLv3 license – meaning if you’re making a GPLv3 project (including artists doing that), you can use the engine freely under that open source license.

But even commercial licensing is wide open. Educational projects get forum support and have no revenue limit whatsoever. (I hope that’s a cue to academic institutions to open up some of their licensing, too.)

Personal projects are free, too, with revenue up to US$50k. (Not to burst anyone’s bubble, but many small developers are below that threshold.)

For $35/mo, with a minimum 12 month commitment, “indie” developers can make up to $200k. Enterprise licensing requires getting in touch, and then offers premium support and the ability to remove branding. They promise paid licenses by next month.

Check out their code and the Tracktion Engine page:

https://www.tracktion.com/develop/tracktion-engine

https://github.com/Tracktion/tracktion_engine/

I think a lot of people will be excited about this, enough so that … well, it’s been a long time. Let’s Ballmer this.

The post The guts of Tracktion are now open source for devs to make new stuff appeared first on CDM Create Digital Music.

Deep Synth combines a Game Boy and the THX sound

Do you love the THX Deep Note sound – that crazy sweep of timbres heard at the beginning of films? Do you wish you had it in a playable synth the size of a calculator? Deep Synth is for you.

First, Deep Note? Just to refresh your memory: (Turn it up!!)

Yeah, that.

Apart from being an all-time great in sound design, the Deep Note’s underlying synthesis approach was novel and interesting. And thanks to the power of new embedded processors, it’s totally possible to squeeze this onto a calculator.

Enter Eugene, Oregon-based professional developer Kernel Bob aka kbob. A low-level Linux coder by day, Bob got interested in making an audio demo for the 1Bitsy-1UP game console, a powerful modern embedded machine with the form factor of a classic Game Boy. (Unlike a Game Boy, you have a decent processor, color screen, USB, and SD card.)

The Deep Note is the mother of all audio demos. That sound is owned by THX, but the basic synthesis approach is not – think 32 voices drifting from a relatively random swarm into the seat-rocking final chord.

The results? Oh, only the most insane synthesizer of the year:

Whether you’re an engineer or not, the behind-the-scenes discussion of how this was done is fascinating for anyone who loves synthesis. (Maybe you can enlighten Bob on this whole bit about the sawtooth oscillator in SuperCollider.)

Read the multi-part series on Deep Synth and sound on this handheld platform:

Deep Synth: Introduction

And to try messing about with Deep Note-style synthesis on your own in SuperCollider, the free, multi-platform coding environment for musicians:

Recreating the THX Deep Note [earslap]
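
Or, if you’d rather rough out the idea in Python before diving into SuperCollider, here’s an approximate sketch of the swarm-into-chord gesture. The voice count comes from the description above, but the glide time, target chord, and cheap sawtooth are my own arbitrary choices – not kbob’s implementation and not THX’s score.

    # Deep Note-style swarm: random pitches gliding into a big chord (rough sketch only).
    import numpy as np
    import wave

    fs = 44100
    dur = 8.0
    n = int(fs * dur)
    t = np.arange(n) / fs

    voices = 32                                   # the voice count mentioned above
    rng = np.random.default_rng(0)
    start = rng.uniform(200.0, 400.0, voices)     # random swarm of starting pitches (Hz)
    # Target chord: a low D stacked in octaves and fifths (my arbitrary choice).
    chord = np.array([36.71 * 2 ** (octv + fifth * 7 / 12)
                      for octv in range(5) for fifth in (0, 1)])
    target = np.resize(chord, voices)

    glide = np.clip(t / 5.0, 0.0, 1.0)            # glide for 5 seconds, then hold
    out = np.zeros(n)
    for s, f in zip(start, target):
        freq = s + (f - s) * glide                # per-sample frequency trajectory
        phase = np.cumsum(freq) / fs              # integrate frequency (in cycles)
        out += 2.0 * (phase % 1.0) - 1.0          # cheap sawtooth via phase wrapping
    out *= np.linspace(0.0, 1.0, n) ** 0.5 / voices   # swell in and normalize

    with wave.open("deep_ish.wav", "wb") as w:    # 16-bit mono WAV you can listen to
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(fs)
        w.writeframes((out * 32767).astype(np.int16).tobytes())

kbob’s real thing, of course, does this on embedded hardware in real time, which is the impressive part.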

All of this is open hardware, open code, so if you are a coder, it might inspire your own projects. And meanwhile, as 1Bitsy-1UP matures, we may soon all have a cool handheld platform for our noisemaking endeavors. I can’t wait.

Thanks to Samantha Lüber for the tip!

Previously:

THX Just Remade the Deep Note Sound to be More Awesome

And we got to interview the sound’s creator (and talk to him about how he recreated it):

Q+A: How the THX Deep Note Creator Remade His Iconic Sound

The post Deep Synth combines a Game Boy and the THX sound appeared first on CDM Create Digital Music.

Hack a Launchpad Pro into a 16-channel step sequencer, free

Novation’s Launchpad Pro is unique among controller hardware: not only does it operate in standalone mode, but it also has easy-to-modify, open source firmware. This mod lets you exploit that to transform it into a 16-channel, 32-step sequencer.

French musician and engineer Quentin Lamerand writes us to share his mod for Novation’s firmware. And you don’t have to be a coder to use this – the firmware is easy to install without any programming background, which was part of the idea of opening it up in the first place.

The project looks really useful. You get 16 channels (for controlling multiple sound parts or devices), plus 32 steps for longer phrases. And since the Launchpad Pro works as standalone hardware, you could use all of this without a computer. (You can output notes on either the USB port – even in standalone mode – or the MIDI DIN out port.)

You’ll need something else to supply clock – the sequencer only works in slave mode – but once you do that (a drum machine, say), you’re good to go.
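
If you want to test-drive it from a computer rather than hardware, a few lines of Python can stand in as the clock source. This sketch assumes the third-party mido library with the python-rtmidi backend – that’s my choice, not part of the mod – and the port name is a placeholder you’d replace with whatever mido lists for your Launchpad:

    # Send MIDI clock at 120 BPM to drive the sequencer in slave mode (test sketch).
    # Assumes the mido library with the python-rtmidi backend is installed.
    import time
    import mido

    print(mido.get_output_names())               # find your Launchpad Pro's port name here
    port = mido.open_output("Launchpad Pro")     # placeholder name – replace with the real one

    bpm = 120
    interval = 60.0 / bpm / 24                   # MIDI clock runs at 24 pulses per quarter note

    port.send(mido.Message("start"))
    try:
        while True:
            port.send(mido.Message("clock"))
            time.sleep(interval)
    except KeyboardInterrupt:
        port.send(mido.Message("stop"))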

Bonus features:

  • Note input with velocity (adjustable using aftertouch on the pads)
  • Repeat notes
  • Adjustable octave
  • Setup mode with track selection, parameters, mute, clear, and MIDI thru toggle
  • Tap steps to select track length
  • Adjust step length (to 32nd, 16th, 16th note triplet, 8th, 8th note triplet, quarter, quarter note triplet, half note)
  • Rotate steps

On one hand, this is what I think most of us believe Novation should have shipped in the first place. On the other hand, look at some of those power-user features – by opening up the firmware, we get some extras the manufacturer probably wouldn’t have added. And if you are handy with some simple code, you can modify this further to get it exactly how you want.

It’s a shame, actually, that we haven’t seen more hackable tools like this. But that’s all the more reason to go grab this one – especially as the Launchpad Pro can be had on the cheap. (Time to dust mine off – that was the other beauty of this project for me!)

Go try Quentin’s work and let us know what you think:

http://faqtor.fr/launchpadpro.html

Got some hacks of your own, or inspired by this to give it a try? Definitely give a shout.

The open firmware project you’ll find on Novation’s GitHub:

https://github.com/dvhdr/launchpad-pro

More:

Hack a Grid: Novation Makes Launchpad Pro Firmware Open Source

Launchpad Pro Grid Controller: Hands-on Comprehensive Guide

The post Hack a Launchpad Pro into a 16-channel step sequencer, free appeared first on CDM Create Digital Music.