Surge is free, deep synth for every platform, with MPE support

Surge is a deep multi-engine digital soft synth – beloved, then lost, then brought back to life as an open source project. And now it’s in a beta that’s usable and powerful and ready on every OS.

I wrote about Surge in the fall when it first hit a free, open source release:

Vember Audio owner @Kurasu made this happen. But software just “being open sourced” often leads nowhere. In this case, Surge has a robust community around it, turning this uniquely open instrument into something you can happily use as a plug-in alongside proprietary choices.

And it really is deep: stack 3 oscillators per voice, use morphable classic or FM or ring modulation or noise engines, route through a rich filter block with feedback and every kind of variation imaginable – even more exotic notch or comb or sample & hold choices, and then add loads of modulation. There are some 12 LFOs per voice, multiple effects, a vocoder, a rotary speaker…

I mention it again because now you can grab Mac (64-bit AU/VST), Windows (32-bit and 64-bit VST), and Linux (64-bit VST) versions, built for you.

And there’s VST3 support.

And there’s support for MPE (MIDI Polyphonic Expression), meaning you can use hardware from ROLI, Roger Linn, Haken, and others – I’m keen to try the Sensel Morph, perhaps with that Buchla overlay.

Now there’s an analog mode for the envelopes, too.

This also holds great promise for people who desire a deep synth but can’t afford expensive hardware. While Apple’s approach means backwards compatibility on macOS is limited, it’ll run on fairly modest machines – meaning this could also be an ideal starting point for building your own integrated hardware/software solution.

In fact, if you’re not much of a coder but are a designer, it looks like design is what they need most at this point. Plus you can contribute sound content, too.

Most encouraging is that they’re trying to build a whole community around this synth – treating open source maintenance not as a chore, but as a shared endeavor.

Check it out now:

https://surge-synthesizer.github.io

Previously:

Powerful SURGE synth for Mac and Windows is now free

The post Surge is free, deep synth for every platform, with MPE support appeared first on CDM Create Digital Music.

Now ‘AI’ takes on writing death metal, country music hits, more

Machine learning is synthesizing death metal. It might make your death metal radio DJ nervous – but it could also mean music software works with timbre and time in new ways. That news – plus some comical abuse of neural networks for writing genre-specific lyrics in genres like country – next.

Okay, first, whether this makes you urgently want to hear machine learning death metal or it drives you into a rage, either way you’ll want the death metal stream. And yes, it’s a totally live stream – you know, generative style. Tune in, bot out:

Now, it’s important to say: the whole point of this is that you need data sets to train on. That is, machines aren’t composing music so much as creatively regurgitating existing samples based on fairly clever predictive mathematical models. In the case of the death metal example, this is SampleRNN – a recurrent neural network that works on raw sample material, repurposed from its originally intended application of generating speech. (Check the original project, though it’s been forked for the results here.)

This is a big, big point – if this sounds a lot like existing music, that’s partly because it is literally sampling that content. The particular death metal example is nice in that the creators have published an academic article. And they’re open about saying they intend “overfitting” – that is, little bits of the samples are actually playing back. The machines aren’t learning to generate this content from scratch; they’re piecing together those samples in interesting ways.

That’s relevant on two levels. One, because once you understand that’s what’s happening, you’ll recognize that machines aren’t magically replacing humans. (This works well for death metal partly because, to non-connoisseurs of the genre, the way angry guitar riffs and undecipherable shouting are plugged together already sounds quite random.)

But two, the fact that sample content is being re-stitched in time like this suggests a very different kind of future sampler. Instead of playing the same 3-second audio on repeat or loop, for instance, you might pour hours or days of singing bowls into your sampler and then adjust dials that recreate those sounds in more organic ways. It might make for new instruments and production software.
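If you want to make that “re-stitching” idea concrete, here’s a toy sketch – emphatically not SampleRNN, which is a real deep recurrent model, just a dumb lookup “predictor” that continues any run of k samples with whatever followed that same run in the training audio. Every name and number in it is illustrative, but it shows why heavy overfitting means fragments of the source recordings literally play back.

```python
# Toy illustration only - NOT SampleRNN. Given the last k quantized samples,
# continue with whatever followed that exact context in the training audio.
# When a context occurs only once, the output is a literal re-stitched snippet.
import numpy as np
from collections import defaultdict

def train(samples, k=8, levels=256):
    """Quantize the audio, then index every k-sample context by the positions
    of the sample that follows it."""
    q = np.clip(((samples + 1) / 2 * (levels - 1)).astype(int), 0, levels - 1)
    index = defaultdict(list)
    for i in range(len(q) - k):
        index[tuple(q[i:i + k])].append(i + k)
    return q, index

def generate(q, index, k=8, n=44100, seed=0):
    rng = np.random.default_rng(seed)
    start = int(rng.integers(0, len(q) - k))
    out = list(q[start:start + k])
    for _ in range(n):
        positions = index.get(tuple(out[-k:]))
        if not positions:                          # unseen context: jump somewhere new
            positions = [int(rng.integers(k, len(q)))]
        out.append(q[rng.choice(positions)])       # "predict" the next sample
    return np.array(out)

# Stand-in training audio: any mono float array in [-1, 1] works here.
audio = 0.5 * np.sin(np.linspace(0, 2000, 44100))
q, index = train(audio)
print(generate(q, index, n=1000)[:10])
```

A real model smooths over those joins statistically, but the family resemblance to the training material comes from the same place.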

Here’s what the creators say:

Thus, we want the output to overfit short timescale patterns (timbres, instruments, singers, percussion) and underfit long timescale patterns (rhythms, riffs, sections, transitions, compositions) so that it sounds like a recording of the original musicians playing new musical compositions in their style.

Sure enough, you can go check their code:

https://github.com/ZVK/sampleRNNICLR2017

Or read the full article:

Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands

The reason I’m belaboring this is simple. Big corporations like Spotify might use this sort of research to develop, well, crappy channels of background music that make vaguely coherent workout soundtracks, or faux Brian Eno, or something that sounds like Erik Satie got caught in an opium den and re-composed his piano repertoire in a half daze. And that would, well, sort of suck.

Alternatively, though, you could make something like a sampler or DAW more human and less conventionally predictable. You know, instead of applying a sample slice to a pad and then having the same snippet repeat every eighth note. (Guilty as charged, your honor.)

It should also be understood that, perversely, this may all be raising the value of music rather than lowering it. Given the amount of recorded music currently available, and given that it can already often be licensed or played for mere cents, the machine learning re-generation of these same genres actually requires more machine computation and more human intervention – just selecting datasets, setting parameters, and choosing results takes real work.

DADABOTS, for their part, have made an entire channel of this stuff. The funny thing is, even when they’re training on The Beatles, what you get sounds like … well, some of the sort of experimental sound you might expect on your low-power college radio station. You know, in a good way – weird, digital drones, of exactly the sort we enjoy. I think there’s a layperson impression that these processes will magically improve. That may misunderstand the nature of the mathematics involved – on the contrary, it may be that these sorts of predictive models always produce these sorts of aesthetic results. (The same team use Markov Chains to generate track names for their Bandcamp label. Markov Chains work as well as they did a century ago; they didn’t just start working better.)

I enjoy listening to The Beatles as though an alien civilization has had to digitally reconstruct their oeuvre from some fallout-shrouded, nuclear-singed remains of the number-one hits box set post apocalypse. (“Help! I need somebody! Help! The human race is dead!” You know, like that.)

As it moves to black metal and death metal, their Bandcamp label progresses in surreal coherence:

This album gets especially interesting, as you get weird rhythmic patterns in the samples. And there’s nothing saying this couldn’t in turn inspire new human efforts. (I once met Stewart Copeland, who talked about how surreal it was hearing human drummers learn to play the rhythms, unplugged, that he could only achieve with The Police using delay pedals.)

I’m really digging this one:

So, digital sample RNN processes mostly generate angry and angular experimental sounds – in a good way. That’s certainly true now, and could be true in the future.

What’s up in other genres?

SONGULARITY is making a pop album. They’re focusing on lyrics (and a very funny faux generated Coachella poster). In this case, though, the work is constrained to text – far easier to produce convincingly than sound. Even a Markov Chain can give you interesting or amusing results; with machine learning applied character-by-character to text, what you get is a hilarious sort of futuristic Mad Libs. (It’s also clear humans are cherry-picking the best results, so these are really humans working with the algorithms much as you might use chance operations in music or poetry.)
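For a sense of how low the bar is on the Markov side, here’s a minimal character-level chain in Python – the corpus string is a stand-in, and the cherry-picking step mentioned above is you, running it until something makes you laugh.

```python
# Minimal character-level Markov chain text generator - the "futuristic Mad
# Libs" effect comes from predicting one character at a time from the corpus.
import random
from collections import defaultdict

def build_model(text, order=4):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, order=4, length=200):
    context = random.choice(list(model.keys()))
    out = context
    for _ in range(length):
        followers = model.get(context)
        if not followers:                     # dead end: restart from a random context
            context = random.choice(list(model.keys()))
            followers = model[context]
        out += random.choice(followers)
        context = out[-order:]
    return out

# Stand-in corpus; swap in a pile of actual country lyrics for the real effect.
corpus = ("take my heart down that old dirt road tonight " * 40 +
          "whiskey good and whiskey straight till morning light " * 40)
print(generate(build_model(corpus)))
```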

Whether or not this says anything about the future of machines, the dadaist results are genuinely funny parody.

And that gives us results like You Can’t Take My Door:

Barbed whiskey good and whiskey straight.

These projects work because lyrics are already slightly surreal and nonsensical. Machines chart directly into the uncanny valley instead of away from it, creating the element of surprise and exaggerated un-realness that is fundamental to a lot of humor in the first place.

The same approach produced this Morrissey number, “Bored With This Desire To Get Ripped” – thanks to the ingenious idea of training the dataset not just with Morrissey lyrics, but also Amazon customer reviews of the P90X home workout DVD system. (Like I said – human genius wins, every time.)

Or there’s Dylan mixed with negative Yelp reviews from Manhattan:

And maybe in this limited sense, the machines are telling us something about how we learn. Part of the poetic flow is about drawing on all our wetware neural connections between everything we’ve heard before – as in the half-awake state of creative vibrations. That is, we follow our own predictive logic without doing the usual censoring that keeps our language rational. Thinking this way, it’s not that we would use machine learning to replace the lyricist. Rather, just as with chance operations in the past, we can use this surreal nonsense to free ourselves from the constraints that normal behavior requires.

We shouldn’t underestimate, though, the human intervention involved in using these lyrics. The neural nets are good at stringing together short bits of words, but the normal act of composition – deciding the larger-scale structure, choosing funnier bits over weaker ones, recognizing patterns – remains human.

Recurrent neural networks probably won’t be playing Coachella any time soon, but if you need a band name, they’re your go-to. More funny text mangling from the Botnik crew.

My guess is, once the hype dies down, these particular approaches will wind up joining the pantheon of drunken walks and Markov Chains and fractals and other pseudo-random or generative algorithmic techniques. I sincerely hope that we don’t wait for that to happen, but use the hype to seize the opportunity to better educate ourselves about the math underneath (or collaborate with mathematicians), and see these more hardware-intensive processes in the context of some of these older ideas.

If you want to know why there’s so much hype and popular interest, though, the human brain may itself hold the answer. We are all of us hard-wired to delight in patterns, which means arguably there’s nothing more human than being endlessly entertained by what these algorithms produce.

But you know, I’m a marathon runner in my sorry way.

The post Now ‘AI’ takes on writing death metal, country music hits, more appeared first on CDM Create Digital Music.

A free, shared visual playground in the browser: Olivia Jack talks Hydra

Reimagine pixels and color, melt your screen live into glitches and textures, and do it all for free on the Web – as you play with others. We talk to Olivia Jack about her invention, live coding visual environment Hydra.

Inspired by analog video synths and vintage image processors, Hydra is open, free, collaborative, and all runs as code in the browser. It’s the creation of US-born, Colombia-based artist Olivia Jack. Olivia joined our MusicMakers Hacklab at CTM Festival earlier this winter, where she presented her creation and its inspirations, and jumped in as a participant – spreading Hydra along the way.

Olivia’s Hydra performances are explosions of color and texture, where even the code becomes part of the aesthetic. And it’s helped take Olivia’s ideas across borders, both in the Americas and Europe. It’s part of a growing interest in the live coding scene, even as that scene enters its second or third decade (depending on how you count), but Hydra also represents an exploration of what visuals can mean and what it means for them to be shared between participants. Olivia has rooted those concepts in the legacy of cybernetic thought.

Oh, and this isn’t just for nerd gatherings – her work has also lit up one of Bogota’s hotter queer parties. (Not that such things need be thought of as a binary, anyway, but in case you had a particular expectation about that.) And yes, that also means you might catch Olivia at a JavaScript conference; I last saw her back from making Hydra run off solar power in Hawaii.

Following her CTM appearance in Berlin, I wanted to find out more about how Olivia’s tool has evolved and its relation to DIY culture and self-fashioned tools for expression.

Olivia with Alexandra Cardenas in Madrid. Photo: Tatiana Soshenina.

CDM: Can you tell us a little about your background? Did you come from some experience in programming?

Olivia: I have been programming now for ten years. Since 2011, I’ve worked freelance — doing audiovisual installations and data visualization, interactive visuals for dance performances, teaching video games to kids, and teaching programming to art students at a university, and all of these things have involved programming.

Had you worked with any existing VJ tools before you started creating your own?

Very few; almost all of my visual experience has been through creating my own software in Processing, openFrameworks, or JavaScript rather than using existing software. I have used Resolume in one or two projects. I don’t even really know how to edit video, but I sometimes use [Adobe] After Effects. I had no intention of making software for visuals, but started an investigative process related to streaming on the internet and also trying to learn about analog video synthesis without having access to modular synth hardware.

Alexandra Cárdenas and Olivia Jack @ ICLC 2019:

In your presentation in Berlin, you walked us through some of the origins of this project. Can you share a bit about how this germinated, what some of the precursors to Hydra were and why you made them?

It’s based on an ongoing investigation of:

  • Collaboration in the creation of live visuals
  • Possibilities of peer-to-peer [P2P] technology on the web
  • Feedback loops

Precursors:

A significant moment came as I was doing a residency at Platohedro in Medellín in May of 2017. I was teaching beginning programming, but also wanted to have larger conversations about the internet and talk about some possibilities of peer-to-peer protocols. So I taught programming using p5.js (the JavaScript version of Processing). I developed a library so that the participants of the workshop could share in real-time what they were doing, and the other participants could use what they were doing as part of the visuals they were developing in their own code. I created a class/library in JavaScript called pixel parche to make this sharing possible. “Parche” is a very Colombian word in Spanish for a group of friends; this reflected the community I felt while at Platohedro, the idea of just hanging out and jamming and bouncing ideas off of each other. The tool clogged the network and I tried to cram too much information in a very short amount of time, but I learned a lot.

I was also questioning some of the metaphors we use to understand and interact with the web. “Visiting” a website is really exchanging a bunch of bytes with a faraway place, routed through other faraway places. Rather than think about a webpage as a “page”, “site”, or “place” that you can “go” to, what if we think about it as a flow of information where you can configure connections in realtime? I like the browser as a place to share creative ideas – anyone can load it without having to go to a gallery or install something.

And I was interested in using the idea of a modular synthesizer as a way to understand the web. Each window can receive video streams from and send video to other windows, and you can configure them in real time using WebRTC (real-time web streaming).

Here’s one of the early tests I did:

https://vimeo.com/218574728

I really liked this philosophical idea you introduced of putting yourself in a feedback loop. What does that mean to you? Did you discover any new reflections of that during our hacklab, for that matter, or in other community environments?

It’s about processes of creation – not having a specific idea of where it will end up, trying something, seeing what happens, and then trying something else.

Code tries to define the world using a specific set of rules, but at the end of the day it ends up chaotic. Maybe the world is chaotic. It’s important to be self-reflective.

How did you come to developing Hydra itself? I love that it has this analog synth model – and these multiple frame buffers. What was some of the inspiration?

I had no intention of creating a “tool”… I gave a workshop at the International Conference on Live Coding in December 2017 about collaborative visuals on the web, and made an editor to make the workshop easier. Then afterwards people kept using it.

I didn’t think too much about the name but [had in mind] something about multiplicity. Hydra organisms have no central nervous system; their nervous system is distributed. There’s no hierarchy of one thing controlling everything else, but rather interconnections between pieces.

Ed.: Okay, Olivia asked me to look this up and – wow, check out nerve nets. There’s nothing like a head, let alone a central brain. Instead, the aquatic creatures in the genus Hydra have senses and neurons essentially as one interconnected network, with cells that detect light and touch forming a distributed sensory awareness.

Most graphics abstractions are based on the idea of a 2d canvas or 3d rendering, but the computer graphics card actually knows nothing about this; it’s just concerned with pixel colors. I wanted to make it easy to play with the idea of routing and transforming a signal rather than drawing on a canvas or creating a 3d scene.

This also contrasts with directly programming a shader (one of the other common ways that people make visuals using live coding), where you generally only have access to one frame buffer for rendering things to. In Hydra, you have multiple frame buffers that you can dynamically route and feed into each other.

MusicMakers Hacklab in Berlin. Photo: Malitzin Cortes.

Livecoding is of course what a lot of people focus on in your work. But what’s the significance of code as the interface here? How important is it that it’s functional coding?

It’s inspired by [Alex McLean’s sound/music pattern environment] TidalCycles — the idea of taking a simple concept and working from there. In Tidal, the base element is a pattern in time, and everything is a transformation of that pattern. In Hydra, the base element is a transformation from coordinates to color. All of the other functions either transform coordinates or transform colors. This directly corresponds to how fragment shaders and low-level graphics programming work — the GPU runs a program simultaneously on each pixel, which receives the coordinates of that pixel and outputs a single color.

I think immutability in functional (and declarative) coding paradigms is helpful in live coding; you don’t have to worry about mentally keeping track of a variable and what its value is or the ways you’ve changed it leading up to this moment. Functional paradigms are really helpful in describing analog synthesis – each module is a function that always does the same thing when it receives the same input. (Parameters are like knobs.) I’m very inspired by the modular idea of defining the pieces to maximize the amount that they can be rearranged with each other. The code describes the composition of those functions with each other. The main logic is functional, but things like setting up external sources from a webcam or live stream are not at all; JavaScript allows mixing these things as needed. I’m not super opinionated about it, just interested in the ways that the code is legible and makes it easy to describe what is happening.
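Ed.: since that “coordinates in, color out” model is the heart of it, here’s a toy Python sketch of the idea. To be clear, this is not Hydra’s actual API, and it skips Hydra’s multiple frame buffers and feedback routing; it just shows sources as functions from (x, y) to a color, with every other function either warping coordinates or remapping colors – the same shape of program a fragment shader runs per pixel.

```python
# Toy "coordinates in, color out" model - not Hydra's API. A source maps
# (x, y) to an RGB tuple; other functions either transform coordinates or
# transform colors; a patch is just the composition of those functions.
import math

def osc(freq=10.0):
    """Source: sine-wave stripes, a stand-in for a video oscillator."""
    def source(x, y):
        v = 0.5 + 0.5 * math.sin(x * freq)
        return (v, v, v)
    return source

def rotate(source, angle):
    """Coordinate transform: sample the source through rotated coordinates."""
    c, s = math.cos(angle), math.sin(angle)
    return lambda x, y: source(x * c - y * s, x * s + y * c)

def invert(source):
    """Color transform: leave coordinates alone, remap the output color."""
    return lambda x, y: tuple(1.0 - ch for ch in source(x, y))

# The "patch" is plain function composition - same input always gives the
# same output, like a modular synth voice with the knobs frozen.
patch = invert(rotate(osc(freq=20.0), angle=0.7))

# A GPU would evaluate this once per pixel in parallel; here we just loop.
width, height = 8, 4
frame = [[patch(x / width, y / height) for x in range(width)] for y in range(height)]
print(frame[0][0])
```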

What’s the experience you have of the code being onscreen? Are some people actually reading it / learning from it? I mean, in your work it also seems like a texture.

I am interested in it being somewhat understandable even if you don’t know what it is doing or that much about coding.

Code is often a visual element in a live coding performance, but I am not always sure how to integrate it in a way that feels intentional. I like using my screen itself as a video texture within the visuals, because then everything I do — like highlighting, scrolling, moving the mouse, or changing the size of the text — becomes part of the performance. It is really fun! Recently I learned about prepared desktop performances, and – related to the live-coding mantra of “show your screens” – I like the idea that everything I’m doing is a part of the performance. And that’s also why I directly mirror the screen from my laptop to the projector. You can contrast that to just seeing the output of an AV set, and having no idea how it was created or what the performer is doing. I don’t think it’s necessary all the time, but it feels like using the computer as an instrument and exploring different ways that it is an interface.

The algorave thing is now getting a lot of attention, but you’re taking this tool into other contexts. Can you talk about some of the other parties you’ve played in Colombia, or when you turned the live code display off?

Most of my inspiration and references for what I’ve been researching and creating have been outside of live coding — analog video synthesis, net art, graphics programming, peer-to-peer technology.

Having just said I like showing the screen, I think it can sometimes be distracting and isn’t always necessary. I did visuals for Putivuelta, a queer collective and party focused on diasporic Latin club music and wanted to just focus on the visuals. Also I am just getting started with this and I like to experiment each time; I usually develop a new function or try something new every time I do visuals.

Community is such an interesting element of this whole scene. So I know with Hydra so far there haven’t been a lot of outside contributions to the codebase – though this is a typical experience of open source projects. But how has it been significant to your work to both use this as an artist, and teach and spread the tool? And what does it mean to do that in this larger livecoding scene?

I’m interested in how technical details of Hydra foster community — as soon as you log in, you see something that someone has made. It’s easy to share via the Twitter bot, to see and edit the code of what someone has made live, and to make your own. It acts as a gallery of shareable things that people have made:

https://twitter.com/hydra_patterns

Although I’ve developed this tool, I’m still learning how to use it myself. Seeing how other people use it has also helped me learn how to use it.

I’m inspired by work that Alex McLean and Alexandra Cardenas and many others in live coding have done on this — just the idea that you’re showing your screen and sharing your code with other people to me opens a conversation about what is going on, that as a community we learn and share knowledge about what we are doing. Also I like online communities such as talk.lurk.org and streaming events where you can participate no matter where you are.

I’m also really amazed at how this is spreading through Latin America. Do you feel like there’s some reason the region has been so fertile with these tools?

It’s definitely influenced me rather than the other way around, getting to know Alexandra [Cardenas’] work, Esteban [Betancur, author of live coding visual environment Cine Vivo], rggtrn, and Mexican live coders.

Madrid performance. Photo: Tatiana Soshenina.

What has the scene been like there for you – especially now living in Bogota, having grown up in California?

I think people are more critical about technology and so that makes the art involving technology more interesting to me. (I grew up in San Francisco.) I’m impressed by the amount of interest in art and technology spaces such as Plataforma Bogota that provide funding and opportunities at the intersection of art, science, and technology.

The press lately has fixated on live coding or algorave but maybe not seen connections to other open source / DIY / shared music technologies. But – maybe now especially after the hacklab – do you see some potential there to make other connections?

To me it is all really related, about creating and hacking your own tools, learning, and sharing knowledge with other people.

Oh, and lastly – want to tell us a little about where Hydra itself is at now, and what comes next?

Right now, it’s improving documentation and making it easier for others to contribute.

Personally, I’m interested in performing more and developing my own performance process.

Thanks, Olivia!

Check out Hydra for yourself, right now:

https://hydra-editor.glitch.me/

Previously:

Inside the livecoding algorave movement, and what it says about music

Magical 3D visuals, patched together with wires in browser: Cables.gl

The post A free, shared visual playground in the browser: Olivia Jack talks Hydra appeared first on CDM Create Digital Music.

Synth One is a free, no-strings-attached, iPad and iPhone synthesizer

Call it the people’s iOS synth: Synth One is free – without ads or registration or anything like that – and loved. And now it’s reached 1.0, with iPad and iPhone support and some expert-designed sounds.

First off – if you’ve been wondering what happened to Ashley Elsdon, aka Palm Sounds and editor of our Apps section, he’s been on a sabbatical since September. We’ll be thinking soon about how best to feature his work on this site and how to integrate app coverage in the current landscape. But you can read his take on why AudioKit matters, and if Ashley says something is awesome, that counts.

But with lots of software synths out there, why does Synth One matter in 2019? Easy:

It’s really free. Okay, sure, it’s easy for Apple to “give away” software when they make more on their dongles and adapters than most app developers charge. But here’s an independent app that’s totally free, without needing you to join a mailing list or look at ads or log into some cloud service.

It’s a full-featured, balanced synth. Under the hood, Synth One is a polysynth with hybrid virtual analog / FM, with five oscillators, step sequencer, poly arpeggiator, loads of filtering and modulation, a rich reverb, multi-tap delay, and loads of extras.

There’s standards support up the wazoo. Are you visually impaired? There’s Voice Over accessibility. Want Ableton Link support? MIDI learn on everything? Compatibility with Audiobus 3 and Inter App Audio so you can run this in your favorite iOS DAW? You’re set.

It’s got some hot presets. Sound designer Francis Preve has been on fire lately, making presets for everyone from KORG to the popular Serum plug-in. And version 1.0 launches with Fran’s sound designs – just what you need to get going right away. (Fran’s sound designs are also usually great for learning how a synth works.)

It’s the flagship of an essential framework. Okay, the above matters to users – this matters to developers (who make stuff users care about, naturally). Synth One is the synthesizer from the people who make AudioKit. That’s good for making sure the framework is solid – plus:

You can check out the source code. Everything is up at github.com/AudioKit/AudioKitSynthOne – meaning Synth One is also an (incredibly sophisticated) example app for AudioKit.

More is coming… MPE (MIDI Polyphonic Expression) and AUv3 are coming soon, say the developers.

And now the big addition —

It runs on iPhone, too. I have to say, I’ve been waiting for a synth that’s pocket sized for extreme portability, but few really are compelling. Now you can run this on any iPhone 6 or better – and if you’ve got a higher-end iPhone (iPhone X/XS/XR / iPhone XS Max / 6/7/8 Plus size), you’ll get a specially optimized UI with even more space.

Check out this nice UI:

On iPhone:

More:

AudioKit Synth One 1.0 arrives, is universal, is awesome

The post Synth One is a free, no-strings-attached, iPad and iPhone synthesizer appeared first on CDM Create Digital Music.

TidalCycles, free live coding environment for music, turns 1.0

Live coding environments are free, run on the cheapest hardware as well as the latest laptops, and offer new ways of thinking about music and sound that are leading a global movement. And one of the leading tools of that movement just hit a big milestone.

This isn’t just about a nerdy way of making music. TidalCycles is free, and tribes of people form around using it. Just as important as how impressive the tool may be, the results are spectacular and varied.

There are some people who take on live coding as their primary instrument – some who haven’t had experience using even computers or electronic music production tools before, let alone whole coding environments. But I think these tools are worth a look even if you don’t envision yourself projecting code onstage as you type live. TidalCycles in particular had its origins not in computer science, but in creator Alex McLean’s research into rhythm and cycle. It’s a way of experiencing a musical idea as much as it is a particular tool.

TidalCycles has been one of the more popular tools, because it’s really easy to learn and musical. The one downside is a slightly convoluted install process, since it’s built on SuperCollider, as opposed to tools that now run in a Web browser. On the other hand, the payoff for that added work is that you’ll never outgrow TidalCycles itself – because you can move to SuperCollider’s wider range of tools if you choose.

New in version 1.0 is a whole bunch of architectural improvements that really make the environment feel mature. And there’s one major addition: controller input means you can play TidalCycles like an instrument, even without coding as you perform:
New functions
Updated innards
New ways of combining patterns
Input from live controllers
The ability to set tempo with patterns

Maybe just as important as the plumbing improvements, you also get expanded documentation and an all-new website.

Check out the full list of changes:

https://tidalcycles.org/index.php/Changes_in_Tidal_1.0.0

You’ll need to update some of your code as there’s been some renaming and so on.

But the ability to input OSC and MIDI is especially cool, not least because you can now “play” all the musical, rhythmic stuff TidalCycles does with patterns.

There’s enough musicality and sonic power in TidalCycles that it’s easy to imagine some people will take advantage of the live coding feedback as they create a patch, but play more in a conventional sense with controllers. I’ll be honest; I couldn’t quite wrap my head around typing code as the performance element in front of an audience. And that makes some sense; some people who aren’t comfortable playing actually find themselves more comfortable coding – and those people aren’t always programmers. Sometimes they’re non-programmers who find this an easier way to express themselves musically. Now, you can choose, or even combine the two approaches.

Also worth saying – TidalCycles has happened partly because of community contributions, but it’s also the work primarily of Alex himself. You can keep him doing this by “sending a coffee” – TidalCycles works on the old donationware model, even as the code itself is licensed free and open source. Do that here:

http://ko-fi.com/yaxulive#

While we’ve got your attention, let’s look at what you can actually do with TidalCycles. Here’s our friend Miri Kat with her new single out this week, the sounds developed in that environment. It’s an ethereal, organic trip (the single is also on Bandcamp):

We put out Miri’s album Pursuit last year – not really because it was made in a livecoding environment, but because I was in love with the music – and a lot of listeners responded the same way:

For an extended live set, here’s Alex himself playing in November in Tokyo:

And Alexandra Cardenas, one of the more active members of the TidalCycles scene, played what looked like a mind-blowing set in Bogota recently. On visuals is Olivia Jack, who created vibrant, eye-searing goodness in the live coding visual environment of her own invention, Hydra. (Hydra works in the browser, so you can try it right now.)

Unfortunately there are only clips – you had to be there – but here’s a taste of what we’re all missing out on:

See also the longer history of Tidal

It’ll be great to see where people go next. If you haven’t tried it yet, you can dive in now:

https://tidalcycles.org/

Image at top: Alex, performing as part of our workshop/party Encoded in Berlin in June.

The post TidalCycles, free live coding environment for music, turns 1.0 appeared first on CDM Create Digital Music.

Check out these amazing DIY controllers people made with OpenDeck

You’ve got plenty of off-the-shelf controllers – but what if you want something that’s unique to you? OpenDeck is an affordable, young, Arduino-powered controller platform for DIYers, and it’s starting to produce some jaw-dropping results.

There was a time when you needed to build your own stuff to add custom controls to synths and computers, sourcing joysticks and knobs and buttons and whatnot yourself. Doepfer’s Pocket Electronic platform spawned tons of weird and wonderful stuff. But then a lot of people found they were satisfied with a growing assortment of off-the-shelf generic and software-specific controllers, including those from the likes of Ableton, Native Instruments, Novation, and Akai.

But a funny thing happened at the same time. Just as economies of scale and improved microcontroller and development platforms have aided big manufacturers in the intervening years, DIY platforms are getting smarter and easier, too.

Enter OpenDeck. It’s what you’d expect from a current generation platform for gear makers. It supports class-compliant MIDI over USB, but also runs standalone. You can configure it via Web interface. You can plug in buttons and encoders and pots and other inputs and LEDs – but also add displays. You have tons of I/O – 32-64 ins, and 48 outs. But it’s all based on the familiar, friendly Arduino platform – and runs on Arduino and Teensy boards in addition to a custom OpenDeck board.

You get an easy platform that supports all the I/O you need and isn’t hard to code – leaving you to focus on hardware. And it runs on an existing platform rather than forcing you to learn something new.

I’ll take a look at it soon. Because it’s built around MIDI, OpenDeck looks ideal for controller applications, though other solutions now address audio, too.

But platform aside, look how many cool things people are starting to build. With so many stage rigs getting standardized (yawn), it’s nice to see this sort of weird variety … and people who have serious craft. (At least the rest of us can sigh and wish we were this handy, right?)

Examples:

Bergamot is an all-custom touchscreen MIDI controller for DJing:

The very nice-looking OpenDeck custom board is US$149. But you can also load this on much cheaper Arduino boards if you want to give it a test drive or start prototyping before you spring for the full board – and you can even buy pre-configured Arduinos to save yourself some time. (Some of the other boards are also more form efficient if you’re willing to do some additional work designing a board around it.)

Sensimidia, for Croatian dub act “Homegrown Sound.”

Tannin and Ceylon, two MIDI controllers.

Morten Berthelsen built this Elektron Analog controller.

Elektron’s Octatrack gets a custom controller … and foot pedals, too. By Anthony Vogt.

OpenDeck also features open source firmware under a GPLv3 license.

GitHub project page including full feature set (lots of nice stuff)

Here’s the underlying platform itself:

OpenDeck’s own custom hardware – though if this is overkill, various Arduino/Teensy variants work, too.

Configuration via Web interface.

Project site:

https://shanteacontrols.com/

The post Check out these amazing DIY controllers people made with OpenDeck appeared first on CDM Create Digital Music.

The guts of Tracktion are now open source for devs to make new stuff

Game developers have Unreal Engine and Unity Engine. Well, now it’s audio’s turn. Tracktion Engine is an open source engine based on the guts of a major DAW, but created as a building block developers can use for all sorts of new music and audio tools.

You can build new music apps not only for Windows, Mac, and Linux (including embedded platforms like Raspberry Pi), but for iOS and Android, too. And while developers might go create their own DAW, they might also build other creative tools for performance and production.

The tutorials section already includes examples for simple playback, independent manipulation of pitch and time (meaning you could conceivably turn this into your own DJ deck), and a step sequencer.

We’ve had an open source DAW for years – Ardour. But this is something different – it’s clear the developers have created this with the intention of producing a reusable engine for other things, rather than just dumping the whole codebase for an entire DAW.

Okay, my Unreal and Unity examples are a little optimistic – those are friendly to hobbyists and first-time game designers. Tracktion Engine definitely needs you to be a competent C++ programmer.

But the entire engine is delivered as a JUCE module, meaning you can drop it into an existing project. JUCE has rapidly become the go-to for reasonably painless C++ development of audio tools across plug-ins and operating systems and mobile devices. It’s huge that this is available in JUCE.

Even if you’re not a developer, you should still care about this news. It could be a sign that we’ll see more rapid development that allows music loving developers to try out new ideas, both in software and in hardware with JUCE-powered software under the hood. And I think with this idea out there, if it doesn’t deliver, it may spur someone else to try the same notion.

I’ll be really interested to hear if developers find this is practical in use, but here’s what they’re promising developers will be able to use from their engine:

A wide range of supported platforms (Windows, macOS, Linux, Raspberry Pi, iOS and Android)
Tempo, key and time-signature curves
Fast audio file playback via memory mapping
Audio editing including time-stretching and pitch shifting
MIDI with quantisation, groove, MPE and pattern generation
Built-in and external plugin support for all the major formats
Parameter adjustments with automation curves or algorithmic modifiers
Modular plugin patching Racks
Recording with punch, overdub and loop modes along with comp editing
External control surface support
Fully customizable rendering of arrangements

The licensing is also stunningly generous. The code is under a GPLv3 license – meaning if you’re making a GPLv3 project (including artists doing that), you can use the engine freely under that open source license.

But even commercial licensing is wide open. Educational projects get forum support and have no revenue limit whatsoever. (I hope that’s a cue to academic institutions to open up some of their licensing, too.)

Personal projects are free, too, with revenue up to US$50k. (Not to burst anyone’s bubble, but many small developers are below that threshold.)

For $35/mo, with a minimum 12 month commitment, “indie” developers can make up to $200k. Enterprise licensing requires getting in touch, and then offers premium support and the ability to remove branding. They promise paid licenses by next month.

Check out their code and the Tracktion Engine page:

https://www.tracktion.com/develop/tracktion-engine

https://github.com/Tracktion/tracktion_engine/

I think a lot of people will be excited about this, enough so that … well, it’s been a long time. Let’s Ballmer this.

The post The guts of Tracktion are now open source for devs to make new stuff appeared first on CDM Create Digital Music.

Powerful SURGE synth for Mac and Windows is now free

Vember Audio’s Surge synth could be an ideal choice for an older machine or a tight budget – with deep modulation and loads of wavetables, now free and open source.

And that really means open source: Surge gets a GPL v3 license, which could also make this the basis of other projects.

People are asking for this a lot – “just open source it.” But that can be a lot of work, often prohibitively so. So it’s impressive to see source code dumped on GitHub.

And Surge is a deep synth, even if last updated in 2008. You get an intensive modulation architecture, nearly 200 wavetables, and a bunch of effects (including vocoder and rotary speaker). Plus it’s already 64-bit, so even though it’s a decade old, it’ll play reasonably nicely on newer machines.

Inside the modulation engine.

Features:

General

Synthesis method: Subtractive hybrid
Each patch contains two ‘scenes’, which are separate instances of the entire synthesis engine (except effects) that can be used for layering or split patches.
Quick category-based patch-browser
Future proof, comes as both a 32 & 64-bit VST plugin (Windows PC)
Universal Binary for both VST and AU (Mac)

Factory sounds

1010 patches
183 wavetables

Oscillators

3 oscillators/voice
8 versatile oscillator algorithms: Classic, Sine, Wavetable, Window, FM2, FM3, S/H Noise and Audio-input
The classic oscillator is a morphable pulse/saw/dualsaw oscillator with a sub-oscillator and self-sync.
The FM2/FM3 oscillators consist of 1 carrier with 2/3 modulators and various options.
Most algorithms (except FM2, FM3, Sine and Audio-input) offer up to 16-voice unison at the oscillator level.
Oscillator FM/ringmodulation
Most oscillator algorithms (except FM2/FM3) are strictly band-limited yet still cover the entire audible spectrum, delivering a clear punchy yet clean sound.
Noise generator with variable spectrum.

Filterblock

Two filter-units arrangeable in 8 different configurations
Feedback loop
Available filter-algorithms (number of variations in parentheses): LP12 (3), LP24 (3), LP24L (1-4 poles), HP12 (3), HP24 (3), BP (4), Notch (2), Comb (4), S&H
Filters can self-oscillate (with excitation) and respond amazingly fast to cutoff frequency changes.
Waveshaper (5 shapes)

Modulation

12 LFO-units available to each voice (6 are running on each voice and 6 are shared for the scene)
DAHDSR envelope generators on every LFO-unit
7 deformable LFO-waveforms + 1 drawable/stepsequencer waveform
LFO1 allows envelope retriggering when used as stepsequencer
Extremely fast and flexible modulation routing. Almost every continuous parameter can be modulated.

Effects

8 effect units arranged as 2 inserts/scene, 2 sends and 2 master effects
10 top-quality algorithms: Delay, Reverb, Chorus, Phaser, EQ, Distortion, Conditioner (EQ, stereo-image control & limiter), Rotary speaker, Frequency shifter, Vocoder

http://vemberaudio.se/surge.php

Via Synthtopia.

The post Powerful SURGE synth for Mac and Windows is now free appeared first on CDM Create Digital Music.

Watch this $30 kit turn into all these other synthesizers

DIY guru Mitch Altman has been busy expanding ArduTouch, the $30 kit board he designed to teach synthesis and coding. And now you can turn it into a bunch of other synths – with some new videos to show you how that works.

You’ll need to do a little bit of tinkering to get this working – though for many, of course, that’ll be part of the fun. So you solder together the kit, which includes a capacitive touch keyboard (as found on instruments like the Stylophone) and speaker. That means once the soldering is done, you can make sounds. To upload different synth code, you need a programmer cable and some additional steps.

Where this gets interesting is that the ArduTouch is really an embedded computer – and what’s wonderful about computers is, they transform based on whatever code they’re running.

ArduTouch is descended from the Arduino project, which in turn was the embedded hardware coding answer to desktop creative coding environment Processing. And from Processing, there’s the idea of a “sketch” – a bit of code that represents a single idea. “Sketching” was vital as a concept to these projects as it implies doing something simpler and more elegant.

For synthesis, ArduTouch is collecting a set of its own sketches – simple, fun digital signal processing creations that can be uploaded to the board. You get a whole collection of these, including sketches that are meant to serve mainly as examples, so that over time you can learn DSP coding. (The sketches are mostly the creation of Mitch’s friend, Bill Alessi.) Because the ArduTouch itself is cloned from the Arduino UNO, it’s also fully compatible both with UNO boards and the Arduino coding environment.

Mitch has been uploading videos and descriptions (and adding new synths over time), so let’s check them out:

Thick is a Minimoog-like, playable monosynth.

Arpology is an “Eno-influenced” arpeggiator/synth combo with patterns, speed, major/minor key, pitch, and attack/decay controls, plus a J.S. Bach-style generative auto-play mode.

Beatitude is a drum machine with multiple parts and rhythm track creation, plus a live playable bass synth.

Mantra is a weird, exotic-sounding sequenced drone synth with pre-mapped scales. The description claims “it is almost impossible to play something that doesn’t sound good.” (I initially read that backwards!)

Xoid is a raucous synth with frequency modulation, ratio, and XOR controls. Actually, this very example demonstrates just why ArduTouch is different – like, you’d probably not want to ship Xoid as a product or project on its own. But as a sketch – and something strange to play with – it’s totally great.

DuoPoly is also glitchy and weird, but represents more of a complete synth workstation – and it’s a grab-bag demo of all the platform can do. So you get Tremolo, Vibrato, Pitch Bend, Distortion Effects, Low Pass Filter, High Pass Filter, Preset songs/patches, LFOs, and other goodies, all crammed onto this little board.

There, they’ve made some different oddball preset songs, too:

Platinum hit, this one:

This one, it sounds like we hit a really tough cave level in Metroid:

Open source hardware, kits available for sale:

https://cornfieldelectronics.com/cfe/projects.php#ardutouch

https://github.com/maltman23/ArduTouch

The post Watch this $30 kit turn into all these other synthesizers appeared first on CDM Create Digital Music.

Watch an Ableton Live sequence made physical on the monome grid

The monome made history by transforming the virtual world of the computer into a low-fidelity grid of lights and buttons. But it’s no less magical today – especially in the hands of stretta.

Watch:

Matthew Davidson has been an innovative developer of patches for the monome since its early days. And that’s a principal innovation of the hardware: by reducing the “screen” to a minimal on/off grid, and lighting buttons independently from your input, the monome becomes a distillation of the ideas in a particular computer patch. Just like a fretboard or the black and white keys of a grand piano, a music box roll or the notes on a staff, it’s an abstraction of the music itself. And its simplicity is part of its power – a simplicity that a mouse and a high-definition color display lack.

Matthew is using some features the first-generation monome didn’t have – the varibright lights, and a recommended 128-format grid. But otherwise, this riffs on the original idea.

And remember last week when we covered Berklee College of Music introducing the study of electronic instruments? Well, Davidson has developed a whole series of these kinds of clever inventions as a set of studies in grid performance.

That is, the choice of Bach is fitting. This is classical grid from a virtuoso, a Well-Tempered Monome if you like.

Check out the full gridlab collection:

https://github.com/stretta/gridlab

Previously:

What do you play? Berklee adds electronic digital instrument program

Updated: so what about other grids?

Via social media, Matthew Davidson elaborates on why this setup requires the monome – which still says a lot about the uniqueness of the monome design:

First up is 64 buttons versus 512. It’ll work on a 128 kinda, barely, but it is awkward. An implementation of a fold mode might make that useable.

Second is the protocol. The monome protocol provides the ability to update a quadrant with a simple, compact message. This is what is used to achieve the fluidity. If you want to update the entire grid of a Launchpad, you have to send 64 individual messages, one for each LED.

Lastly is the issue of MIDI devices and M4L. The monome uses serialosc to communicate. Because of this, a monome M4L device can send and receive MIDI data at the same time as sending and receiving button/LED data.

[Reproduced with permission.]
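To put some rough numbers on that quadrant point: the serialosc OSC protocol has a /grid/led/map message that updates an 8×8 quad with eight row bitmasks in one go, versus setting LEDs one at a time. Here’s a hedged Python sketch using python-osc – the port and the “/monome” address prefix are placeholders, since serialosc assigns the real values per device.

```python
# Rough sketch of the quadrant-update point above. One /grid/led/map message
# carries an 8x8 quad as eight row bitmasks; a per-LED protocol needs 64
# messages. The port and "/monome" prefix are placeholders - serialosc
# reports the real ones for each connected device.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 12002)   # placeholder device port

def draw_quad(pattern):
    """Send an 8x8 quadrant in a single OSC message: one bitmask per row."""
    rows = [sum(bit << x for x, bit in enumerate(row)) for row in pattern]
    client.send_message("/monome/grid/led/map", [0, 0] + rows)

def draw_per_led(pattern):
    """The per-LED equivalent: 64 separate messages for the same quad."""
    for y, row in enumerate(pattern):
        for x, bit in enumerate(row):
            client.send_message("/monome/grid/led/set", [x, y, bit])

checkerboard = [[(x + y) % 2 for x in range(8)] for y in range(8)]
draw_quad(checkerboard)       # 1 message
draw_per_led(checkerboard)    # 64 messages
```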

Of course, if you have other DIY ideas here, we’d love to hear them!

The post Watch an Ableton Live sequence made physical on the monome grid appeared first on CDM Create Digital Music.