How to make a multitrack recording in VCV Rack modular, free

In the original modular synth era, your only way to capture ideas was to record to tape. But that same approach can be liberating even in the digital age – and it’s a perfect match for the open VCV Rack software modular platform.

Competing modular environments like Reaktor, Softube Modular, and Cherry Audio Voltage Modular all run well as plug-ins. That functionality is coming soon to a VCV Rack update, too – see my recent write-up on that. In the meantime, VCV Rack is already capable of routing audio into a DAW or multitrack recorder – via the existing (though soon-to-be-deprecated) VST Bridge, or via inter-app routing schemes on each OS, including JACK.

Those are all good solutions, so why would you bother with a module inside the rack?

Well, for one, there’s workflow. There’s something nice about being able to just keep a recorder module handy and grab a weird sound or nice groove at will, without having to shift to another tool.

Two, the big ongoing disadvantage of software modular is that it’s still pretty CPU intensive – sometimes unpredictably so. Running Rack standalone means you don’t have to worry about overhead from the host, or its audio driver settings, or anything like that.

A free recording solution inside VCV Rack

What you’ll need to make this work are the free NYSTHI modules for VCV Rack, available via Rack’s plug-in manager. Get ready, though – there are a hell of a lot of them.

Big thanks to chaircrusher for this tip and some other ones that informed this article – do go check his music.

Type “recorder” into the search box for modules, and you’ll see several different options from NYSTHI – current at least as of this writing.

2 Channel MasterRecorder is a simple stereo recorder.
2 Channel MasterRecorder 2 adds various features: monitoring outs, autosave, a compressor, and “stereo massaging.”
Multitrack Recorder is a multitrack recorder with 4- or 8-channel modes.

The multitrack is the one I use the most. It allows you to create stems you can then mix in another host, or turn into samples (or, say, load onto a drum machine or the like), making this a great sound design tool and sound starter.

This is creatively liberating for the same reason it’s actually fun to have a multitrack tape recorder in the same studio as a modular, speaking of vintage gear. You can muck about with knobs, find something magical, and record it – then move on to something else without worrying about losing the moment.

The AS mixer, routed into NYSTHI’s multitrack recorder.

Set up your mix. The free included Fundamental modules in Rack will cover the basics, but I would also go download Alfredo Santamaria’s excellent selection, the AS modules, also in the Plugin Manager, and also free. Alfredo has created friendly, easy-to-use 2-, 4-, and 8-channel mixers that pair perfectly with the NYSTHI recorders.

Add the mixer, route your various parts, set level (maybe with some temporary panning), and route the output of the mixer to the Audio device for monitoring. Then use the ‘O’ row to get a post-fader output with the level.

(Alternatively, if you need extra features like sends, there’s the mscHack mixer, though it’s more complex and less attractive.)

Prep that signal. You might also consider a DC Offset and Compressor between your raw sources and the recording. (Thanks to Jim Aikin for that tip.)
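
The DC Offset module handles the job inside the rack. If you’d rather clean up a take after the fact instead, here’s a minimal post-processing sketch – assuming Python with numpy and scipy installed, and with hypothetical filenames – that simply subtracts the per-channel mean (the DC component) from a recorded WAV:

# Rough sketch: remove DC offset from a recorded WAV after the fact.
# Assumes numpy and scipy are installed; the filenames are just examples.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("take.wav")   # shape: (samples,) or (samples, channels)
audio = data.astype(np.float64)
audio -= audio.mean(axis=0)             # subtract the per-channel mean, i.e. the DC component
wavfile.write("take_nodc.wav", rate, audio.astype(data.dtype))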

Configure the recorder. Right-click on the recorder for an option to set 24-bit audio if you want more headroom, or to pre-select a destination. Set 4- or 8-track mode with the switch. Set CHOOSE FILE if you want to manually select where to record.

There are trigger ins and outs, too, so apart from just pressing the START and STOP buttons, you can trigger a sequencer or clock directly from the recorder, or vice versa.

Record away! And go to town… when you’re done, you’ll get a stereo WAV file, or a 4- or 8-track WAV file. Yes, that’s one file with all the tracks. So about that…

Splitting up the multitrack file

This module produces a single, multichannel WAV file. Some software will know what to do with that. Reaper, for instance, has excellent multichannel support throughout, so you can just drag and drop into it. Adobe’s Audition CS also opens these files, but it can’t quickly export all the stems.

Software like Ableton Live, meanwhile, will just throw up an error if you try to open the file. (Bad Ableton! No!)

It’s useful to have individual stems anyway. ffmpeg is an insanely powerful cross-platform tool capable of doing all kinds of things with media. It’s completely free and open source, it runs on every platform, and it’s fast and deep. (It converts! It streams! It records!)

Installing is easier than it used to be, thanks to a cleaned-up site and pre-built binaries for Mac and Windows (plus of course the usual easy Linux installs):

https://ffmpeg.org/

Unfortunately, it’s so deep and powerful, it can also be confusing to figure out how to do something. Case in point – this audio channel manipulation wiki page.

In this case, you can use the -map_channel option to make this happen. So for eight channels, I do this:

ffmpeg -i input.wav -map_channel 0.0.0 0.wav -map_channel 0.0.1 1.wav -map_channel 0.0.2 2.wav -map_channel 0.0.3 3.wav -map_channel 0.0.4 4.wav -map_channel 0.0.5 5.wav -map_channel 0.0.6 6.wav -map_channel 0.0.7 7.wav

But because this is a command line tool, you could create some powerful automated workflows for your modular outputs now that you know this technique.
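
For example, here’s a hedged little sketch of that kind of automation – a Python wrapper that asks ffprobe how many channels the file has, then builds the same -map_channel arguments shown above for each one. It assumes ffmpeg and ffprobe are on your PATH and that the WAV has a single audio stream; the filenames are just examples:

# Split every channel of a multichannel WAV into its own mono file via ffmpeg.
# Assumes ffmpeg/ffprobe are installed and the file has one audio stream.
import subprocess
import sys

def split_channels(infile):
    # Ask ffprobe for the channel count of the (single) audio stream
    channels = int(subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "stream=channels",
        "-of", "csv=p=0", infile
    ]).strip())
    # One -map_channel per output file, exactly as in the command above
    cmd = ["ffmpeg", "-y", "-i", infile]
    for ch in range(channels):
        cmd += ["-map_channel", f"0.0.{ch}", f"{ch}.wav"]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    split_channels(sys.argv[1] if len(sys.argv) > 1 else "input.wav")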

Sound Devices, the folks who make excellent multichannel recorders, also have a free Mac and Windows tool called Wave Agent which handles this task if you want a GUI instead of the command line.

https://www.sounddevices.com/products/accessories/software/wave-agent

That’s worth keeping around, too, since it can also mix and monitor your output. (No Linux version, though.)

Record away!

Bonus tutorial here – the other thing apart from recording you’ll obviously want with VCV Rack is some hands-on control. Here’s a nice tutorial this week on working with BeatStep Pro from Arturia (also a favorite in the hardware modular world):

I really like this way of working, in that it lets you focus on the modular environment instead of juggling tools. I actually hope we’ll see a Fundamental module for the task in the future. Rack’s modular ecosystem changes fast, so if you find other useful recorders, let us know.

https://vcvrack.com/

Previously:

Step one: How to start using VCV Rack, the free modular software

How to make the free VCV Rack modular work with Ableton Link

The post How to make a multitrack recording in VCV Rack modular, free appeared first on CDM Create Digital Music.

A free, shared visual playground in the browser: Olivia Jack talks Hydra

Reimagine pixels and color, melt your screen live into glitches and textures, and do it all for free on the Web – as you play with others. We talk to Olivia Jack about her invention, live coding visual environment Hydra.

Inspired by analog video synths and vintage image processors, Hydra is open, free, collaborative, and all runs as code in the browser. It’s the creation of US-born, Colombia-based artist Olivia Jack. Olivia joined our MusicMakers Hacklab at CTM Festival earlier this winter, where she presented her creation and its inspirations, and jumped in as a participant – spreading Hydra along the way.

Olivia’s Hydra performances are explosions of color and texture, where even the code becomes part of the aesthetic. And it’s helped take Olivia’s ideas across borders, both in the Americas and Europe. It’s part of a growing interest in the live coding scene, even as that scene enters its second or third decade (depending on how you count), but Hydra also represents an exploration of what visuals can mean and what it means for them to be shared between participants. Olivia has rooted those concepts in the legacy of cybernetic thought.

Oh, and this isn’t just for nerd gatherings – her work has also lit up one of Bogota’s hotter queer parties. (Not that such things need be thought of as a binary, anyway, but in case you had a particular expectation about that.) And yes, that also means you might catch Olivia at a JavaScript conference; I last saw her back from making Hydra run off solar power in Hawaii.

Following her CTM appearance in Berlin, I wanted to find out more about how Olivia’s tool has evolved and its relation to DIY culture and self-fashioned tools for expression.

Olivia with Alexandra Cardenas in Madrid. Photo: Tatiana Soshenina.

CDM: Can you tell us a little about your background? Did you come from some experience in programming?

Olivia: I have been programming now for ten years. Since 2011, I’ve worked freelance — doing audiovisual installations and data visualization, interactive visuals for dance performances, teaching video games to kids, and teaching programming to art students at a university, and all of these things have involved programming.

Had you worked with any existing VJ tools before you started creating your own?

Very few; almost all of my visual experience has been through creating my own software in Processing, openFrameworks, or JavaScript rather than using existing software. I have used Resolume in one or two projects. I don’t even really know how to edit video, but I sometimes use [Adobe] After Effects. I had no intention of making software for visuals, but started an investigative process related to streaming on the internet and also trying to learn about analog video synthesis without having access to modular synth hardware.

Alexandra Cárdenas and Olivia Jack @ ICLC 2019:

In your presentation in Berlin, you walked us through some of the origins of this project. Can you share a bit about how this germinated, what some of the precursors to Hydra were and why you made them?

It’s based on an ongoing investigation of:

  • Collaboration in the creation of live visuals
  • Possibilities of peer-to-peer [P2P] technology on the web
  • Feedback loops

Precursors:

A significant moment came as I was doing a residency in Platohedro in Medellin in May of 2017. I was teaching beginning programming, but also wanted to have larger conversations about the internet and talk about some possibilities of peer-to-peer protocols. So I taught programming using p5.js (the JavaScript version of Processing). I developed a library so that the participants of the workshop could share in real-time what they were doing, and the other participants could use what they were doing as part of the visuals they were developing in their own code. I created a class/library in JavaScript called pixel parche to make this sharing possible. “Parche” is a very Colombian word in Spanish for a group of friends; this reflected the community I felt while at Platohedro, the idea of just hanging out and jamming and bouncing ideas off of each other. The tool clogged the network and I tried to cram too much information in a very short amount of time, but I learned a lot.

I was also questioning some of the metaphors we use to understand and interact with the web. “Visiting” a website is exchanging a bunch of bytes with a faraway place and routed through other far away places. Rather than think about a webpage as a “page”, “site”, or “place” that you can “go” to, what if we think about it as a flow of information where you can configure connections in realtime? I like the browser as a place to share creative ideas – anyone can load it without having to go to a gallery or install something.

And I was interested in using the idea of a modular synthesizer as a way to understand the web. Each window can receive video streams from and send video to other windows, and you can configure them in real time using WebRTC (realtime web streaming).

Here’s one of the early tests I did:

https://vimeo.com/218574728

I really liked this philosophical idea you introduced of putting yourself in a feedback loop. What does that mean to you? Did you discover any new reflections of that during our hacklab, for that matter, or in other community environments?

It’s processes of creation, not having a specific idea of where it will end up – trying something, seeing what happens, and then trying something else.

Code tries to define the world using a specific set of rules, but at the end of the day ends up chaotic. Maybe the world is chaotic. It’s important to be self-reflective.

How did you come to developing Hydra itself? I love that it has this analog synth model – and these multiple frame buffers. What was some of the inspiration?

I had no intention of creating a “tool”… I gave a workshop at the International Conference on Live Coding in December 2017 about collaborative visuals on the web, and made an editor to make the workshop easier. Then afterwards people kept using it.

I didn’t think too much about the name but [had in mind] something about multiplicity. Hydra organisms have no central nervous system; their nervous system is distributed. There’s no hierarchy of one thing controlling everything else, but rather interconnections between pieces.

Ed.: Okay, Olivia asked me to look this up and – wow, check out nerve nets. There’s nothing like a head, let alone a central brain. Instead, the aquatic creatures in the genus Hydra have sensing and neurons essentially as one interconnected network, with cells that detect light and touch forming a distributed sensory awareness.

Most graphics abstractions are based on the idea of a 2d canvas or 3d rendering, but the computer graphics card actually knows nothing about this; it’s just concerned with pixel colors. I wanted to make it easy to play with the idea of routing and transforming a signal rather than drawing on a canvas or creating a 3d scene.

This also contrasts with directly programming a shader (one of the other common ways that people make visuals using live coding), where you generally only have access to one frame buffer for rendering things to. In Hydra, you have multiple frame buffers that you can dynamically route and feed into each other.

MusicMakers Hacklab in Berlin. Photo: Malitzin Cortes.

Livecoding is of course what a lot of people focus on in your work. But what’s the significance of code as the interface here? How important is it that it’s functional coding?

It’s inspired by [Alex McLean’s sound/music pattern environment] TidalCycles — the idea of taking a simple concept and working from there. In Tidal, the base element is a pattern in time, and everything is a transformation of that pattern. In Hydra, the base element is a transformation from coordinates to color. All of the other functions either transform coordinates or transform colors. This directly corresponds to how fragment shaders and low-level graphics programming work — the GPU runs a program simultaneously on each pixel, and that receives the coordinates of that pixel and outputs a single color.
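
To make that concrete outside of any particular tool, here’s a toy illustration in Python – emphatically not Hydra code, with numpy and Pillow assumed – of the same idea: an image is just a function from pixel coordinates to a color, evaluated at every pixel, which is essentially what a fragment shader does on the GPU.

# Toy illustration (not Hydra): an image as a function from coordinates to color,
# evaluated for every pixel - the same model a fragment shader uses on the GPU.
# Assumes numpy and Pillow are installed.
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 320, 240

def pixel_color(x, y):
    # x and y are normalized 0..1 coordinates; return an RGB color per pixel
    r = 0.5 + 0.5 * np.sin(x * 40.0)        # an "oscillator" across x
    g = 0.5 + 0.5 * np.sin((x + y) * 20.0)  # diagonal stripes
    b = y                                   # plain vertical gradient
    return np.stack([r, g, b], axis=-1)

ys, xs = np.mgrid[0:HEIGHT, 0:WIDTH]        # every pixel coordinate at once
frame = pixel_color(xs / WIDTH, ys / HEIGHT)
Image.fromarray((frame * 255).astype(np.uint8)).save("frame.png")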

I think immutability in functional (and declarative) coding paradigms is helpful in live coding; you don’t have to worry about mentally keeping track of a variable and what its value is or the ways you’ve changed it leading up to this moment. Functional paradigms are really helpful in describing analog synthesis – each module is a function that always does the same thing when it receives the same input. (Parameters are like knobs.) I’m very inspired by the modular idea of defining the pieces to maximize the amount that they can be rearranged with each other. The code describes the composition of those functions with each other. The main logic is functional, but things like setting up external sources from a webcam or live stream are not at all; JavaScript allows mixing these things as needed. I’m not super opinionated about it, just interested in the ways that the code is legible and makes it easy to describe what is happening.
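
And here’s a loose sketch of that “modules as pure functions” idea in plain Python – not Hydra’s actual API, the names just echo the style: each “module” takes its parameters (the knobs) and returns a function that transforms either coordinates or colors, and a patch is just the composition of those functions.

# Illustrative only - not Hydra's API. Each "module" is a pure function factory:
# parameters are the knobs, and patching is function composition.
import math

def rotate(angle):
    # coordinate transform: rotate (x, y) by a fixed angle
    def f(x, y):
        return (x * math.cos(angle) - y * math.sin(angle),
                x * math.sin(angle) + y * math.cos(angle))
    return f

def osc(freq):
    # source: coordinates in, grayscale value out
    return lambda x, y: 0.5 + 0.5 * math.sin(freq * x)

def invert(source):
    # color transform: wrap a source and flip its output
    return lambda x, y: 1.0 - source(x, y)

# The "patch": rotate the coordinate space, run it through an oscillator, invert the result.
coords = rotate(0.25)
patch = invert(lambda x, y: osc(40.0)(*coords(x, y)))

print(patch(0.5, 0.5))  # pure functions: the same input always gives the same output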

What’s the experience you have of the code being onscreen? Are some people actually reading it / learning from it? I mean, in your work it also seems like a texture.

I am interested in it being somewhat understandable even if you don’t know what it is doing or that much about coding.

Code is often a visual element in a live coding performance, but I am not always sure how to integrate it in a way that feels intentional. I like using my screen itself as a video texture within the visuals, because then everything I do — like highlighting, scrolling, moving the mouse, or changing the size of the text — becomes part of the performance. It is really fun! Recently I learned about prepared desktop performances and related to the live-coding mantra of “show your screens,” I like the idea that everything I’m doing is a part of the performance. And that’s also why I directly mirror the screen from my laptop to the projector. You can contrast that to just seeing the output of an AV set, and having no idea how it was created or what the performer is doing. I don’t think it’s necessary all the time, but it feels like using the computer as an instrument and exploring different ways that it is an interface.

The algorave thing is now getting a lot of attention, but you’re taking this tool into other contexts. Can you talk about some of the other parties you’ve played in Colombia, or when you turned the live code display off?

Most of my inspiration and references for what I’ve been researching and creating have been outside of live coding — analog video synthesis, net art, graphics programming, peer-to-peer technology.

Having just said I like showing the screen, I think it can sometimes be distracting and isn’t always necessary. I did visuals for Putivuelta, a queer collective and party focused on diasporic Latin club music and wanted to just focus on the visuals. Also I am just getting started with this and I like to experiment each time; I usually develop a new function or try something new every time I do visuals.

Community is such an interesting element of this whole scene. So I know with Hydra so far there haven’t been a lot of outside contributions to the codebase – though this is a typical experience of open source projects. But how has it been significant to your work to both use this as an artist, and teach and spread the tool? And what does it mean to do that in this larger livecoding scene?

I’m interested in how technical details of Hydra foster community — as soon as you log in, you see something that someone has made. It’s easy to share via a Twitter bot, to see and edit live the code of what someone else has made, and to make your own. It acts as a gallery of shareable things that people have made:

https://twitter.com/hydra_patterns

Although I’ve developed this tool, I’m still learning how to use it myself. Seeing how other people use it has also helped me learn how to use it.

I’m inspired by work that Alex McLean and Alexandra Cardenas and many others in live coding have done on this — just the idea that you’re showing your screen and sharing your code with other people to me opens a conversation about what is going on, that as a community we learn and share knowledge about what we are doing. Also I like online communities such as talk.lurk.org and streaming events where you can participate no matter where you are.

I’m also really amazed at how this is spreading through Latin America. Do you feel like there’s some reason the region has been so fertile with these tools?

It’s definitely influenced me rather than the other way around, getting to know Alexandra [Cardenas’] work, Esteban [Betancur, author of live coding visual environment Cine Vivo], rggtrn, and Mexican live coders.

Madrid performance. Photo: Tatiana Soshenina.

What has the scene been like there for you – especially now living in Bogota, having grown up in California?

I think people are more critical about technology and so that makes the art involving technology more interesting to me. (I grew up in San Francisco.) I’m impressed by the amount of interest in art and technology spaces such as Plataforma Bogota that provide funding and opportunities at the intersection of art, science, and technology.

The press lately has fixated on live coding or algorave but maybe not seen connections to other open source / DIY / shared music technologies. But – maybe now especially after the hacklab – do you see some potential there to make other connections?

To me it is all really related, about creating and hacking your own tools, learning, and sharing knowledge with other people.

Oh, and lastly – want to tell us a little about where Hydra itself is at now, and what comes next?

Right now, it’s improving documentation and making it easier for others to contribute.

Personally, I’m interested in performing more and developing my own performance process.

Thanks, Olivia!

Check out Hydra for yourself, right now:

https://hydra-editor.glitch.me/

Previously:

Inside the livecoding algorave movement, and what it says about music

Magical 3D visuals, patched together with wires in browser: Cables.gl

The post A free, shared visual playground in the browser: Olivia Jack talks Hydra appeared first on CDM Create Digital Music.

VCV Rack nears 1.0, new features, as software modular matures

VCV Rack, the open source platform for software modular, keeps blossoming. If what you were waiting for was more maturity and stability and integration, the current pipeline looks promising. Here’s a breakdown.

Even with other software modulars on the scene, Rack stands out. Its model is unique – build a free, open source platform, and then build the business on adding commercial modules, supporting both the platform maker (VCV) and third parties (the module makers). That has opened up some new possibilities: a mixed module ecosystem of free and paid stuff, support for ports of open source hardware to software (Music Thing Modular, Mutable Instruments), robust Linux support (which other Eurorack-emulation tools currently lack), and a particular community ethos.

Of course, the trade-off with Rack 0.xx is that the software has been fairly experimental. Versions 1.0 and 2.0 are now in the pipeline, though, and they promise a more refined interface, greater performance, a more stable roadmap, and more integration with conventional DAWs.

New for end users

VCV founder and lead developer Andrew Belt has been teasing out what’s coming in 1.0 (and 2.0) online.

Here’s an overview:

  • Polyphony, polyphonic cables, polyphonic MIDI support and MPE
  • Multithreading and hardware acceleration
  • Tooltips, manual data entry, and right-click menus with more information on modules
  • Virtual CV to MIDI and direct MIDI mapping
  • 2.0 version coming with fully-integrated DAW plug-in

More on that:

Polyphony and polyphonic cables. The big one – you can now use polyphonic modules and even polyphonic patching. Here’s an explanation:

https://community.vcvrack.com/t/how-polyphonic-cables-will-work-in-rack-v1/

New modules will help you manage this.

Polyphonic MIDI and MPE. Yep, native MPE support. We’ve seen this in some competing platforms, so great to see here.

Multithreading. Rack will now use multiple cores on your CPU more efficiently. There’s also a new DSP framework that adds CPU acceleration (which helps efficiency for polyphony, for example). (See the developer section below.)

Oversampling for better audio quality. Users can choose higher oversampling settings in the engine to reduce aliasing.

Tooltips and manual value entry. Get more feedback from the UI and precise control. You can also right-click to open other stuff – links to the developer’s website, manual (yes!), source code (for those that have it readily available), or factory presets.

Core CV-MIDI. Send virtual CV to outboard gear as MIDI CC, gate, note data. This also integrates with the new polyphonic features. But even better –

Map MIDI directly. The MIDI map module lets you map parameters without having to patch through another module. A lot of software has been pretty literal with the modular metaphor, so this is a welcome change.

And that’s just what’s been announced. 1.0 is imminent, in the coming months, but 2.0 is coming, as well…

Rack 2.0 and VCV for DAWs. After 1.0, 2.0 isn’t far behind. “Shortly after” 2.0 is released, a DAW plug-in will be launched as a paid add-on, with support for “multiple instances, DAW automation with parameter labels, offline rendering, MIDI input, DAW transport, and multi-channel audio.”

These plans aren’t totally set yet, but a price around a hundred bucks and multiple ins and outs are also planned. (Multiple I/O also means some interesting integrations will be possible with Eurorack or other analog systems, for software/hardware hybrids.)

VCV Bridge is already deprecated, and will be removed from Rack 2.0. Bridge was effectively a stopgap for allowing crude audio and MIDI integration with DAWs. The planned plug-in sounds more like what users want.

Rack 2.0 itself will still be free and open source software, under the same license. The good thing about the plug-in is, it’s another way to support VCV’s work and pay the bills for the developer.

New for developers

Rack v1 is under a BSD license – proper free and open source software. There’s even a mission statement that deals with this.

Rack v1 will bring a new, stabilized API – meaning you will need to do some work to port your modules. It’s not a difficult process, though – and I think part of Rack’s appeal is the friendly API and SDK from VCV.

https://vcvrack.com/manual/Migrate1.html

You’ll also be able to use an SSE wrapper (simd.hpp) to take advantage of accelerated code on desktop CPUs, without hard coding manual calls to hardware that could break your plug-ins in the future. This also theoretically opens up future support for other platforms – like NEON or AVX acceleration. (It does seem like ARM platforms are the future, after all.)

Plus check this port for adding polyphony to your stuff.

And in other Rack news…

Also worth mentioning:

While the Facebook group is still active and a place where a lot of people share work, there’s a new dedicated forum. That does things Facebook doesn’t allow, like efficient search, structured sections in chronological order so it’s easy to find answers, and generally not being part of a giant, evil, destructive platform.

https://community.vcvrack.com/

It’s powered by open source forum software Discourse.

For a bunch of newly free add-ons, check out the wonderful XFX stuff (I paid for at least one of these, and would do so again if they add more commercial stuff):

http://blamsoft.com/vcv-rack/

Vult is a favorite of mine, and there’s a great review this week, with 79 demo patches too:

There’s also a new version of Mutable Instruments Tides, Tidal Modular 2, available in the Audible Instruments Preview add-on – and 80% of your money goes to charity.

https://vcvrack.com/AudibleInstruments.html#preview

And oh yeah, remember that in the fall Rack already added support for hosting VST plugins, with VST Host. It will even work inside the forthcoming plugin, so you can host plugins inside a plugin.

https://vcvrack.com/Host.html

Here it is with the awesome d16 stuff, another of my addictions:

Great stuff. I’m looking forward to some quality patching time.

http://vcvrack.com/

The post VCV Rack nears 1.0, new features, as software modular matures appeared first on CDM Create Digital Music.

Build your own scratch DJ controller

If DJing originated in the creative misuse and appropriation of hardware, perhaps the next wave will come from DIYers inventing new approaches. No need to wait, anyway – you can try building this scratch controller yourself.

DJWORX has done some great ongoing coverage of Andy Tait aka Rasteri. You can read a complete overview of Andy’s SC1000, a Raspberry Pi-based project with a metal touch platter:

Step aside portablism — the tiny SC1000 is here

In turn, there’s also that project’s cousin, the 7″ Portable Scratcher aka 7PS.

If you’re wondering what portablism is, that’s DJs carrying portable record players around. But maybe more to the point, if you can invent new gear that fits in a DJ booth, you can experiment with DJing in new ways. (Think how much current technique is really circumscribed by the feature set of CDJs, turntables, and fairly identical DJ software.)

Or to look at it another way, you can really treat the DJ device as a musical instrument – one you can still carry around easily.

The SC1000 in Rasteri’s capable hands is exciting just to behold:

Everything you need to build this yourself – or to discover the basis for other ideas – is up on GitHub:

https://github.com/rasteri/SC1000/

This is not a beginner project. But it’s not overwhelmingly complicated, either. Basically…

Ingredients:
Custom PCB
System-on-module (the brains of the operation)
SD card
Enclosure
Jog wheel with metal capacitive touch surface and magnet
Mini fader

Free software powers the actual DJing. (It’s based on xwax, open source Linux digital vinyl emulation, which we’ve seen as the basis of other DIY projects.)

Process:

You need to assemble the main PCB – there’s your soldering iron action.

And you’ll flash the firmware (which requires a PIC programmer), plus transfer the OS to SD card.

Assembly of the jog wheel and enclosure requires a little drilling and gluing.

Other than that it’s a matter of testing and connection.

Build tutorial:

Full open source under a GPLv2 license. (Andy sort of left out the hardware license – this really illustrates that GNU needs a license that blankets both hardware and software, though that’s complex legally. There’s no copyright information on the hardware; to be fully open it needs something like a Creative Commons license on those elements of the designs. But that’s not a big deal.)

It looks really fantastic. I definitely want to try building one of these in Berlin – will team up and let you know how it goes.

This clearly isn’t for everyone. But the reason I mention going to custom hardware is that it means you can both adapt your own technique to a particular instrument and modify the way the digital DJ tool responds if you so choose. It may take some time before we see that bear fruit, but it definitely holds some potential.

Via:
Rasteri’s SC1000 scratch controller — build your own today [thanks to Mark Settle over at DJWORX!]

Project page:
https://github.com/rasteri/SC1000/

Thanks, Dubby Labby!

The post Build your own scratch DJ controller appeared first on CDM Create Digital Music.

TidalCycles, free live coding environment for music, turns 1.0

Live coding environments are free, run on the cheapest hardware as well as the latest laptops, and offer new ways of thinking about music and sound that are leading a global movement. And one of the leading tools of that movement just hit a big milestone.

This isn’t just about a nerdy way of making music. TidalCycles is free, and tribes of people form around using it. Just as important as how impressive the tool may be, the results are spectacular and varied.

There are some people who take on live coding as their primary instrument – some who haven’t had experience using even computers or electronic music production tools before, let alone whole coding environments. But I think they’re worth a look even if you don’t envision yourself projecting code onstage as you type live. TidalCycles in particular had its origins not in computer science, but in creator Alex McLean’s research into rhythm and cycle. It’s a way of experiencing a musical idea as much as it is a particular tool.

TidalCycles has been one of the more popular tools, because it’s really easy to learn and musical. The one downside is a slightly convoluted install process, since it’s built on SuperCollider, as opposed to tools that now run in a Web browser. On the other hand, the payoff for that added work is you’ll never outgrow TidalCycles itself – because you can move to SuperCollider’s wider range of tools if you choose.

New in version 1.0 is a whole bunch of architectural improvements that really make the environment feel mature. And there’s one major addition: controller input means you can play TidalCycles like an instrument, even without coding as you perform:
New functions
Updated innards
New ways of combining patterns
Input from live controllers
The ability to set tempo with patterns

Maybe just as important as the plumbing improvements, you also get expanded documentation and an all-new website.

Check out the full list of changes:

https://tidalcycles.org/index.php/Changes_in_Tidal_1.0.0

You’ll need to update some of your code as there’s been some renaming and so on.

But the ability to input OSC and MIDI is especially cool, not least because you can now “play” all the musical, rhythmic stuff TidalCycles does with patterns.

There’s enough musicality and sonic power in TidalCycles that it’s easy to imagine some people will take advantage of the live coding feedback as they create a patch, but play more in a conventional sense with controllers. I’ll be honest; I couldn’t quite wrap my head around typing code as the performance element in front of an audience. And that makes some sense; some people who aren’t comfortable playing actually find themselves more comfortable coding – and those people aren’t always programmers. Sometimes they’re non-programmers who find this an easier way to express themselves musically. Now, you can choose, or even combine the two approaches.

Also worth saying – TidalCycles has happened partly because of community contributions, but it’s also the work primarily of Alex himself. You can keep him doing this by “sending a coffee” – TidalCycles works on the old donationware model, even as the code itself is licensed free and open source. Do that here:

http://ko-fi.com/yaxulive#

While we’ve got your attention, let’s look at what you can actually do with TidalCycles. Here’s our friend Miri Kat with her new single out this week, the sounds developed in that environment. It’s an ethereal, organic trip (the single is also on Bandcamp):

We put out Miri’s album Pursuit last year, not really having anything to do with it being made in a livecoding environment so much as I was in love with the music – and a lot of listeners responded the same way:

For an extended live set, here’s Alex himself playing in November in Tokyo:

And Alexandra Cardenas, one of the more active members of the TidalCycles scene, played what looked like a mind-blowing set in Bogota recently. On visuals is Olivia Jack, who created vibrant, eye-searing goodness in the live coding visual environment of her own invention, Hydra. (Hydra works in the browser, so you can try it right now.)

Unfortunately there are only clips – you had to be there – but here’s a taste of what we’re all missing out on:

See also the longer history of Tidal

It’ll be great to see where people go next. If you haven’t tried it yet, you can dive in now:

https://tidalcycles.org/

Image at top: Alex, performing as part of our workshop/party Encoded in Berlin in June.

The post TidalCycles, free live coding environment for music, turns 1.0 appeared first on CDM Create Digital Music.

You can now add VST support to VCV Rack, the virtual modular

VCV Rack is already a powerful, free modular platform that synth and modular fans will want. But a $30 add-on makes it more powerful when integrating with your current hardware and software – VST plug-in support.

Watch:

It’s called Host, and for $30, it adds full support for VST2 instruments and effects, including the ability to route control, gate, audio, and MIDI to the appropriate places. This is a big deal, because it means you can integrate VST plug-ins with your virtual modular environment, for additional software instruments and effects. And it also means you can work with hardware more easily, because you can add in VST MIDI controller plug-ins. For instance, without our urging, someone just made a MIDI controller plug-in for our own MeeBlip hardware synth (currently not in stock, new hardware coming soon).

You already are able to integrate VCV’s virtual modular with hardware modular using audio and a compatible audio interface (one with DC coupling, like the MOTU range). Now you can also easily integrate outboard MIDI hardware, without having to manually select CC numbers and so on as previously.

Hell, you could go totally crazy and run Softube Modular inside VCV Rack. (Yo dawg, I heard you like modular, so I put a modular inside your modular so you can modulate the modular modular modules. Uh… kids, ask your parents who Xzibit was? Or what MTV was, even?)

What you need to know

Is this part of the free VCV Rack? No. Rack itself is free, but you have to buy “Host” as a US$30 add-on. Still, that means the modular environment and a whole bunch of amazing modules are totally free, so that thirty bucks is pretty easy to swallow!

What plug-ins will work? Plug-ins need to be 64-bit, they need to be VST 2.x (that’s most plugs, but not some recent VST3-only models), and you can run on Windows and Mac.

What can you route? Modular is no fun without patching! So here we go:

There’s Host for instruments – 1v/octave CV for controlling pitch, and gate input for controlling note events. (Forget MIDI and start thinking in voltages for a second here: VCV notes that “When the gate voltages rises, a MIDI note is triggered according to the current 1V/oct signal, rounded to the nearest note. This note is held until the gate falls to 0V.”)

Right now there’s only monophonic input. But you do also get easy access to note velocity and pitch wheel mappings.
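
To make the “think in voltages” idea concrete, here’s a rough sketch of the behavior that quote describes – assuming Rack’s usual convention that 0V on the 1V/oct input corresponds to C4 (MIDI note 60), which you should double-check against the docs; the gate threshold here is made up for illustration:

# Hedged sketch of gate + 1V/oct to MIDI note conversion as described above.
# The 0V = C4 (MIDI 60) reference and the gate threshold are assumptions.
GATE_THRESHOLD = 1.0  # volts - hypothetical "gate has risen" level

def volts_to_midi_note(cv_volts, reference_note=60):
    # 1 volt per octave = 12 semitones per volt, rounded to the nearest note
    return int(round(reference_note + 12.0 * cv_volts))

class GateToNote:
    # Trigger a note when the gate rises; hold it until the gate falls to 0V.
    def __init__(self):
        self.gate_high = False
        self.held_note = None

    def process(self, gate_volts, cv_volts):
        events = []
        if not self.gate_high and gate_volts >= GATE_THRESHOLD:
            self.gate_high = True
            self.held_note = volts_to_midi_note(cv_volts)
            events.append(("note_on", self.held_note))
        elif self.gate_high and gate_volts <= 0.0:
            self.gate_high = False
            events.append(("note_off", self.held_note))
            self.held_note = None
        return events

g = GateToNote()
print(g.process(5.0, 0.25))  # gate rises, CV at +0.25V -> note_on 63 (three semitones above C4)
print(g.process(0.0, 0.25))  # gate falls to 0V -> note_off 63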

Host-FX handles effects, pedals, and processors. Input stereo audio (or mono mapped to stereo), get stereo output. It doesn’t sound like multichannel plug-ins are supported yet.

Both Host and Host-FX let you choose plug-in parameters and map them to CV – just be careful mapping fast modulation signals, as plug-ins aren’t normally built for audio-rate modulation. (We’ll have to play with this and report back on some approaches.)

Will I need a fast computer? Not for MIDI integration, no. But I find the happiness level of VCV Rack – like a lot of recent synth and modular efforts – is directly proportional to people having fast CPUs. (The Windows platform has some affordable options there if Apple is too rich for your blood.)

What platforms? Mac and Windows, it seems. VCV also supports Linux, but there your best bet is probably to add the optional installation of JACK, and … this is really the subject for a different article.

How to record your work

I actually was just pondering this. I’ve been using ReaRoute with Reaper to record VCV Rack on Windows, which for me was the most stable option. But it also makes sense to have a recorder inside the modular environment.

Our friend Chaircrusher recommends the NYSTHI modules for VCV Rack. It’s a huge collection but there’s both a 2-channel and 4-/8-track recorder in there, among many others – see pic:

NYSTHI modules for VCV Rack (free):
https://vcvrack.com/plugins.html#nysthi
https://github.com/nysthi/nysthi/blob/master/README.md

And have fun with the latest Rack updates.

Just remember when adding Host, plug-ins inside a host can cause… stability issues.

But it’s definitely a good excuse to crack open VCV Rack again! And also nice to have this when traveling… a modular studio in your hotel room, without needing a carry-on allowance. Or hide from your family over the holiday and make modular patches. Whatever.

https://vcvrack.com/Host.html

The post You can now add VST support to VCV Rack, the virtual modular appeared first on CDM Create Digital Music.

The guts of Tracktion are now open source for devs to make new stuff

Game developers have Unreal Engine and Unity Engine. Well, now it’s audio’s turn. Tracktion Engine is an open source engine based on the guts of a major DAW, but created as a building block developers can use for all sorts of new music and audio tools.

You can build new music apps not only for Windows, Mac, and Linux (including embedded platforms like Raspberry Pi), but for iOS and Android, too. And while developers might go create their own DAW, they might also build other creative tools for performance and production.

The tutorials section already includes examples for simple playback, independent manipulation of pitch and time (meaning you could conceivably turn this into your own DJ deck), and a step sequencer.

We’ve had an open source DAW for years – Ardour. But this is something different – it’s clear the developers have created this with the intention of producing a reusable engine for other things, rather than just dumping the whole codebase for an entire DAW.

Okay, my Unreal and Unity examples are a little optimistic – those are friendly to hobbyists and first-time game designers. Tracktion Engine definitely needs you to be a competent C++ programmer.

But the entire engine is delivered as a JUCE module, meaning you can drop it into an existing project. JUCE has rapidly become the go-to for reasonably painless C++ development of audio tools across plug-ins and operating systems and mobile devices. It’s huge that this is available in JUCE.

Even if you’re not a developer, you should still care about this news. It could be a sign that we’ll see more rapid development that allows music loving developers to try out new ideas, both in software and in hardware with JUCE-powered software under the hood. And I think with this idea out there, if it doesn’t deliver, it may spur someone else to try the same notion.

I’ll be really interested to hear if developers find this is practical in use, but here’s what they’re promising developers will be able to use from their engine:

A wide range of supported platforms (Windows, macOS, Linux, Raspberry Pi, iOS and Android)
Tempo, key and time-signature curves
Fast audio file playback via memory mapping
Audio editing including time-stretching and pitch shifting
MIDI with quantisation, groove, MPE and pattern generation
Built-in and external plugin support for all the major formats
Parameter adjustments with automation curves or algorithmic modifiers
Modular plugin patching Racks
Recording with punch, overdub and loop modes along with comp editing
External control surface support
Fully customizable rendering of arrangements

The licensing is also stunningly generous. The code is under a GPLv3 license – meaning if you’re making a GPLv3 project (including artists doing that), you can freely use the open source license.

But even commercial licensing is wide open. Educational projects get forum support and have no revenue limit whatsoever. (I hope that’s a cue to academic institutions to open up some of their licensing, too.)

Personal projects are free, too, with revenue up to US$50k. (Not to burst anyone’s bubble, but many small developers are below that threshold.)

For $35/mo, with a minimum 12 month commitment, “indie” developers can make up to $200k. Enterprise licensing requires getting in touch, and then offers premium support and the ability to remove branding. They promise paid licenses by next month.

Check out their code and the Tracktion Engine page:

https://www.tracktion.com/develop/tracktion-engine

https://github.com/Tracktion/tracktion_engine/

I think a lot of people will be excited about this, enough so that … well, it’s been a long time. Let’s Ballmer this.

The post The guts of Tracktion are now open source for devs to make new stuff appeared first on CDM Create Digital Music.

Eerie, amazing sounds from tape loops, patches – like whales in space

Fahmi Mursyid from Indonesia has been creating oceans of wondrously sculpted sounds on netlabels for the past years. Be sure to watch these magical constructions on nothing but Walkman tape loops with effects pedals and VCV Rack patches – immense sonic drones from minimal materials.

Fahmi hails from Bandung, in West Java, Indonesia. While places like Yogyakarta have hogged the attention traditionally (back even to pre-colonial gamelan kingdom heydays), it seems like Bandung has quietly become a haven for experimentalists.

He also makes gorgeous artworks and photography, which I’ve added here to visualize his work further. Via:

http://ideologikal.weebly.com/

This dude and his friends are absurdly prolific. But you can be ambitious and snap up the whole discography for about twelve bucks on Bandcamp. It’s all quality stuff, so you could load it up on a USB key and have music when you’re away from the Internet ranging from glitchy edges to gorgeous ambient chill.

Watching the YouTube videos gives you a feeling for the materiality of what you’re hearing – a kind of visual kinetic picture to go with the sound sculpture. Here are some favorites of mine:

Via Bandcamp, he’s just shared this modded Walkman looping away. DSP, plug-in makers: here’s some serious nonlinearity to inspire you. Trippy, whalesong-in-wormhole stuff:

The quote added to YouTube from Steve Reich fits:

“[I do not mean] the process of composition but rather pieces of music that are, literally, processes. The distinctive thing about musical processes is that they determine all the note-to-note (sound-to-sound) details and the overall form simultaneously. (Think of a round or infinite canon.)”

He’s been gradually building a technique around tapes.

But there’s an analog to this kind of physical process in working virtually with unexpected, partially unstable modular creations. Working with the free and open source software modular platform VCV Rack, he’s created some wild ambient constructions:

Or the two together:

Eno and Reich pepper the cultural references, but there are aesthetic cues from Indonesia, too, I think (and no reason not to tear down those colonial divisions between the two spheres). Here’s a reinterpretation of Balinese culture of the 1940s, which gives you some texture of that background and also his own aesthetic slant on the music of his native country:

Check out the releases, too. These can get angular and percussive:

— or become expansive soundscapes, as here in collaboration with Sofia Gozali:

— or become deep, physical journeys, as with Jazlyn Melody (really love this one):

Here’s a wonderful live performance:

I got hooked on Fahmi’s music before, and … honestly, far from playing favorites, I find I keep accidentally running across it through aliases and different links and enjoying it over and over again. (While I was just in Indonesia for Nusasonic, it wasn’t the trip that made me discover the music – it was the work of musicians like Fahmi that was the reason we all found ourselves on the other side of the world in the first place, to be more accurate. They discovered new sounds, and us.) So previously:

The vaporwave Windows 98 startup sound remix no one asked for

http://ideologikal.weebly.com/

https://ideologikal.bandcamp.com/

The post Eerie, amazing sounds from tape loops, patches – like whales in space appeared first on CDM Create Digital Music.

Deep Synth combines a Game Boy and the THX sound

Do you love the THX Deep Note sound – that crazy sweep of timbres heard at the beginning of films? Do you wish you had it in a playable synth the size of a calculator? Deep Synth is for you.

First, Deep Note? Just to refresh your memory: (Turn it up!!)

Yeah, that.

Apart from being an all-time great in sound design, the Deep Note’s underlying synthesis approach was novel and interesting. And thanks to the power of new embedded processors, it’s totally possible to squeeze this onto a calculator.

Enter Eugene, Oregon-based professional developer Kernel Bob aka kbob. A low-level Linux coder by day, Bob got interested in making an audio demo for the 1Bitsy-1UP game console, a powerful modern embedded machine with the form factor of a classic Game Boy. (Unlike a Game Boy, you have a decent processor, color screen, USB, and SD card.)

The Deep Note is the mother of all audio demos. That sound is owned by THX, but the basic synthesis approach is not – think 32 voices drifting from a relatively random swarm into the seat rocking final chord.
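
If you want to play with the general idea yourself before diving into Bob’s write-up, here’s a rough, hedged sketch in Python – emphatically not THX’s recipe or Bob’s code – that sends a swarm of voices gliding from random frequencies into a big chord and writes the result to a WAV. numpy is assumed; the chord, timings, and crude sawtooth voice are all invented for illustration.

# Rough Deep Note-style sketch: 32 voices drift from a random swarm into a chord.
# Assumes numpy; writes a mono 16-bit WAV. Not the THX algorithm - just the idea.
import numpy as np
import wave

SR, DURATION, VOICES = 44100, 12.0, 32
t = np.linspace(0.0, DURATION, int(SR * DURATION), endpoint=False)

rng = np.random.default_rng(0)
start_freqs = rng.uniform(200.0, 400.0, VOICES)        # the random swarm
targets = 36.71 * 2.0 ** (np.arange(VOICES) % 6)       # D across six octaves (assumed chord)

# Glide each voice from its random start to its target over the first two thirds, then hold.
glide = np.clip(t / (DURATION * 2.0 / 3.0), 0.0, 1.0)
mix = np.zeros_like(t)
for start, target in zip(start_freqs, targets):
    freq = start * (target / start) ** glide            # exponential frequency sweep
    phase = 2.0 * np.pi * np.cumsum(freq) / SR           # integrate frequency to get phase
    mix += ((phase / np.pi) % 2.0) - 1.0                 # naive sawtooth voice (aliases a bit)

mix *= np.hanning(len(mix))                              # simple fade in/out envelope
mix /= np.max(np.abs(mix))

with wave.open("deep_note_sketch.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes((mix * 32767).astype(np.int16).tobytes())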

The results? Oh, only the most insane synthesizer of the year:

Whether you’re an engineer or not, the behind the scenes discussion of how this was done is fascinating to anyone who loves synthesis. (Maybe you can enlighten Bob on this whole bit about the sawtooth oscillator in SuperCollider.)

Read the multi-part series on Deep Synth and sound on this handheld platform:

Deep Synth: Introduction

And to try messing about with Deep Note-style synthesis on your own in the free, multi-platform coding for musicians environment SuperCollider:

Recreating the THX Deep Note [earslap]

All of this is open hardware, open code, so if you are a coder, it might inspire your own projects. And meanwhile, as 1Bitsy-1UP matures, we may soon all have a cool handheld platform for our noisemaking endeavors. I can’t wait.

Thanks to Samantha Lüber for the tip!

Previously:

THX Just Remade the Deep Note Sound to be More Awesome

And we got to interview the sound’s creator (and talk to him about how he recreated it):

Q+A: How the THX Deep Note Creator Remade His Iconic Sound

The post Deep Synth combines a Game Boy and the THX sound appeared first on CDM Create Digital Music.