One big, open standalone grid for playing everything: dadamachines composer pro

Various devices have tried to do what the computer does – letting you play, sequence, and clock other instruments, and arrange and recall ideas. Now, a new grid is in town, and it’s bigger, more capable, truly standalone, and open in every way.

composer pro makes its debut today at Superbooth. It comes from what may seem an unexpected source – dadamachines, the small Berlin-based maker known for making a plug-and-play toolkit for robotic percussion and, more recently, a clever developer board. But there’s serious engineering and musical experience informing the project.

What you get is an enormous color grid, plus additional triggers, a display, and connectivity – wired and wireless – to other hardware. From this one device, you can then compose, connect, and perform. It’s a sequencer for outboard gear, but it’s also capable of playing internal sounds and effects.

It’s a MIDI router, a USB host, a sampler and standalone instrument, and a hub to clock everything else. It doesn’t need a computer – and yeah, it can definitely replace that laptop if you want, or keep it connected and synced via cable or Ableton Link.

And one more thing – while big manufacturers are starting to wake up to this sort of thing being a product category, composer pro is also open source and oriented toward communities of small makers and patchers who have been working on this problem. So out of the box, it’s set up to play Pure Data, SuperCollider, and other DIY instruments and effects, extending ideas for standalone instruments and effects developed by the likes of monome and Critter & Guitari’s Organelle. That should be significant whether you’re that sort of builder/hacker/patcher yourself, or you just want to benefit from their creations in your own music. And it’s in contrast to the proprietary direction most hardware has gone in recent years. It’s open to ideas and to working together on how to play – which is how grid performance got started in the first place.

Disclosure: I’m working with dadamachines as an advisor/coach. That also means I’ll be responsible for getting feedback to them – curious what you think. (And yeah, I also have some ideas and desires for where these sorts of capabilities could lead in the future. As a lot of you have, I’ve dreamt of electronic musical performance tools moving in this direction – I love computers but also hate some of the struggles they’ve brought with them.)

The hardware

I hope you like buttons. composer pro has a 192-pad grid – that’s 16 horizontally by 12 vertically. Add in the rest of the triggers for a grand total of 261 buttons – transport and modes on the top, and the usual scene and arm triggers on the side, plus edit controls and other functions on the left.

For continuous control, there’s a touch strip. And you get a small display and encoder so you can navigate and see what you’re doing.

There’s computational power inside, too – a Raspberry Pi Compute Module, plus additional processors that run the device itself.

Connections

You get just about every form of connectivity (apart from CV/gate, even though this is Superbooth):

Sequencing and clock:

MIDI (via standard DIN connectors, 2 in, 2 out)
DIN sync (for vintage analog gear like the Roland TR-808)
Analog sync I/O (for other analog gear and modular)
USB MIDI (via USB C, for a computer)
USB host, with a 4-port USB hub
Ableton Link (for wireless connections, including to various Mac, Windows, Linux, and iOS software)
Footswitch jack

(There’s a wifi dongle for Link support.)

Audio:

Headphone jack
Stereo audio in
Stereo audio out

The USB host port and 4-port hub are a really big deal. That means you can do the things that normally require a computer – connect other interfaces, add more audio I/O, add USB MIDI keyboards and controllers, whatever.

Sequences and songs

At its heart, composer pro focuses on sequencing – whether you want to work with custom internal instruments, external gear, or both.

You have sixteen slots, which dadamachines dubs “Machines.” Then, you can work with simple step-sequenced rhythms or mono-/polyphonic melodies, and add automation of parameters (via MIDI CC).

Pattern sequences can be up to 16 bars.

There are 12 patterns per Machine slot. (16×12 – get it?)

Patterns + Machines = larger songs. And you can have as many songs as you can fit on an SD card (which, given this is MIDI data, is … a lot).

The beauty of dadamachines’ approach is, by building this around the grid, you can work in a lot of different ways:

Step-sequence melodies and rhythms in a standard grid view.

Play live – there’s even a MIDI looper – and use standard quantization tools, or not, to decide how much you want your performance to be on the grid.

Trigger patterns one at a time, or in scenes.

Use the touchstrip for additional live control, with beat repeat functions, polyrhythmic loop length, nudge, and velocity (the pads aren’t velocity sensitive, though you can also use an external controller with velocity).

Now you see the logic behind having this enormous 16×12 grid – everything is visible at once. Most hardware, and even devices like Ableton Push, require you to navigate around to individual parts; there’s no way to see the overall sequence. You can bring up dedicated grid pages if you want to focus on playing a particular part or editing a sequence. But there’s an overview page so you also get the big picture – and trigger everything, without menu diving.

dadamachines have set up four views:

Song View – think interactive set list

Scene View – all your available Patterns and Machines

Machine View – focus on one particular instrument and input

Performance View – transform an existing pattern without changing it

And remember, this can be both external gear and internal instruments – with some nice ready-to-play instruments included in the package, or the ability to make your own (in Pd and SuperCollider) if you’re more advanced.

It’s already set up to work with ORAC, the powerful instrument by technobear featured on the Organelle from Critter & Guitari – showing what can happen as devices are open, collaborative, and compatible.

When can you get this?

composer pro is being shown in a fully working – very nice looking – prototype. That also means a chance to get more feedback from musicians.

dadamachines say they plan to put this on sale in late summer.

It’s an amazing accomplishment from an engineering standpoint, from the hands-on time I’ve had with it. I know the lack of velocity-sensitive pads will be a disappointment to some, but I think that also means you’ll be able to afford this and get hardware that’s reliable – and you can always use the touchstrip or connect other hardware for expression.

It also goes beyond what sequencers like the just-announced Pioneer Squid can do, and offers a more intuitive interface than a lot of other boutique offerings – and its openness could support a community exploring ideas. That’s what originally launched grid performance in the first place with monome, but got lost as monome quantities were limited and commercial manufacturers chose to take a proprietary approach.

Stay tuned to CDM as this evolves.

https://dadamachines.com/products/composer-pro/

Press release:

dadamachines announces grid-based MIDI performance sequencer composer pro

composer pro is the new hub for electronic musicians, a missing link for sketching ideas and playing live. It’s a standalone sampler and live instrument, and connects to everything in a studio or onstage, for clock and patterns. And it’s open source and community-powered, ensuring it’s only getting started.

Edit patterns by step, play live on the pads and touch strip, use external controllers – it’s your choice. Sequence and clock external gear, or work with onboard instruments. Clock your whole studio or stage full of gear – and sync via wires or wirelessly.

Finally, there’s a portable device that gives you the control you need, and the big picture on your ideas, while connecting the instruments you want to play. And yes, you’re free to leave the computer at home.

composer pro will be shown to the public for the first time at Superbooth in Berlin, May 9-11. Sales start is planned for late summer 2019.

Play:

Use a massive, RGB, 16×12 grid of pads
192 triggers – 261 buttons in total – but organized, clear, and easy
Step sequence or play live
Melodic and rhythmic/drum modes
MIDI looper
Work quantized or unquantized
Play on the pads or use external controllers
Touch strip for expression, live sequence transformations, note repeat, and more

Stay connected:

MIDI input/output and sync (via USB-C with computer, USB host, and MIDI DIN)
Analog sync (modular, analog gear)
DIN sync support (for vintage instruments like the TR-808)
USB host – with a built-in 4-port hub
Ableton Link support (USB wifi dongle required for wireless use)
Stereo audio in
Stereo audio out
Headphone, footswitch

Onboard sounds and room to grow:

Internal instruments and effects
Powered by open source sound engines, with internal Raspberry Pi computer core
Includes ORAC by technobear, a powerful sequenced sampler

Arrange productions and set lists:

Full automation sequencing (via MIDI CC)
Trigger patterns, scenes, songs
16-measure sequences, 12 scenes per song
Unlimited song storage (restricted only by SD card capacity)

Surge is free, deep synth for every platform, with MPE support

Surge is a deep multi-engine digital soft synth – beloved, then lost, then brought back to life as an open source project. And now it’s in a beta that’s usable and powerful and ready on every OS.

I wrote about Surge in the fall when it first hit a free, open source release:

Vember Audio owner @Kurasu made this happen. But software just “being open sourced” often leads nowhere. In this case, Surge has a robust community around it, turning this uniquely open instrument into something you can happily use as a plug-in alongside proprietary choices.

And it really is deep: stack 3 oscillators per voice, use morphable classic or FM or ring modulation or noise engines, route through a rich filter block with feedback and every kind of variation imaginable – even more exotic notch or comb or sample & hold choices, and then add loads of modulation. There are some 12 LFOs per voice, multiple effects, a vocoder, a rotary speaker…

I mention it again because now you can grab Mac (64-bit AU/VST), Windows (32-bit and 64-bit VST), and Linux (64-bit VST) versions, built for you.

And there’s VST3 support.

And there’s support for MPE (MIDI Polyphonic Expression), meaning you can use hardware from ROLI, Roger Linn, Haken, and others – I’m keen to try the Sensel Morph, perhaps with that Buchla overlay.

Now there’s an analog mode for the envelopes, too.

This also holds great promise for people who desire a deep synth but can’t afford expensive hardware. While Apple’s approach means backwards compatibility on macOS is limited, it’ll run on fairly modest machines – meaning this could also be an ideal starting point for building your own integrated hardware/software solution.

In fact, if you’re not much of a coder but are a designer, it looks like design is what they need most at this point. Plus you can contribute sound content, too.

Most encouraging is really that they are trying to build a whole community around this synth – treating open source maintenance not as a chore, but as a shared endeavor.

Check it out now:

https://surge-synthesizer.github.io

Previously:

Powerful SURGE synth for Mac and Windows is now free

Now ‘AI’ takes on writing death metal, country music hits, more

Machine learning is synthesizing death metal. It might make your death metal radio DJ nervous – but it could also mean music software works with timbre and time in new ways. That news – plus some comical abuse of neural networks for writing lyrics in genres like country – next.

Okay, first, whether this makes you urgently want to hear machine learning death metal or it drives you into a rage, either way you’ll want the death metal stream. And yes, it’s a totally live stream – you know, generative style. Tune in, bot out:

Okay, first it’s important to say, the whole point of this is, you need data sets to train on. That is, machines aren’t composing music, so much as creatively regurgitating existing samples based on fairly clever predictive mathematical models. In the case of the death metal example, this is SampleRNN – a recurrent neural network that uses sample material, repurposed from its original intended application working with speech. (Check the original project, though it’s been forked for the results here.)

This is a big, big point, actually – if this sounds a lot like existing music, it’s partly because it is actually sampling that content. The particular death metal example is nice in that the creators have published an academic article. But they’re open about saying they actually intend “overfitting” – that is, little bits of samples are actually playing back. Machines aren’t learning to generate this content from scratch; they’re actually piecing together those samples in interesting ways.

That’s relevant on two levels. One, because once you understand that’s what’s happening, you’ll recognize that machines aren’t magically replacing humans. (This works well for death metal partly because to non-connoisseurs of the genre, the way angry guitar riffs and undecipherable shouting are plugged together already sounds quite random.)

But two, the fact that sample content is being re-stitched in time like this means this could suggest a very different kind of future sampler. Instead of playing the same 3-second audio on repeat or loop, for instance, you might pour hours or days of singing bowls into your sampler and then adjust dials that recreate those sounds in more organic ways. It might make for new instruments and production software.

Here’s what the creators say:

Thus, we want the output to overfit short timescale patterns (timbres, instruments, singers, percussion) and underfit long timescale patterns (rhythms, riffs, sections, transitions, compositions) so that it sounds like a recording of the original musicians playing new musical compositions in their style.

Sure enough, you can go check their code:

https://github.com/ZVK/sampleRNNICLR2017

Or read the full article:

Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands

The reason I’m belaboring this is simple. Big corporations like Spotify might use this sort of research to develop, well, crappy mediocre channels of background music that make vaguely coherent workout soundtracks or faux Brian Eno or something that sounded like Erik Satie got caught in an opium den and re-composed his piano repertoire in a half daze. And that would, well, sort of suck.

Alternatively, though, you could make something like a sampler or DAW more human and less conventionally predictable. You know, instead of applying a sample slice to a pad and then having the same snippet repeat every eighth note. (Guilty as charged, your honor.)

It should also be understood that, perversely, this may all be raising the value of music rather than lowering it. Given the amount of recorded music currently available, and given that it can already often be licensed or played for mere cents, the machine learning re-generation of these same genres actually requires more machine computation and more human intervention – because of the amount of human work required to even select datasets and set parameters and choose results.

DADABOTS, for their part, have made an entire channel of this stuff. The funny thing is, even when they’re training on The Beatles, what you get sounds like … well, some of the sort of experimental sound you might expect on your low-power college radio station. You know, in a good way – weird, digital drones, of exactly the sort we enjoy. I think there’s a layperson impression that these processes will magically improve. That may misunderstand the nature of the mathematics involved – on the contrary, it may be that these sorts of predictive models always produce these sorts of aesthetic results. (The same team use Markov Chains to generate track names for their Bandcamp label. Markov Chains work as well as they did a century ago; they didn’t just start working better.)

I enjoy listening to The Beatles as though an alien civilization has had to digitally reconstruct their oeuvre from some fallout-shrouded, nuclear-singed remains of the number-one hits box set post apocalypse. (“Help! I need somebody! Help! The human race is dead!” You know, like that.)

As it moves to black metal and death metal, their Bandcamp label progresses in surreal coherence:

This album gets especially interesting, as you get weird rhythmic patterns in the samples. And there’s nothing saying this couldn’t in turn inspire new human efforts. (I once met Stewart Copeland, who talked about how surreal it was hearing human drummers learn to play the rhythms, unplugged, that he could only achieve with The Police using delay pedals.)

I’m really digging this one:

So, digital sample RNN processes mostly generate angry and angular experimental sounds – in a good way. That’s certainly true now, and could be true in the future.

What’s up in other genres?

SONGULARITY is making a pop album. They’re focusing on lyrics (and a very funny faux generated Coachella poster). In this case, though, the work is constrained to text – far easier to produce convincingly than sound. Even a Markov Chain can give you interesting or amusing results; with machine learning applied character-by-character to text, what you get is a hilarious sort of futuristic Mad Libs. (It’s also clear humans are cherry-picking the best results, so these are really humans working with the algorithms much as you might use chance operations in music or poetry.)
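
To make the Markov Chain idea concrete, here’s a minimal sketch in JavaScript – a word-level chain, where each word simply predicts one of the words that followed it in the training text. (The corpus here is a stand-in; the character-level neural nets mentioned above work at a finer grain, but the predict-the-next-token principle is the same.)

// Build a word-level Markov chain: map each word to the words that follow it.
const corpus = "my truck is gone and my heart is gone and my dog is gone"; // stand-in lyrics
const words = corpus.toLowerCase().split(/\s+/);
const chain = new Map();
for (let i = 0; i < words.length - 1; i++) {
  const next = chain.get(words[i]) || [];
  next.push(words[i + 1]);
  chain.set(words[i], next);
}

// Walk the chain with random choices to generate a new line.
function generate(start, length) {
  const out = [start];
  for (let i = 0; i < length; i++) {
    const options = chain.get(out[out.length - 1]);
    if (!options) break; // dead end: the word never appears mid-corpus
    out.push(options[Math.floor(Math.random() * options.length)]);
  }
  return out.join(" ");
}

console.log(generate("my", 10));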

Whether or not this says anything about the future of machines, the dadaist results are actually funny parody.

And that gives us results like You Can’t Take My Door:

Barbed whiskey good and whiskey straight.

These projects work because lyrics are already slightly surreal and nonsensical. Machines chart directly into the uncanny valley instead of away from it, creating the element of surprise and exaggerated un-realness that is fundamental to why we laugh at a lot of humor in the first place.

This also produced this Morrissey “Bored With This Desire To Get Ripped” – thanks to the ingenious idea of training the dataset not just with Morrissey lyrics, but also Amazon customer reviews of the P90X home workout DVD system. (Like I said – human genius wins, every time.)

Or there’s Dylan mixed with negative Yelp reviews from Manhattan:

And maybe in this limited sense, the machines are telling us something about how we learn. Part of the poetic flow is about drawing on all our wetware neural connections between everything we’ve heard before – as in the half-awake state of creative vibrations. That is, we follow our own predictive logic without doing the usual censoring that keeps our language rational. Thinking this way, it’s not that we would use machine learning to replace the lyricist. Rather, just as with chance operations in the past, we can use this surreal nonsense to free ourselves from the constraints that normal behavior requires.

We shouldn’t underestimate, though, human intervention in using these lyrics. The neural nets are good at stringing together short bits of words, but the normal act of composition – deciding the larger-scale structure, choosing funnier bits over weaker ones, recognizing patterns – remains human.

Recurrent neural networks probably won’t be playing Coachella any time soon, but if you need a band name, they’re your go-to. More funny text mangling from the Botnik crew.

My guess is, once the hype dies down, these particular approaches will wind up joining the pantheon of drunken walks and Markov Chains and fractals and other pseudo-random or generative algorithmic techniques. I sincerely hope that we don’t wait for that to happen, but use the hype to seize the opportunity to better educate ourselves about the math underneath (or collaborate with mathematicians), and see these more hardware-intensive processes in the context of some of these older ideas.

If you want to know why there’s so much hype and popular interest, though, the human brain may itself hold the answer. We are all of us hard-wired to delight in patterns, which means arguably there’s nothing more human than being endlessly entertained by what these algorithms produce.

But you know, I’m a marathon runner in my sorry way.

Alternative modular: pd knobs is a Pure Data-friendly knob controller

pd knobs is a knob controller for MIDI. It’s built on a Teensy, with open source code – or you can get the pre-built version, with some pretty, apparently nice-feeling knobs. And here it is with the free software Pd + AUTOMATONISM – proof that you don’t need to buy Eurorack just to go modular.

And that’s relevant, actually. Laptops can be had for a few hundred bucks; this controller is reasonably inexpensive, or you could DIY it. Add Automatonism, and you have a virtually unlimited modular of your own making. I love that Eurorack is supporting builders, but I don’t think the barrier to entry for music should be a world where a single oscillator costs what a lot of people spend in a month on rent.

And, anyway, this sounds really cool. Check the demo:

From the creator, Sonoclast:

pd knobs is a 13 knob MIDI CC controller. It can control any software that recognizes MIDI CC messages, but it was obviously designed with Pure Data in mind. I created it because I wanted a knobby interface with nice feeling potentiometers that would preserve its state from session-to-session, like a hardware instrument would. MIDI output is over a USB cable.

For users of the free graphical modular Pd, there are some ready-to-use abstractions for MIDI or even audio-rate control. You can also easily remap the controllers with some simple code.
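
As an illustration of how generic MIDI CC really is, here’s a rough sketch of reading those knobs in a browser with the Web MIDI API – no Pd required. (That the knobs arrive as standard Control Change messages comes straight from the description above; the exact CC numbers aren’t specified here, so check the pd knobs documentation for the actual mapping.)

// Log Control Change messages from any connected MIDI input (Web MIDI API).
navigator.requestMIDIAccess().then((midi) => {
  for (const input of midi.inputs.values()) {
    input.onmidimessage = (msg) => {
      const [status, cc, value] = msg.data;
      if ((status & 0xf0) === 0xb0) { // 0xB0-0xBF = Control Change, any channel
        console.log(`CC ${cc}: ${value}`); // value is 0-127
      }
    };
  }
});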

More:

http://sonoclast.com/products/pd-knobs/

Buy from Reverb.com:

https://reverb.com/item/21147215-sonoclast-pd-knobs-midi-cc-controller

dadamachines doppler is a new platform for open music hardware

The new doppler board promises to meld the power of FPGA brains with microcontrollers and the accessibility of environments like Arduino. And the founder is so confident that could lead to new stuff, he’s making a “label” to help share your ideas.

doppler is a small, 39 EUR development board packing both an ARM microcontroller and an FPGA. It could be the basis of music controllers, effects, synths – anything you can make run on those chips.

If this appeals to you, we’ve even got a CDM-exclusive giveaway for inventors with ideas. (Now, end users, this may all go over your head but … rest assured the upshot for you should be, down the road, more cool toys to play with. Tinkerers, developers, and people with a dangerous appetite for building things – read on.)

But first – why include an FPGA on a development board for music?

The pitch for FPGA

The FPGA is a powerful but rarefied circuit. The idea is irresistible: imagine a circuit that could be anything you want it to be, rewired as easily as software. That’s kind of what an FPGA is – it’s a big bundle of programmable logic blocks and memory blocks. You get all of that computational power at comparatively low cost, with the flexibility to adapt to a number of tasks. The upshot of this is, you get something that performs like dedicated, custom-designed hardware, but that can be configured on the fly – and with terrific real-time performance.

This works well for music and audio applications, because FPGAs do work in “close to the metal” high performance contexts. And we’ve even seen them used in some music gear. (Teenage Engineering was an early FPGA adopter, with the OP-1.) The challenge has always been configuring this hardware for use, which could easily scare off even some hardware developers.

For more on why open FPGA development is cool, here’s a (nerdy) slide deck: https://fpga.dev/oshug.pdf

Now, all of what I’ve just said is a little hard to envision. Wouldn’t it be great if instead of that abstract description, you could fire up the Arduino development environment, upload some cool audio code, and have it running on an FPGA?

doppler, on a breadboard connected to other stuff so it starts to get more musically useful. Future modules could also make this easier.

doppler: easier audio FPGA

doppler takes that FPGA power, and combines it with the ease of working with environments like Arduino. It’s a chewing gum-sized board with both a familiar ARM microcontroller and an FPGA. This board is bare-bones – you just get USB – but the development tools have been set up for you, and you can slap this on a breadboard and add your own additions (MIDI, audio I/O).

The project is led by Johannes Lohbihler, dadamachines founder, along with engineer and artist Sven Braun.

dadamachines also plan some other modules to make it easier to add other stuff us music folks might like. Want audio in and out? A mic preamp? MIDI connections? A display? Controls? Those could be breakout boards, and it seems Johannes and dadamachines are open to ideas for what you most want. (In the meantime, of course, you can lay out your own stuff, but these premade modules could save time when prototyping.)

Full specs of the tiny, core starter board:

– 120MHz ARM Cortex M4F MCU, 512KB Flash (Microchip ATSAMD51G19A) with FPU
– FPGA: 5000 LUT, 1MBit RAM, 6 DSP cores, OSC, PLL (Lattice ICE40UP5K)
– Arduino IDE compatible
– Breadboard friendly (DIL48)
– Micro USB
– Power over USB or external via pin headers
– VCC 3.5V … 5.5V
– All GPIO Pins have 3.3V Logic Level
– 1 LED connected to SAMD51
– 4 x 4 LED Matrix (connected to FPGA)
– 2 User Buttons (connected to FPGA)
– AREF Solder Jumper
– I2C (need external pullup), SPI, QSPI Pins
– 2 DAC pins, 10 ADC pins
– Full open source toolchain
– SWD programming pin headers
– Double press reset to enter the bootloader
– UF2 Bootloader with Firmware upload via simple USB stick mode

See also the quickstart PDF.

I’ve focused on the FPGA powers here, because those are the new ones, but the microcontroller side brings compatibility with existing libraries that allow you to combine some very useful features.

So, for instance, there’s USB host capability, which allows connecting all sorts of input devices, USB MIDI gadgets, and gaming controllers. See:

https://github.com/gdsports/USB_Host_Library_SAMD

That frees up the FPGA to do audio only. Flip it around the other way, and you can use the microcontroller for audio, while the FPGA does … something else. The Teensy audio library will work on this chip, too – meaning a bunch of adafruit instructional content will be useful here:

https://learn.adafruit.com/synthesizer-design-tool?view=all

https://github.com/adafruit/Audio/

doppler is fully open source hardware, with open firmware and code samples, so it’s designed to be easy to integrate into a finished product – even one you might sell commercially.

The software examples for now are mainly limited to configuring and using the board, so you’ll still need to bring your own code for doing something useful. But you can add the doppler as an Arduino library and access even the FPGA from inside the Arduino environment, which expands this to a far wider range of developers.

Look, ma, Arduino!

In a few steps, you can get up and running with the development environment, on any OS. You’ll be blinking lights and even using a 4×4 matrix of lights to show characters, just as easily as you would on an Arduino board – only you’re using an FPGA.

Getting to that stage is lunch break stuff if you’ve at least worked with Arduino before:

https://github.com/dadamachines/doppler

Dig into the firmware, and you can see, for instance, some I/O and a synth working. (This is in progress, it seems, but you get the idea.)

https://github.com/dadamachines/doppler-FPGA-firmware

And lest you think this is going to be something esoteric for experienced embedded hardware developers, part of the reason it’s so accessible is that Johannes is working with Sven Braun. Sven is among other things the developer of iOS apps zmors synth and modular – so you get something that’s friendly to app developers.

doppler in production…

A label for hardware, platform for working together

Johannes tells us there’s more to this than just tossing an open source board out into the world – dadamachines is also inviting collaborations. They’ve made doppler a kind of calling card for working together, as well as a starting point for building new hardware ideas, and are suggesting Berlin-based dadamachines as a “label” – a platform to develop and release those ideas as products.

There are already some cool, familiar faces playing with these boards – think Meng Qi, Tom Whitwell of Music Thing, and Ornament & Crime.

Johannes and his dadamachines already have a proven hardware track record, bringing a product from Kickstarter funding to manufacturing, with the automat. It’s an affordable device that makes it easy to connect physical, “robotic” outputs (like solenoids and motors). (New hardware, a software update and more are planned for that, too, by the way.) And of course, part of what you get in doing that kind of hardware is a lot of amassed experience.

We’ve seen fertile open platforms before – Arduino and Raspberry Pi have each created their own ecosystems of both hardware and education. But this suggests working even more closely – pooling space, time, manufacturing, distribution, and knowledge together.

This might be skipping a few steps – even relatively experienced developers may want to see what they can do with this dev board first. But it’s an interesting long-range goal that Johannes has in mind.

Want your own doppler? Got ideas?

We have five doppler boards to give away to interested CDM readers.

Just tell dadamachines what you want to make, or connect, or use, and email that idea to:

cdm@dadamachines.com

dadamachines will pick five winners to get a free board sent to them. (Johannes says he wants to do this by lottery, but I’ve said if there are five really especially good ideas or submissions, he should… override the randomness.)

And stay tuned here, as I hope to bring you some more stuff with this soon.

For more:

https://forum.dadamachines.com/

https://dadamachines.com/product/doppler/

Plankton Electronics launches Kickstarter for SPICE Modular Saturation Unit

Plankton Electronics has announced the launch of a Kickstarter project for its SPICE modular distortion desktop unit. SPICE is a saturator unit capable of a huge variety of sounds and colors, from subtle tube warmth to extreme fuzz distortion. SPICE features: rackable (38HP Eurorack) modular desktop unit. 8 CV inputs with LED level indicators. 6 […]

How to make a multitrack recording in VCV Rack modular, free

In the original modular synth era, your only way to capture ideas was to record to tape. But that same approach can be liberating even in the digital age – and it’s a perfect match for the open VCV Rack software modular platform.

Competing modular environments like Reaktor, Softube Modular, and Cherry Audio Voltage Modular all run well as plug-ins. That functionality is coming soon to a VCV Rack update, too – see my recent write-up on that. In the meantime, VCV Rack is already capable of routing audio into a DAW or multitrack recorder – via the existing (though soon-to-be-deprecated) VST Bridge, or via inter-app routing schemes on each OS, including JACK.

Those are all good solutions, so why would you bother with a module inside the rack?

Well, for one, there’s workflow. There’s something nice about being able to just keep this record module handy and grab a weird sound or nice groove at will, without having to shift to another tool.

Two, the big ongoing disadvantage of software modular is that it’s still pretty CPU intensive – sometimes unpredictably so. Running Rack standalone means you don’t have to worry about overhead from the host, or its audio driver settings, or anything like that.

A free recording solution inside VCV Rack

What you’ll need to make this work is the free NYSTHI modules for VCV Rack, available via Rack’s plug-in manager. They’re free, though – get ready, there’s a hell of a lot of them.

Big thanks to chaircrusher for this tip and some other ones that informed this article – do go check his music.

Type “recorder” into the search box for modules, and you’ll see different options from NYSTHI – current at least as of this writing.

2 Channel MasterRecorder is a simple stereo recorder.
2 Channel MasterRecorder 2 adds various features: monitoring outs, autosave, a compressor, and “stereo massaging.”
Multitrack Recorder is a multitrack recorder with 4- or 8-channel modes.

The multitrack is the one I use the most. It allows you to create stems you can then mix in another host, or turn into samples (or, say, load onto a drum machine or the like), making this a great sound design tool and sound starter.

This is creatively liberating for the same reason it’s actually fun to have a multitrack tape recorder in the same studio as a modular, speaking of vintage gear. You can muck about with knobs, find something magical, and record it – and then not worry about going on to do something else later.

The AS mixer, routed into NYSTHI’s multitrack recorder.

Set up your mix. The free included Fundamental modules in Rack will cover the basics, but I would also go download Alfredo Santamaria’s excellent selection, the AS modules, also in the Plugin Manager, and also free. Alfredo has created friendly, easy-to-use 2-, 4-, and 8-channel mixers that pair perfectly with the NYSTHI recorders.

Add the mixer, route your various parts, set level (maybe with some temporary panning), and route the output of the mixer to the Audio device for monitoring. Then use the ‘O’ row to get a post-fader output with the level.

(Alternatively, if you need extra features like sends, there’s the mscHack mixer, though it’s more complex and less attractive.)

Prep that signal. You might also consider a DC Offset and Compressor between your raw sources and the recording. (Thanks to Jim Aikin for that tip.)

Configure the recorder. Right-click on the recorder for an option to set 24-bit audio if you want more headroom, or to pre-select a destination. Set 4- or 8-track mode with the switch. Set CHOOSE FILE if you want to manually select where to record.

There are trigger ins and outs, too, so apart from just pressing the START and STOP buttons, you can either trigger a sequencer or clock directly from the recorder, or vice versa.

Record away! And go to town… when you’re done, you’ll get a stereo WAV file, or a 4- or 8-track WAV file. Yes, that’s one file with all the tracks. So about that…

Splitting up the multitrack file

This module produces a single, multichannel WAV file. Some software will know what to do with that. Reaper, for instance, has excellent multichannel support throughout, so you can just drag and drop into it. Adobe’s Audition CS also opens these files, but it can’t quickly export all the stems.

Software like Ableton Live, meanwhile, will just throw up an error if you try to open the file. (Bad Ableton! No!)

It’s useful to have individual stems anyway. ffmpeg is an insanely powerful cross-platform tool capable of doing all kinds of things with media. It’s completely free and open source, it runs on every platform, and it’s fast and deep. (It converts! It streams! It records!)

Installing is easier than it used to be, thanks to a cleaned-up site and pre-built binaries for Mac and Windows (plus of course the usual easy Linux installs):

https://ffmpeg.org/

Unfortunately, it’s so deep and powerful, it can also be confusing to figure out how to do something. Case in point – this audio channel manipulation wiki page.

In this case, you can use the map channel “filter” to make this happen. So for eight channels, I do this:

ffmpeg -i input.wav -map_channel 0.0.0 0.wav -map_channel 0.0.1 1.wav -map_channel 0.0.2 2.wav -map_channel 0.0.3 3.wav -map_channel 0.0.4 4.wav -map_channel 0.0.5 5.wav -map_channel 0.0.6 6.wav -map_channel 0.0.7 7.wav

But because this is a command line tool, you could create some powerful automated workflows for your modular outputs now that you know this technique.
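
For example, here’s a rough Node.js sketch wrapping that same command, so you don’t have to type out eight -map_channel flags by hand. (It assumes ffmpeg is on your PATH and an eight-channel input, to match the NYSTHI 8-track mode above.)

// split-stems.js - split a multichannel WAV into mono stems with ffmpeg.
// Usage: node split-stems.js input.wav
const { execFileSync } = require("child_process");

const input = process.argv[2] || "input.wav";
const channels = 8; // match your recorder's mode (4 or 8)

const args = ["-i", input];
for (let ch = 0; ch < channels; ch++) {
  // -map_channel 0.0.N routes channel N of the first input to its own file
  args.push("-map_channel", `0.0.${ch}`, `${ch}.wav`);
}
execFileSync("ffmpeg", args, { stdio: "inherit" });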

Sound Devices, the folks who make excellent multichannel recorders, also have a free Mac and Windows tool called Wave Agent which handles this task if you want a GUI instead of the command line.

https://www.sounddevices.com/products/accessories/software/wave-agent

That’s worth keeping around, too, since it can also mix and monitor your output. (No Linux version, though.)

Record away!

Bonus tutorial here – the other thing apart from recording you’ll obviously want with VCV Rack is some hands-on control. Here’s a nice tutorial this week on working with BeatStep Pro from Arturia (also a favorite in the hardware modular world):

I really like this way of working, in that it lets you focus on the modular environment instead of juggling tools. I actually hope we’ll see a Fundamental module for the task in the future. Rack’s modular ecosystem changes fast, so if you find other useful recorders, let us know.

https://vcvrack.com/

Previously:

Step one: How to start using VCV Rack, the free modular software

How to make the free VCV Rack modular work with Ableton Link

A free, shared visual playground in the browser: Olivia Jack talks Hydra

Reimagine pixels and color, melt your screen live into glitches and textures, and do it all for free on the Web – as you play with others. We talk to Olivia Jack about her invention, live coding visual environment Hydra.

Inspired by analog video synths and vintage image processors, Hydra is open, free, collaborative, and all runs as code in the browser. It’s the creation of US-born, Colombia-based artist Olivia Jack. Olivia joined our MusicMakers Hacklab at CTM Festival earlier this winter, where she presented her creation and its inspirations, and jumped in as a participant – spreading Hydra along the way.

Olivia’s Hydra performances are explosions of color and texture, where even the code becomes part of the aesthetic. And it’s helped take Olivia’s ideas across borders, both in the Americas and Europe. It’s part of a growing interest in the live coding scene, even as that scene enters its second or third decade (depending on how you count), but Hydra also represents an exploration of what visuals can mean and what it means for them to be shared between participants. Olivia has rooted those concepts in the legacy of cybernetic thought.

Oh, and this isn’t just for nerd gatherings – her work has also lit up one of Bogota’s hotter queer parties. (Not that such things need be thought of as a binary, anyway, but in case you had a particular expectation about that.) And yes, that also means you might catch Olivia at a JavaScript conference; I last saw her back from making Hydra run off solar power in Hawaii.

Following her CTM appearance in Berlin, I wanted to find out more about how Olivia’s tool has evolved and its relation to DIY culture and self-fashioned tools for expression.

Olivia with Alexandra Cardenas in Madrid. Photo: Tatiana Soshenina.

CDM: Can you tell us a little about your background? Did you come from some experience in programming?

Olivia: I have been programming now for ten years. Since 2011, I’ve worked freelance — doing audiovisual installations and data visualization, interactive visuals for dance performances, teaching video games to kids, and teaching programming to art students at a university, and all of these things have involved programming.

Had you worked with any existing VJ tools before you started creating your own?

Very few; almost all of my visual experience has been through creating my own software in Processing, openFrameworks, or JavaScript rather than using existing software. I have used Resolume in one or two projects. I don’t even really know how to edit video, but I sometimes use [Adobe] After Effects. I had no intention of making software for visuals, but started an investigative process related to streaming on the internet and also trying to learn about analog video synthesis without having access to modular synth hardware.

Alexandra Cárdenas and Olivia Jack @ ICLC 2019:

In your presentation in Berlin, you walked us through some of the origins of this project. Can you share a bit about how this germinated, what some of the precursors to Hydra were and why you made them?

It’s based on an ongoing investigation of:

  • Collaboration in the creation of live visuals
  • Possibilities of peer-to-peer [P2P] technology on the web
  • Feedback loops

Precursors:

A significant moment came as I was doing a residency in Platohedro in Medellin in May of 2017. I was teaching beginning programming, but also wanted to have larger conversations about the internet and talk about some possibilities of peer-to-peer protocols. So I taught programming using p5.js (the JavaScript version of Processing). I developed a library so that the participants of the workshop could share in real-time what they were doing, and the other participants could use what they were doing as part of the visuals they were developing in their own code. I created a class/library in JavaScript called pixel parche to make this sharing possible. “Parche” is a very Colombian word in Spanish for group of friends; this reflected the community I felt while at Platohedro, the idea of just hanging out and jamming and bouncing ideas off of each other. The tool clogged the network and I tried to cram too much information in a very short amount of time, but I learned a lot.

I was also questioning some of the metaphors we use to understand and interact with the web. “Visiting” a website is exchanging a bunch of bytes with a faraway place and routed through other far away places. Rather than think about a webpage as a “page”, “site”, or “place” that you can “go” to, what if we think about it as a flow of information where you can configure connections in realtime? I like the browser as a place to share creative ideas – anyone can load it without having to go to a gallery or install something.

And I was interested in using the idea of a modular synthesizer as a way to understand the web. Each window can receive video streams from and send video to other windows, and you can configure them in real time using WebRTC (realtime web streaming).
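
Ed.: the browser plumbing for this is surprisingly compact. A minimal sketch of one direction of that flow – capturing a canvas and sending it to a peer – might look like this; the signaling step, where the two browsers exchange connection details, is application-specific and omitted.

// Capture a canvas as a video stream and send it over WebRTC.
// Signaling (exchanging the offer/answer and ICE candidates) is omitted.
const canvas = document.querySelector("canvas");
const stream = canvas.captureStream(30); // 30 fps video track
const pc = new RTCPeerConnection();
for (const track of stream.getTracks()) {
  pc.addTrack(track, stream); // our pixels, outbound
}

// On the receiving side, incoming tracks become a playable <video>:
pc.ontrack = (event) => {
  const video = document.createElement("video");
  video.srcObject = event.streams[0];
  document.body.appendChild(video);
  video.play();
};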

Here’s one of the early tests I did:

https://vimeo.com/218574728

I really liked this philosophical idea you introduced of putting yourself in a feedback loop. What does that mean to you? Did you discover any new reflections of that during our hacklab, for that matter, or in other community environments?

It’s about processes of creation, not having a specific idea of where it will end up – trying something, seeing what happens, and then trying something else.

Code tries to define the world using specific set of rules, but at the end of the day ends up chaotic. Maybe the world is chaotic. It’s important to be self-reflective.

How did you come to developing Hydra itself? I love that it has this analog synth model – and these multiple frame buffers. What was some of the inspiration?

I had no intention of creating a “tool”… I gave a workshop at the International Conference on Live Coding in December 2017 about collaborative visuals on the web, and made an editor to make the workshop easier. Then afterwards people kept using it.

I didn’t think too much about the name but [had in mind] something about multiplicity. Hydra organisms have no central nervous system; their nervous system is distributed. There’s no hierarchy of one thing controlling everything else, but rather interconnections between pieces.

Ed.: Okay, Olivia asked me to look this up and – wow, check out nerve nets. There’s nothing like a head, let alone a central brain. Instead, the aquatic creatures in the genus Hydra have senses and neurons essentially as one interconnected network, with cells that detect light and touch forming a distributed sensory awareness.

Most graphics abstractions are based on the idea of a 2d canvas or 3d rendering, but the computer graphics card actually knows nothing about this; it’s just concerned with pixel colors. I wanted to make it easy to play with the idea of routing and transforming a signal rather than drawing on a canvas or creating a 3d scene.

This also contrasts with directly programming a shader (one of the other common ways that people make visuals using live coding), where you generally only have access to one frame buffer for rendering things to. In Hydra, you have multiple frame buffers that you can dynamically route and feed into each other.
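
Ed.: in practice, that looks something like this – a few lines of actual Hydra code, rendering an oscillator into one buffer and feeding a second buffer back into itself (the parameter values are arbitrary; try it in the editor linked at the end of this article).

// Render a color oscillator into buffer o0.
osc(10, 0.1, 0.8).out(o0)
// o1 reads itself (slightly scaled and rotated) and blends in o0 - feedback.
src(o1).scale(1.01).rotate(0.01).blend(src(o0), 0.2).out(o1)
// Show buffer o1 on screen.
render(o1)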

MusicMakers Hacklab in Berlin. Photo: Malitzin Cortes.

Livecoding is of course what a lot of people focus on in your work. But what’s the significance of code as the interface here? How important is it that it’s functional coding?

It’s inspired by [Alex McLean’s sound/music pattern environment] TidalCycles — the idea of taking a simple concept and working from there. In Tidal, the base element is a pattern in time, and everything is a transformation of that pattern. In Hydra, the base element is a transformation from coordinates to color. All of the other functions either transform coordinates or transform colors. This directly corresponds to how fragment shaders and low-level graphics programming work — the GPU runs a program simultaneously on each pixel, and that receives the coordinates of that pixel and outputs a single color.

I think immutability in functional (and declarative) coding paradigms is helpful in live coding; you don’t have to worry about mentally keeping track of a variable and what its value is or the ways you’ve changed it leading up to this moment. Functional paradigms are really helpful in describing analog synthesis – each module is a function that always does the same thing when it receives the same input. (Parameters are like knobs.) I’m very inspired by the modular idea of defining the pieces to maximize the amount that they can be rearranged with each other. The code describes the composition of those functions with each other. The main logic is functional, but things like setting up external sources from a webcam or live stream are not at all; JavaScript allows mixing these things as needed. I’m not super opinionated about it, just interested in the ways that the code is legible and makes it easy to describe what is happening.

What’s the experience you have of the code being onscreen? Are some people actually reading it / learning from it? I mean, in your work it also seems like a texture.

I am interested in it being somewhat understandable even if you don’t know what it is doing or that much about coding.

Code is often a visual element in a live coding performance, but I am not always sure how to integrate it in a way that feels intentional. I like using my screen itself as a video texture within the visuals, because then everything I do — like highlighting, scrolling, moving the mouse, or changing the size of the text — becomes part of the performance. It is really fun! Recently I learned about prepared desktop performances, and related to the live-coding mantra of “show your screens,” I like the idea that everything I’m doing is a part of the performance. And that’s also why I directly mirror the screen from my laptop to the projector. You can contrast that to just seeing the output of an AV set, and having no idea how it was created or what the performer is doing. I don’t think it’s necessary all the time, but it feels like using the computer as an instrument and exploring different ways that it is an interface.

The algorave thing is now getting a lot of attention, but you’re taking this tool into other contexts. Can you talk about some of the other parties you’ve played in Colombia, or when you turned the live code display off?

Most of my inspiration and references for what I’ve been researching and creating have been outside of live coding — analog video synthesis, net art, graphics programming, peer-to-peer technology.

Having just said I like showing the screen, I think it can sometimes be distracting and isn’t always necessary. I did visuals for Putivuelta, a queer collective and party focused on diasporic Latin club music and wanted to just focus on the visuals. Also I am just getting started with this and I like to experiment each time; I usually develop a new function or try something new every time I do visuals.

Community is such an interesting element of this whole scene. So I know with Hydra so far there haven’t been a lot of outside contributions to the codebase – though this is a typical experience of open source projects. But how has it been significant to your work to both use this as an artist, and teach and spread the tool? And what does it mean to do that in this larger livecoding scene?

I’m interested in how technical details of Hydra foster community — as soon as you log in, you see something that someone has made. It’s easy to share via twitter bot, see and edit the code live of what someone has made, and make your own. It acts as a gallery of shareable things that people have made:

https://twitter.com/hydra_patterns

Although I’ve developed this tool, I’m still learning how to use it myself. Seeing how other people use it has also helped me learn how to use it.

I’m inspired by work that Alex McLean and Alexandra Cardenas and many others in live coding have done on this — just the idea that you’re showing your screen and sharing your code with other people to me opens a conversation about what is going on, that as a community we learn and share knowledge about what we are doing. Also I like online communities such as talk.lurk.org and streaming events where you can participate no matter where you are.

I’m also really amazed at how this is spreading through Latin America. Do you feel like there’s some reason the region has been so fertile with these tools?

It’s definitely influenced me rather than the other way around, getting to know Alexandra [Cardenas’] work, Esteban [Betancur, author of live coding visual environment Cine Vivo], rggtrn, and Mexican live coders.

Madrid performance. Photo: Tatiana Soshenina.

What has the scene been like there for you – especially now living in Bogota, having grown up in California?

I think people are more critical about technology and so that makes the art involving technology more interesting to me. (I grew up in San Francisco.) I’m impressed by the amount of interest in art and technology spaces such as Plataforma Bogota that provide funding and opportunities at the intersection of art, science, and technology.

The press lately has fixated on live coding or algorave but maybe not seen connections to other open source / DIY / shared music technologies. But – maybe now especially after the hacklab – do you see some potential there to make other connections?

To me it is all really related, about creating and hacking your own tools, learning, and sharing knowledge with other people.

Oh, and lastly – want to tell us a little about where Hydra itself is at now, and what comes next?

Right now, it’s improving documentation and making it easier for others to contribute.

Personally, I’m interested in performing more and developing my own performance process.

Thanks, Olivia!

Check out Hydra for yourself, right now:

https://hydra-editor.glitch.me/

Previously:

Inside the livecoding algorave movement, and what it says about music

Magical 3D visuals, patched together with wires in browser: Cables.gl

VCV Rack nears 1.0, new features, as software modular matures

VCV Rack, the open source platform for software modular, keeps blossoming. If what you were waiting for was more maturity and stability and integration, the current pipeline looks promising. Here’s a breakdown.

Even with other software modulars on the scene, Rack stands out. Its model is unique – build a free, open source platform, and then build the business on adding commercial modules, supporting both the platform maker (VCV) and third parties (the module makers). That has opened up some new possibilities: a mixed module ecosystem of free and paid stuff, support for ports of open source hardware to software (Music Thing Modular, Mutable Instruments), robust Linux support (which other Eurorack-emulation tools currently lack), and a particular community ethos.

Of course, the trade-off with Rack 0.xx is that the software has been fairly experimental. Versions 1.0 and 2.0 are now in the pipeline, though, and they promise a more refined interface, greater performance, a more stable roadmap, and more integration with conventional DAWs.

New for end users

VCV founder and lead developer Andrew Belt has been teasing out what’s coming in 1.0 (and 2.0) online.

Here’s an overview:

  • Polyphony, polyphonic cables, polyphonic MIDI support and MPE
  • Multithreading and hardware acceleration
  • Tooltips, manual data entry, and right-click menus with more information on modules
  • Virtual CV to MIDI and direct MIDI mapping
  • 2.0 version coming with fully-integrated DAW plug-in

More on that:

Polyphony and polyphonic cables. The big one – you can now use polyphonic modules and even polyphonic patching. Here’s an explanation:

https://community.vcvrack.com/t/how-polyphonic-cables-will-work-in-rack-v1/

New modules will help you manage this.

Polyphonic MIDI and MPE. Yep, native MPE support. We’ve seen this in some competing platforms, so great to see here.

Multithreading. Rack will now use multiple cores on your CPU more efficiently. There’s also a new DSP framework that adds CPU acceleration (which helps efficiency for polyphony, for example). (See the developer section below.)

Oversampling for better audio quality. Users can choose higher oversampling settings in the engine to reduce aliasing.

Tooltips and manual value entry. Get more feedback from the UI and precise control. You can also right-click to open other stuff – links to the developer’s website, manual (yes!), source code (for those that have it readily available), or factory presets.

Core CV-MIDI. Send virtual CV to outboard gear as MIDI CC, gate, note data. This also integrates with the new polyphonic features. But even better –

Map MIDI directly. The MIDI map module lets you map parameters without having to patch through another module. A lot of software has been pretty literal with the modular metaphor, so this is a welcome change.

And that’s just what’s been announced. 1.0 is imminent, in the coming months, but 2.0 is coming, as well…

Rack 2.0 and VCV for DAWs. After 1.0, 2.0 isn’t far behind. “Shortly after” 2.0 is released, a DAW plug-in will be launched as a paid add-on, with support for “multiple instances, DAW automation with parameter labels, offline rendering, MIDI input, DAW transport, and multi-channel audio.”

These plans aren’t totally set yet, but a price around a hundred bucks and multiple ins and outs are also planned. (Multiple I/O also means some interesting integrations will be possible with Eurorack or other analog systems, for software/hardware hybrids.)

VCV Bridge is already deprecated, and will be removed from Rack 2.0. Bridge was effectively a stopgap for allowing crude audio and MIDI integration with DAWs. The planned plug-in sounds more like what users want.

Rack 2.0 itself will still be free and open source software, under the same license. The good thing about the plug-in is, it’s another way to support VCV’s work and pay the bills for the developer.

New for developers

Rack v1 is under a BSD license – proper free and open source software. There’s even a mission statement that deals with this.

Rack v1 will bring a new, stabilized API – meaning you will need to do some work to port your modules. It’s not a difficult process, though – and I think part of Rack’s appeal is the friendly API and SDK from VCV.

https://vcvrack.com/manual/Migrate1.html

You’ll also be able to take advantage of an SSE wrapper (simd.hpp) to take advantage of accelerated code on desktop CPUs, without hard coding manual calls to hardware that could break your plug-ins in the future. This also theoretically opens up future support for other platforms – like NEON or AVX acceleration. (It does seem like ARM platforms are the future, after all.)

Plus check this port for adding polyphony to your stuff.

And in other Rack news…

Also worth mentioning:

While the Facebook group is still active and a place where a lot of people share work, there’s a new dedicated forum. That does things Facebook doesn’t allow, like efficient search, structured sections in chronological order so it’s easy to find answers, and generally not being part of a giant, evil, destructive platform.

https://community.vcvrack.com/

It’s powered by open source forum software Discourse.

For a bunch of newly free add-ons, check out the wonderful XFX stuff (I paid for at least one of these, and would do again if they add more commercial stuff):

http://blamsoft.com/vcv-rack/

Vult is a favorite of mine, and there’s a great review this week, with 79 demo patches too:

There’s also a new version of Mutable Instruments Tides, Tidal Modulator 2, available in the Audible Instruments Preview add-on – and 80% of your money goes to charity.

https://vcvrack.com/AudibleInstruments.html#preview

And oh yeah, remember that in the fall Rack already added support for hosting VST plugins, with VST Host. It will even work inside the forthcoming plugin, so you can host plugins inside a plugin.

https://vcvrack.com/Host.html

Here it is with the awesome d16 stuff, another of my addictions:

Great stuff. I’m looking forward to some quality patching time.

http://vcvrack.com/

Synth One is a free, no-strings-attached, iPad and iPhone synthesizer

Call it the people’s iOS synth: Synth One is free – without ads or registration or anything like that – and loved. And now it’s reached 1.0, with iPad and iPhone support and some expert-designed sounds.

First off – if you’ve been wondering what happened to Ashley Elsdon, aka Palm Sounds and editor of our Apps section, he’s been on a sabbatical since September. We’ll be thinking soon about how best to feature his work on this site and how to integrate app coverage in the current landscape. But you can read his take on why AudioKit matters, and if Ashley says something is awesome, that counts.

But with lots of software synths out there, why does Synth One matter in 2019? Easy:

It’s really free. Okay, sure, it’s easy for Apple to “give away” software when they make more on their dongles and adapters than most app developers charge. But here’s an independent app that’s totally free, without needing you to join a mailing list or look at ads or log into some cloud service.

It’s a full-featured, balanced synth. Under the hood, Synth One is a polysynth with hybrid virtual analog / FM, with five oscillators, step sequencer, poly arpeggiator, loads of filtering and modulation, a rich reverb, multi-tap delay, and plenty of extras.

There’s standards support up the wazoo. Are you visually impaired? There’s Voice Over accessibility. Want Ableton Link support? MIDI learn on everything? Compatibility with Audiobus 3 and Inter App Audio so you can run this in your favorite iOS DAW? You’re set.

It’s got some hot presets. Sound designer Francis Preve has been on fire lately, making presets for everyone from KORG to the popular Serum plug-in. And version 1.0 launches with Fran’s sound designs – just what you need to get going right away. (Fran’s sound designs are also usually great for learning how a synth works.)

It’s the flagship of an essential framework. Okay the above matters to users – this matters to developers (who make stuff users care about, naturally). Synth One is the synthesizer from the people who make AudioKit. That’s good for making sure the framework is solid, plus

You can check out the source code. Everything is up at github.com/AudioKit/AudioKitSynthOne – meaning Synth One is also an (incredibly sophisticated) example app for Audio Kit.

More is coming… MPE (MIDI Polyphonic Expression) and AUv3 are coming soon, say the developers.

And now the big addition —

It runs on iPhone, too. I have to say, I’ve been waiting for a synth that’s pocket sized for extreme portability, but few really are compelling. Now you can run this on any iPhone 6 or better – and if you’ve got a higher-end iPhone (iPhone X/XS/XR / iPhone XS Max / 6/7/8 Plus size), you’ll get a specially optimized UI with even more space.

Check out this nice UI:

On iPhone:

More:

AudioKit Synth One 1.0 arrives, is universal, is awesome
