Save 20% off Toontrack’s Drum & EZkeys MIDI and EZmix 2 Preset Packs

Plugin Boutique has launched a sale on Toontrack, offering a 20% discount on all Drum and EZkeys MIDI packs and preset packs for the EZmix 2 mixing tool through March. From extreme metal, jazz and Americana to blues, pop and rock, Toontrack’s Drum MIDI line offers pro-played drum MIDI for any songwriting need. The sale […]

The post Save 20% off Toontrack’s Drum & EZkeys MIDI and EZmix 2 Preset Packs appeared first on rekkerd.org.

Bremmers Audio Design updates MultitrackStudio for iPad to v3.2

Bremmers Audio Design has released an update to its MultitrackStudio for iPad, an audio/MIDI multitrack recording app featuring high quality audio effects including a guitar amp simulator. Both audio and MIDI tracks can be edited. MIDI editing features include pianoroll, drum and score editors. The straightforward user-interface has been designed with tape-based recording in mind. […]

The post Bremmers Audio Design updates MultitrackStudio for iPad to v3.2 appeared first on rekkerd.org.

Scaler v1.7 update introduces several new user-requested features & enhancements

Plugin Boutique has announced the release of an update to its Scaler creative chord composer plugin, introducing several new user-requested features and enhancements. Version 1.7 comes with new chord sets, new bass tuning, preview of chord sets, and lots more. There are 30 new Blues, Latin and Bossa Nova chord sets and over one hundred […]

The post Scaler v1.7 update introduces several new user-requested features & enhancements appeared first on rekkerd.org.

Delectable Records releases Beyond Deep Tech & Future House Piano

Delectable Records has launched the Beyond Deep Tech sample collection, featuring over 480 MB of sounds perfect for all Minimal and Deep Tech productions. Influenced by the hottest contemporary club sounds, Beyond Deep Tech contains 369 loops, 237 one-shots and 34 sampler patches in Battery, EXS24, Kontakt and NN-XT formats. Producers addicted to Minimal and […]

The post Delectable Records releases Beyond Deep Tech & Future House Piano appeared first on rekkerd.org.

Mod Bits II modular synth sample pack released by OhmLab

OhmLab has announced a new release in its popular modular synth sample library series. Modular. Layered. Twisted. Punchy. Textured. Original. Funky. Useful. Mod Bits II is a diverse collection of interesting percussive sounds developed here in the lab. Each sample is a combination of […]

The post Mod Bits II modular synth sample pack released by OhmLab appeared first on rekkerd.org.

MidiWrist aids instrumentalists by giving Siri and Apple Watch control

Grabbing the mouse, keyboard, or other controller while playing an instrument is no fun. Developer Geert Bevin has a solution: put an Apple Watch or (soon) iPhone’s Siri voice command in control.

We’ve been watching MidiWrist evolve over the past weeks. It’s a classic story of what happens when a developer is also a musician, making a tool for themselves. Geert has long been an advocate for combining traditional instrumental technique and futuristic electronic instruments; in this case, he’s applying his musicianship and developer chops to solving a practical issue.

If you’ve got an iPhone but no watch – like me – there are some solutions coming (more on that in a bit). But Apple Watch is really ideally suited to the task. The fact that you have the controller strapped to your body already means controls are at hand. Haptic feedback on the Digital Crown means you can adjust parameters without even having to look at the display. (The Digital Crown is the dial on the side of the watch, modeled on the crown used to wind and/or set the time on analog watches. Haptic feedback uses vibration to give physical feedback in the way a tangible control would, both on that crown and on the touch surface of the watch face – what Apple calls “taptic” feedback, since it works with the existing touch interface. Even if you’re not a fan of the Apple Watch, it’s a fascinating design feature.)

How this works in practice: you can control the transport and even overdub new tracks easily, pictured here in Logic Pro X:

Just seeing the Digital Crown mapped as a new physical control is a compelling tech demo – and very useful for mobile apps, which tend to lack physical feedback. Here it is in a pre-release demo with the Minimoog Model D on iPhone:

Or here it is with the Eventide H9 (though, yeah, you could just put the pedal on a table and get the same impact):

Here it is with IK Multimedia’s UNO synth, though this rather makes me wish the iPhone just had its own Digital Crown:

Version 1.1 will include voice control via Siri. That’ll work with iPhones, too, so you don’t necessarily need an Apple Watch. With voice-controlled interfaces coming to various home devices, it’s not hard to imagine sitting at home and recording ideas right when the mood strikes you, Star Trek: The Next Generation style.

Geert, please, can we set up a DAW that lets us dictate melodies like this?

It’s a simple app at its core, but as you can see, it really embodies three features: wearable interfaces, hands-free control (with voice), and haptic feedback. And there are lots of options for custom control, MIDI functionality, and connectivity. Check it out – this really is insane for just a watch app:

Four knobs can be controlled with the digital crown
Macro control over multiple synth parameters from the digital crown
Remotely Play / Stop / Record / Rewind your DAW from your Watch
Knobs can be controlled individually or simultaneously
Knobs can be linked to preserve their offsets
Four buttons can be toggled by tapping the Watch
Buttons can either be stateful or momentary
Program changes through the digital crown or by tapping the Watch
Transport control over MIDI Machine Control (MMC)
XY pad with individual messages for each axis
Optional haptic feedback for all Watch interactions
Optional value display on the Watch
Configurable colors for all knobs and buttons
Configurable MIDI channels and CC numbers
Save your configurations as presets for easy retrieval
MIDI learn for easy controller configuration
MIDI input to sync the state of the controllers with the controlled synths
Advertise as a Bluetooth MIDI device
Connect to other Bluetooth MIDI devices
Monitor the MIDI values on the iPhone
Low latency and fast response

http://uwyn.com/midiwrist/
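
Since MidiWrist advertises itself as a standard Bluetooth MIDI device (see the feature list above), any MIDI-capable software can listen in on it. As a minimal sketch – assuming Python with the mido library, and a hypothetical port name you’d look up on your own system – here’s how you might monitor what a controller like this sends:

```python
import mido

# Hypothetical port name – run mido.get_input_names() to find the actual
# Bluetooth MIDI port your system exposes.
PORT_NAME = "MidiWrist Bluetooth"

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        if msg.type == "control_change":
            # Each knob sends on a configurable MIDI channel and CC number.
            print(f"ch={msg.channel} cc={msg.control} value={msg.value}")
        elif msg.type == "program_change":
            # Program changes come from the crown or from tapping the Watch.
            print(f"program change: {msg.program}")
```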

All of this really does make me want a dedicated DIY haptic device. I had an extended conversation with the engineers at Native Instruments about their haptic efforts with TRAKTOR; I personally believe there’s a lot of potential for human-machine interfaces for music with this approach. But that will depend in the long run on more hardware adopting haptic interfaces beyond just the passive haptics of nice-feeling knobs and faders and whatnot.

It’s a good space to keep an eye on. (I almost wrote “a good space to watch.” No. That’s not the point. You know.)

Geert shares a bit about development here:

Fun anecdote — in a way, this app has been more than three years in the making. I got the first Apple Watch in the hope of creating this, but the technology was way too slow without a direct real-time communication protocol between the Watch and the iPhone. I’ve been watching every Watch release (teehee) up until the last one, the Series 4. The customer reception was so good overall that I decided to give this another go, and only after a few hours of prototyping, I could see that this would now work and feel great. I did buy a Watch Series 3 afterwards also to include in my testing during development.

The post MidiWrist aids instrumentalists by giving Siri and Apple Watch control appeared first on CDM Create Digital Music.

Toontrack releases Pop Punk MIDI pack for EZdrummer 2 & Superior Drummer 3

Toontrack has announced the release of the Pop Punk MIDI pack, a collection of drum MIDI grooves capturing the fundamentals of the pop punk genre. Following the recent Pop Punk EZX by John Feldmann, this new collection comprises over 400 drum grooves and fills, tailored for your yet unwritten pop punk songs. Since classic acts […]

The post Toontrack releases Pop Punk MIDI pack for EZdrummer 2 & Superior Drummer 3 appeared first on rekkerd.org.

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. Because it gives you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Electron (a popular cross-platform JavaScript tool), though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, applying even GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.
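
(If “tensor” sounds mysterious: it’s just an n-dimensional array of numbers, and a neural network layer is mostly tensor math. A toy sketch, assuming Python with numpy:)

```python
import numpy as np

notes = np.array([60, 62, 64, 65])   # 1-D tensor: a melody as MIDI pitches
weights = np.random.randn(4, 8)      # 2-D tensor: one layer's weights

# A neural network layer is essentially this: multiply tensors, then
# apply a nonlinearity (here, ReLU).
activation = np.maximum(0.0, notes @ weights)
print(activation.shape)              # (8,)
```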

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

That also has its own Ableton Live device, from a couple years back.

https://magenta.tensorflow.org/nsynth-instrument

NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, as they’re something a bit unique – and you can again play around in Ableton Live.

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that algorithmically loops over and over. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly means that you can predict sequences more and more effectively given a particular data set.

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get is based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.
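
To make that concrete, here’s a toy sketch of the idea – train a tiny next-note model on a couple of melodies, then sample a continuation. (This assumes TensorFlow/Keras and a hypothetical two-melody “data set”; Magenta’s actual models are far more elaborate.)

```python
import numpy as np
import tensorflow as tf

# Stand-in training data: melodies as sequences of MIDI note numbers.
melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62],
    [67, 65, 64, 62, 60, 62, 64, 65],
]

# Learn to predict each next note from the notes before it.
X = np.array([m[:-1] for m in melodies])
y = np.array([m[1:] for m in melodies])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=128, output_dim=16),  # 128 MIDI pitches
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.Dense(128, activation="softmax"),         # next-note distribution
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=200, verbose=0)

# Sample a continuation: feed a seed, repeatedly draw from the predicted
# distribution (this is where a "temperature" control would hook in).
seed = [60, 62, 64]
for _ in range(8):
    probs = model.predict(np.array([seed]), verbose=0)[0, -1].astype("float64")
    seed.append(int(np.random.choice(128, p=probs / probs.sum())))
print(seed)
```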

One reason that it’s cool that Magenta and Magenta Studio are open source is that you’re totally free to dig in and train your own models on your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence they chose this new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and the length in bars.
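
For the curious, “temperature” in this sense rescales the model’s output probabilities before sampling. A minimal sketch, assuming Python with numpy (Magenta’s implementation differs in detail):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    # temperature < 1.0 sharpens the distribution (more predictable);
    # temperature > 1.0 flattens it (more surprising).
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

scores = [2.0, 1.0, 0.5, 0.1]  # model scores for four candidate next notes
print(sample_with_temperature(scores, temperature=0.5))  # usually picks note 0
print(sample_with_temperature(scores, temperature=2.0))  # more varied picks
```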

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is already set up with expectations about what a drum kit is, and about melodies played on a 12-tone equal-tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them. (See the code sketch after these descriptions.)

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and it immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
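
If you’d rather script these ideas than use the Live devices, the same models are exposed in the open source magenta Python package. A rough sketch of Interpolate, for instance (assuming `pip install magenta` and a downloaded MusicVAE checkpoint – file paths here are hypothetical):

```python
import note_seq
from magenta.models.music_vae import TrainedModel, configs

# 2-bar melody model; the checkpoint must be downloaded separately.
config = configs.CONFIG_MAP["cat-mel_2bar_big"]
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path="cat-mel_2bar_big.ckpt")

start = note_seq.midi_file_to_note_sequence("clip_a.mid")
end = note_seq.midi_file_to_note_sequence("clip_b.mid")

# Morph from clip A to clip B in four steps, as the Interpolate device does.
for i, seq in enumerate(model.interpolate(start, end, num_steps=4, length=32)):
    note_seq.sequence_proto_to_midi_file(seq, f"interp_{i}.mid")
```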

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

Where this could go next

There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with the history of pure technical inquiries past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.

As Jesse Engel tells CDM:

We’re a research group (not a Google product group), which means that Magenta Studio is not static – a lot more interesting models are probably on the way.

Things like more impressive MIDI generation (https://magenta.tensorflow.org/music-transformer – check out “Score Conditioned”)

And state of the art transcription: (https://magenta.tensorflow.org/onsets-frames)

And new controller paradigms:
(https://magenta.tensorflow.org/pianogenie)

Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.

So okay, music makers – have at it:

g.co/magenta
g.co/magenta/studio

The post Magenta Studio lets you use AI tools for inspiration in Ableton Live appeared first on CDM Create Digital Music.

Midi Madness 3 algorithmic melody generator on sale for $59.95 USD!

Plugin Boutique has announced an exclusive sale on the Midi Madness 3 world-class MIDI generator software for Windows and Mac. Midi Madness 3 can create an unlimited number of melodies using a simple set of probability weightings. Simply set some parameters, such as a chord sequence and some MIDI controllers, and let Midi Madness go […]
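
The core concept – a weighted random choice over a pool of allowed notes – is simple enough to sketch. A toy illustration (not Midi Madness’s actual algorithm), assuming Python:

```python
import random

# Weighted note pool: favor the chord tones of C major (60, 64, 67).
weights = {60: 4, 62: 1, 64: 3, 65: 1, 67: 3, 69: 1, 71: 1}

melody = random.choices(list(weights), weights=list(weights.values()), k=8)
print(melody)  # e.g. [60, 64, 67, 60, 62, 67, 64, 60]
```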

The post Midi Madness 3 algorithmic melody generator on sale for $59.95 USD! appeared first on rekkerd.org.

Two twisted desktop grooveboxes: hapiNES L, Acid8 MKIII

Now the Nintendo NES inspires a new groovebox, with the desktop hapiNES. And not to be outdone, Twisted Electrons’ acid line is back with a MKIII model, too.

Twisted Electrons have been making acid- and chip music-flavored groovemakers of various sorts. That started with enclosed desktop boxes like the Acid8. But lately, we’ve gotten some tiny models on exposed circuit boards, inspired by the Pocket Operator line from Teenage Engineering (and combining well with those Swedish devices, too).

Well, if you liked that Nintendo-flavored chip music sound but longed for a finished case and finger-friendly proper knobs and buttons, you’re in luck. The hapiNES L is here in preorder now, and shipping next month. It’s a groovebox with a 303-style sequencer and tons of parameter controls, but with a sound engine inspired by the RP2A07 chip.

“RP2A07” is not something that likely brings you back to your childhood (uh, unless you spent your childhood on a Famicom assembly line in Japan for some reason – very cool). Think back to the Nintendo Entertainment System and that unique, strident sound from the video games of the era – here with controls you can sequence and tweak rather than having to hard-code.

You get a huge range of features here:

Hardware MIDI input (sync, notes and parameter modulation)
Analog trigger sync in and out
USB-MIDI input (sync, notes and parameter modulation)
Dedicated VST/AU plugin for full DAW integration
4 tracks for real-time composing
Authentic triangle bass
2 squares with variable pulsewidth
59 synthesized preset drum sounds + 1 self-evolving drum sound
16 arpeggiator modes with variable speed
Vibrato with variable depth and speed
18 Buttons
32 LEDs
6 high-quality potentiometers
16-pattern memory
3 levels of LED brightness (Beach, Studio, Club)
Live recording, key change and pattern chaining (up to 16 patterns/ 256 steps)
Pattern copy/pasting
Ratcheting (up to 4 hits per step)
Reset on any step (1-16 step patterns)

If you want to revisit the bare board version, here you go:

255 EUR before VAT.

https://twisted-electrons.com/product/hapines-l/

Okay, so that’s all well and good. But if you want an original 8-bit synth, the Acid8 is still worth a look. It’s got plenty of sound features all its own, and the MKIII release loads in a ton of new digital goodies – very possibly enough to break the Nintendo spell and woo you away from the NES device.

In the MKIII, there’s a new digital filter, new real-time effects (transposition automation, filter wobble, stutter, vinyl spin-down, and more), and dual oscillators.

Dual oscillators alone are interesting, and the digital filter gives this some of the edge you presumably crave if drawn to this device.

And if you are upgrading from the baby uAcid8 board, you add hardware MIDI, analog sync in and out, and of course proper controls and a metal case.

Specs:

USB-MIDI input (sync, notes and parameter modulation)
Hardware MIDI input (sync, notes and parameter modulation)
Analog sync trigger input and output
Dedicated VST/AU plugin for full DAW integration
18 Buttons
32 LEDs
6 high-quality potentiometers
Arp Fx with variable depth and decay time
Filter Wobble with variable speed and depth
Crush Fx with variable depth
Pattern Copy/Pasting
Variable VCA decay (note length)
Tap tempo, variable Swing
Patterns can reset at any step (1-16 step pattern lengths)
Variable pulse-width (for square waveforms)
12 sounds: Square, Saw and Triangle each in 4 flavors (Normal, Distorted, Fat/Detuned, Harmonized/Techno).
3 levels of LED brightness (Beach, Studio, Club)
Live recording, key change and pattern chaining

Again, we have just the video of the board, but it gives you the idea. Quite clever, really, putting out these devices first as the inexpensive bare boards and then offering the full desktop releases.

More; also shipping next month with preorders now:

https://twisted-electrons.com/product/acid8-mkiii/

The post Two twisted desktop grooveboxes: hapiNES L, Acid8 MKIII appeared first on CDM Create Digital Music.