This plug-in is a secret weapon for sound design and drums

It’s full of gun sounds. But thanks to a unique sample architecture and engine, plus a whole lot of original assets, the Weaponiser plug-in becomes a weapon of a different kind. It helps you make drum sounds.

Call me a devoted pacifist, call me a wimp – really, either way. Guns actually make me uncomfortable, at least in real life. Of course, we have an entirely separate industry of violent fantasy. And to a sound designer for games or soundtracks, Weaponiser’s benefits should be obvious and dazzling.

But I wanted to take a different angle, and imagine this plug-in as a sort of swords into plowshares project. And it’s not a stretch of the imagination. What better way to create impacts and transients than … well, fire off a whole bunch of artillery at stuff and record the result? With that in mind, I delved deep into Weaponiser. And as a sound instrument, it’s something special.

Like all advanced sound libraries these days, Weaponiser is both an enormous library of sounds, and a powerful bespoke sound engine in which those sounds reside. The Edinburgh-based developers undertook an enormous engineering effort here both to capture field recordings and to build their own engine.

It’s not even all about weapons here, despite the name. There are sound elements unrelated to weapons – there’s even an electronic drum kit. And the underlying architecture combines synthesis components and a multi-effects engine, so it’s not limited to playing back the weapon sounds.

What pulls Weaponiser together, then, is an approach to weapon sounds as a modularized set of components. The top set of tabs is divided into ONSET, BODY, THUMP, and TAIL – which turns out to be a compelling way to conceptualize hard-hitting percussion, generally. We often use vaguely gunshot-related metaphors when talking about percussive sounds, but here, literally, that opens up some possibilities. You “fire” a drum sound, or choose “burst” mode (think automatic and semi-automatic weapons) with an adjustable rate.
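
To make the fire/burst idea concrete, here’s a minimal sketch of what that triggering might look like in code. This is my own illustration, not Krotos’ engine; the layer names simply mirror the four tabs, and the rate parameter stands in for the adjustable burst rate.

```python
# Hypothetical layer names, mirroring Weaponiser's four tabs.
LAYERS = ("onset", "body", "thump", "tail")

def fire(time_s):
    """Schedule one layered hit: every layer starts at the same moment."""
    return [(time_s, layer) for layer in LAYERS]

def burst(start_s, shots=4, rate_hz=10.0):
    """Schedule a burst: the same hit repeated at an adjustable rate, like automatic fire."""
    events = []
    for i in range(shots):
        events += fire(start_s + i / rate_hz)
    return events

if __name__ == "__main__":
    for t, layer in burst(0.0, shots=3, rate_hz=8.0):
        print(f"{t:.3f}s  trigger {layer}")
```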

This sample-based section is then routed into a mixer with multi-effects capabilities.

In music production, we’ve grown accustomed to repetitive samples – a Roland TR clap or rimshot that sounds the same every single time. In foley or game sound design, of course, that’s generally a no-no; our ears quickly detect that something is amiss, since real-world sound never repeats that way. So the Krotos engine is replete with variability, multi-sampling, and synthesis. Applied to musical applications, those same characteristics produce a more organic, natural sound, even if the subject has become entirely artificial.
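
The anti-repetition idea is easy to picture in code. The sketch below is only a conceptual illustration – the sample filenames and ranges are made up, and it is not how Krotos actually implement it: cycle through recorded variants and nudge gain, pitch, and timing slightly so no two hits are identical.

```python
import random

class VariedHit:
    """Round-robin sample choice plus small random offsets per trigger."""

    def __init__(self, sample_paths):
        self.samples = list(sample_paths)
        self.index = 0

    def trigger(self):
        # Cycle through the variants instead of repeating one file.
        path = self.samples[self.index]
        self.index = (self.index + 1) % len(self.samples)
        return {
            "sample": path,
            "gain_db": random.uniform(-1.5, 0.0),        # subtle level variation
            "pitch_semitones": random.uniform(-0.3, 0.3),
            "offset_ms": random.uniform(0.0, 4.0),       # micro-timing drift
        }

hit = VariedHit(["snare_01.wav", "snare_02.wav", "snare_03.wav"])
for _ in range(4):
    print(hit.trigger())
```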

Weaponiser architecture

Let’s have a look at those components in turn.

Gun sounds. This is still, of course, the main attraction. Krotos have field recordings of a range of weapons:

AK 47
Beretta 92
Dragunov
GPMG
SPAS 12
CZ75
H&K 416
M 16
M4 (suppressed)
MAC 10
FN MINIMI
H&K MP5
Winchester 1887

For those of you who don’t know gun details, that amounts to pistol, rifle, automatic, semiautomatic, and submachine gun (SMG). These are divided up into samples by the onset/body/thump/tail architecture I’ve already described, plus there are lots of details based on shooting scenario. There are bursts and single fires, sniper shots from a distance, and the like. But maybe most interesting actually are all the sounds around guns – cocking and reloading vintage mechanical weapons, or the sound of bullets impacting bricks or concrete. (Bricks sound different than concrete, in fact.) There are bullets whizzing by.

And that’s just the real weapons. There’s an entire bank devoted to science fiction weapons, and these are entirely speculative. (Try shooting someone with a laser; it … doesn’t really work the way it does in the movies and TV.) Those presets get interesting, too, because they’re rooted in reality. There’s a Beretta fired interdimensionally, for example, and the laser shotguns, while they defy present physics and engineering, still have reloading variants.

In short, these Scottish sound designers spent a lot of time at the shooting range, and then a whole lot more time chained to their desks working with the sampler.

Things that aren’t gun sounds. I didn’t expect to find so many sounds in the non-gun variety, however. There are twenty dedicated kits, which tend toward a sort of IDM / electro crossover, just building drum sounds on this engine. There are a couple of gems in there, too – enough so that I could imagine Krotos following up this package with a selection of drum production tools built on the Weaponiser engine but having nothing to do with bullets or artillery.

Until that happens, you can think of that as a teaser for what the engine can do if you spend time building your own presets. And to that end, you have some other tools:

Variations for each parameter randomize settings to avoid repetition.

Four engines, each polyphonic with its own set of samples, combine. But the same features that give you different triggering/burst modes for guns prove useful for percussion. And yes, there’s a “drunk” mode.

A deep multi-effects section with mixing and routing serves up still more options.

Four engines, synthesis. Onset, Body, Thump, and Tail each have associated synthesis engines. Onset and Body are specialized FM synthesizers. Thump is essentially a bass synth. Tail is a convolution reverb – but even that is a bit deeper than it may sound. Tail provides both audio playback and spatialization controls. It might use a recorded tail, or it might trigger an impulse response.
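
As a rough illustration of how those four roles differ, here’s a toy rendering in numpy: two-operator FM for the onset and body, a swept decaying sine for the thump, and convolution with a decaying-noise impulse response standing in for the tail. All of the frequencies, ratios, and decay values are arbitrary choices of mine, not Krotos parameters.

```python
import numpy as np

SR = 44100

def env(n, decay):
    """Exponential decay envelope over n samples."""
    return np.exp(-decay * np.arange(n) / SR)

def fm_layer(freq, ratio, index, dur, decay):
    """Two-operator FM: a modulator sine phase-modulates a carrier sine."""
    t = np.arange(int(SR * dur)) / SR
    e = env(len(t), decay)
    mod = np.sin(2 * np.pi * freq * ratio * t) * index * e
    return np.sin(2 * np.pi * freq * t + mod) * e

def thump(freq=55.0, dur=0.4):
    """Bass-synth style layer: a decaying sine with a fast downward pitch sweep."""
    t = np.arange(int(SR * dur)) / SR
    sweep = freq * (1 + 2 * np.exp(-30 * t))
    phase = 2 * np.pi * np.cumsum(sweep) / SR
    return np.sin(phase) * env(len(t), 12)

def tail(dry, ir_dur=0.8):
    """Convolution 'tail': convolve the hit with a decaying-noise impulse response."""
    n = int(SR * ir_dur)
    ir = np.random.randn(n) * env(n, 8)
    return np.convolve(dry, ir)

onset = fm_layer(2000.0, ratio=3.7, index=6.0, dur=0.05, decay=80)  # bright click
body = fm_layer(300.0, ratio=1.4, index=3.0, dur=0.25, decay=20)
hit = np.zeros(SR)
for layer in (onset, body, thump()):
    hit[: len(layer)] += layer
wet = tail(hit)
out = 0.3 * wet
out[: len(hit)] += hit  # dry hit plus a reverb-style tail
```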

Also, the way samples are played here is polyphonic. Add more samples to a particular engine, and you will trigger different variants, not simply keep re-triggering the same sounds over and over again. That’s the norm for more advanced percussion samplers, but lately electronic drum engines have tended to dumb that down. And – there’s a built-in timeline with adjustable micro-timings, which is something I’ve never seen in a percussion synth/sampler.

The synth bits have their own parameters, as well, and FM and Amplitude Modulation modes. You can customize carriers and modulators. And you can dive into sample settings, including making radical changes to start and end points, envelope, and speed.

Effects and mixing. Those four polyphonic engines are mixed together in a four-part mix engine, with multi-effects that can be routed in various ways. Then you can apply EQ, Compression, Limiting, Saturation, Ring Modulation, Flanging, Transient Shaping, and Noise Gating.
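
In signal-flow terms, that part is straightforward: four streams, each with its own chain of processors, summed into a master chain. Here’s a bare-bones sketch of that routing, with toy stand-ins for the effects – nothing like the plug-in’s actual DSP, just the shape of the signal flow.

```python
def chain(signal, effects):
    """Run a signal through an ordered list of effect callables."""
    for fx in effects:
        signal = fx(signal)
    return signal

def mix(channels, master_fx=()):
    """channels: list of (signal, gain, effects). Sum them, then apply the master chain."""
    length = max(len(sig) for sig, _, _ in channels)
    bus = [0.0] * length
    for sig, gain, effects in channels:
        wet = chain(list(sig), effects)
        for i, x in enumerate(wet):
            bus[i] += gain * x
    return chain(bus, master_fx)

# Toy 'effects' standing in for saturation and noise gating.
saturate = lambda s: [max(-1.0, min(1.0, x * 1.5)) for x in s]
gate = lambda s: [x if abs(x) > 0.05 else 0.0 for x in s]

out = mix(
    [([0.2, 0.9, -0.4], 1.0, [saturate]),  # e.g. the onset engine
     ([0.1, 0.1, 0.8], 0.7, [gate])],      # e.g. the body engine
    master_fx=[saturate],
)
print(out)
```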

Oh, you can also use this entire effects engine to process sounds from your DAW, making this a multi-effects engine as well as an instrument.

Is your head spinning yet?

About the sounds

Depending on which edition you grab, from the limited selection of the free 10-day demo up to the “fully loaded” edition, you’ll get as many as 2228 assets, with 1596 edited weapon recordings. There are also 692 “sweeteners” – a grab bag of still more sounds, from synths to a black leopard (the furry feline, really), and the sound recordists messing around with their recording rig, keys, earth, a bicycle belt… you get the idea. There are also various impulse responses for the convolution reverb engine, allowing you to place your sound in different rooms, stairwells, and synthetic reverbs.

The recording chain itself is worth a look. There are the expected mid/side and stereo recordings, classic Neumann and Sennheiser mics, and heavy use of mics from the Danish maker DPA – including mics positioned directly on the guns in some recordings. But they’ve also included recordings made with the Sennheiser Ambeo VR Mic for 360-degree, virtual reality sound.

They’ve shared some behind-the-scenes shots with CDM, and there’s a short video explaining the process.

In use, for music

Some of the presets are realistic enough that they really did make me uncomfortable at first when working with these sounds in a music project – but that was sort of my aim. What I found compelling is that, because of this synth engine, I was quickly able to transform those sounds into new, organic, even unrecognizable variations.

There are a number of strategies here that make this really interesting.

You can mess with samples. Adjusting speed and other parameters, as with any samples, of course gives you organic, complex new sounds.

There’s the synthesis engine. Working with the synth options either to reprocess the sounds or on their own allows you to treat Weaponiser basically as a drum synth.

The variations make this sound like acoustic percussion. With subtle or major variations, you can produce sound that’s less repetitive than electronic drums would be.

Mix and match. And, of course, you have presets to warp and combine, the ability to meld synthetic sounds and gun sounds, to sweeten conventional percussion with those additions (synths and guns and leopard sounds)… the mind reels.

Routing, of course, is vital, too; here’s their look at that:

In fact, there’s so much that I could almost go on a separate tangent just working with this musically. I may yet do that, but here’s a teaser of what’s possible – starting with the obvious:

But I’m still getting lost in the potential here, reversing sounds, trying the drum kits, working with the synth and effects engines.

The plug-in can get heavy on CPU with all of that going on, obviously, but it’s also possible to render out layers or whole sounds, useful both in production and foley/sound design. Really, my main complaint is the tiny, complex UI, which can mean it takes some time to get the hang of working with everything. But as a sound tool, it’s pretty extraordinary. And you don’t need to have firing shotguns in all your productions – you can add some subtle sweetening, or additional layers and punch to percussion without anyone knowing they’re hearing the Krotos team messing with bike chains and bullets hitting bricks and an imaginary space laser.

Weaponiser runs on Mac or PC, 64-bit only, in VST, AU, and AAX formats. You’ll need about five and a half gigs of space free. Basic, which is already pretty vast, runs $399 / £259 / €337. Fully Loaded is over twice that size, and costs $599 / £379 / €494.

https://www.krotosaudio.com/weaponiser/


FeelYourSound updates Sundog Song Studio with Chord FX & more

FeelYourSound has released version 3.3.0 of Sundog Song Studio, the electronic song-writing solution for Windows and Mac. With Sundog it is possible to develop new chord progressions, melodies, basslines, and arpeggios within minutes. The standalone composition software connects to any DAW via MIDI. Sundog 3.3 includes a new feature called “Chord FX”. Chord FX can […]

Output’s Arcade is a cloud-based loop library you play as an instrument

The future of soundware is clearly on-demand, crafted sounds from the cloud. Output adds a twist: don’t just give you new sounds, but give you a way to play them and make them your own.

So, the latest product from the LA-based sound boutique Output is called “Arcade” – so play, get it?

And it’s an early entry and fresh take on an area that’s set to heat up fast. To get things rolling here, your first 100 days are completely free; then you pay a subscription of $10 a month (which you can cancel whenever you want – and you don’t even lose access to your sounds).

As the number of producers grows, and as genres and niches spill over and transform at Internet speed, the diversity of music people want to make grows, too – so delivering sound and inspiration to music makers looks like a major opportunity. We’re seeing subscription-based models (Native Instruments’ Sounds.com, Splice) and à la carte models (Loopmasters). And we’re seeing different ideas about how to organize releases (around genre, producer, sound house, or more curated selections around a theme), plus how to integrate tools for users.

Here’s where Arcade is interesting. It’s really a single, integrated instrument. And its goal is to find you exactly the sound you need, right away, easily — but also to give you the ability to completely transform that sound and make it your own, even loading your own found samples.

That’s important, because it bridges the divide between loops as a way of employing someone else’s content, and sound sampling as a DIY, personal affair, with a spectrum in between.

I suspect a lot of you reading have been all over that spectrum. Let’s consider even the serious, well-paid producer. You’ve got a tight scoring deadline, and the job needs a really particular sound, and you’re literally counting the minutes and sweating. Or… you’ve got a TV ad spot, and you need to make something sound completely original, and not like any particular preset you’ve heard before.

This also really isn’t about beginners or advanced users. An advanced user may have a very precise sound in mind – even to sit atop some meticulously hand-crafted sounds. And one of the first things a lot of beginners like to do is mess around with samples they record with a mic. (How many kids made noises into a Casio SK-1 in the 80s?)

I got to sit down with Output CEO and founder Gregg Lehrman, and we took a deep look at Arcade and had a long talk about what it’s about and what the future of music making might be. We’ll look more in detail at how you can use this as an instrument, but let’s cover what this is.

Walkthrough:

Choose your DAW – here’s Arcade running inside Ableton.

It’s a plug-in. This is important. You’ll always be interacting with Arcade as a plug-in, inside your host of choice – so no shuttling back and forth to a Website, as some solutions currently make you do. Omni-format – Mac AU / VST / VST3 / AAX, Windows VST / VST3 / AAX 32- and 64-bit. (Native Linux support would be nice, actually, but that’s missing for now.)

Sounds can match your tempo and key. You can hear sounds in their original form, or conform to the tempo and pitch that matches your current project. (Loopmasters does this too, actually, but via a separate app combined with a plug-in, which is a bit clunky.)
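
The plug-in handles this conforming for you, but the arithmetic behind it is simple enough to sketch: a time-stretch ratio from the two tempos, and a pitch shift in semitones from the two keys. The function names here are just for illustration, not anything from Output’s documentation.

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def stretch_ratio(loop_bpm, project_bpm):
    """How much faster (>1) or slower (<1) the loop must play to match the project."""
    return project_bpm / loop_bpm

def key_shift(loop_key, project_key):
    """Smallest shift in semitones from the loop's root to the project's root."""
    diff = (NOTE_NAMES.index(project_key) - NOTE_NAMES.index(loop_key)) % 12
    return diff - 12 if diff > 6 else diff  # prefer the nearer direction

print(stretch_ratio(120, 128))  # ~1.067: the loop plays about 6.7% faster
print(key_shift("F", "C"))      # -5: shift down five semitones rather than up seven
```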

Browse through curated collections of sounds, which are paid for by subscription, Spotify/Netflix-style.

It lets you find sounds online. On-demand cloud browsing lets you check out selections of sounds, complete kits, and loops. You can preview all of these right away. Now, Netflix-style, Output promises new stuff every day, so you can browse around for something to inspire you if you’re feeling stuck. And at least in the test, these were organized with a sense of style and character – more like looking at the output of a music label than the usual commodity catalog of samples.

Search, browse, tagging, and the usual organizing tools are there, too – but it’s probably the preview and curation that puts this over the top.

— but it works if you’re offline, too. Prefer to switch the Internet off in your studio to avoid distractions? Work in an underground bunker, or in the hollowed out volcano island you use as an evil lair? Happily, you don’t need an Internet connection to work.

The keyboard (or whatever MIDI controller you’ve mapped) triggers loops, but also manipulates them on the fly. That lets you radically transform samples as you play – including your own sounds.

You can play the loops as an instrument. Okay, so the whole reason we went into music presumably is that we love the process of making music. Output excels here by letting you load loops into a 15-voice synth, then mangle and warp and modify the sound. It works really well from a keyboard or other MIDI controller.

This isn’t a sample playback instrument in the traditional sense, in terms of how it maps to pitch. Instead, white notes trigger samples, and black notes trigger modifiers. That’s actually really crazy to play around with, because it feels a little like you’re doing production work – the usual mouse-based chores of editing and modifying samples – as you play along, live.
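
The white/black split is easy to model. The sketch below only illustrates the idea described above – the slot names and the way Arcade actually assigns samples and modifiers to keys are my assumptions, not Output’s documented behavior.

```python
# Pitch classes of the black keys in any octave (C#, D#, F#, G#, A#).
BLACK_PITCH_CLASSES = {1, 3, 6, 8, 10}

def role(midi_note):
    """White keys trigger samples; black keys trigger modifiers."""
    return "modifier" if midi_note % 12 in BLACK_PITCH_CLASSES else "sample"

def handle_note(midi_note, samples, modifiers):
    """Map an incoming note to a hypothetical sample slot or modifier slot."""
    if role(midi_note) == "sample":
        white_index = sum(1 for n in range(midi_note) if n % 12 not in BLACK_PITCH_CLASSES)
        return ("trigger", samples[white_index % len(samples)])
    black_index = sum(1 for n in range(midi_note) if n % 12 in BLACK_PITCH_CLASSES)
    return ("apply", modifiers[black_index % len(modifiers)])

print(handle_note(60, ["loop_a", "loop_b"], ["repeater", "playhead"]))  # C4  -> sample
print(handle_note(61, ["loop_a", "loop_b"], ["repeater", "playhead"]))  # C#4 -> modifier
```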

There’s also input quantize, in case your keyboard skills aren’t so precise.

There are tons of modifiers and modulation and effects. Like all of Output’s products, the recipe is, build a deep architecture, then encapsulate it in an easy interface. That way, you can mess around with a simple control and make massive changes, which gets you discovering possibilities fast, but also go further into the structure if you want to get more specific about your needs, and if you’re willing to invest more time.

In this case, Arcade is built around four main sliders that control the character of the sound, both subtle and radical, and then another eleven effects and a deep mixing, routing, and modulation engine underneath.

So, let’s get into specifics.

Each Loop Voice – up to 15 of them – has a whole bunch of controls. It really would be fair to call this a synth:

• Multimode Filter with Resonance and Gradual/Steep Curve
• Volume, Pan, Attack/Release and Loop Crossfade
• Speed Control (1/4, 1/2, x1, x2)
• Tune Control (+/- 24)
• Loop Playback (Reverse, Pendulum, Loop On/Off, Hold)
• FX Sends Pre/Post x2
• Modifier Block
• Sync On/Off

Loop editing.

There’s also a time/pitch stretch engine with both “efficient” and resource-intensive “high quality” modes:

• BPM & Time signature Control
• Key Signature control
• Formant Pitch Control

Since the point is playing, you can map to velocity sensitivity, too, so how hard you hit the keys affects filter cutoff and resonance, volume, and formant.

But plenty of tools can do all of the above. It’s the modifiers that get interesting – little macros that are accessible as you play:

• ReSequence (16 steps with Volume, Mute, Reverse, Speed, Length and Attack/Decay control per step)
• Playhead (Speed, Reverse, Loop On/Off, Cue Point per Loop)
• Repeater (Note Repeat with Rate, Reverse, Volume Step Sequencer)
• Session Key Control

Plus there’s a Resequencer for sequencing sound slices into new combinations.

The Resequencer gives you even more options for manipulating audio content and turning it into something new.

– combined with modulation:

• LFO/Step (x2) Sync/Free mode with Retrigger and Rate.
• Waveshape Control
• Attack, Phase, Chaos and Polarity Control

Deep modulation options either power presets – or your own sound creations, if you’re ready to tinker.

And there’s a complete mixer:

• 15 Channel Mixer with Volume, Pan, Pre/Post Send FX(x2), Solo
• Send Bus (x2) with Volume, Pan and Mute
• 2 insert FX slots per Bus
• Master Bus with Volume, Pan, Mute and 4 Insert FX slots

The Mixer combines up to 16 parts.

Plus a whole mess of effects. Those effects helped define the character of earlier Output instruments, so it’s great to see here:

• Chorus
• Compressor
• MultiTap Delay
• Stereo Delay
• Distortion Box
• Equalizer
• Filter
• Limiter
• LoFi
• Phaser
• Reverb

It wouldn’t be an Output product without some serious multi-effects options.

But while Output likes to pitch itself as the “secret sauce” behind everything from Kendrick Lamar to the soundtracks for Black Panther and Stranger Things, what I absolutely adore is that you can load your own samples.

Native Instruments has built a great ecosystem around their tools, including Kontakt – and Output have made use of that engine. But it’s great to see this ground-up creation that introduces some different paradigms around what to do with sampled sound. That instant access to playing – to tapping into your muscle memory, your improvisation skills – I think could be really transformative. We’ve seen artists like Tim Exile advocate this approach in how he works, and it’s an element in a lot of great improvisers’ electronic work. What Output have done is allow you to combine sound discovery with instant playability.

The subscription model means you don’t have to reach for your credit card when you find sounds you want. But if you cancel the $10 a month subscription, unlike a Spotify or Netflix, you don’t lose access to your sounds. Output says:

If you open an older session, you will be prompted to log in, and you will not be able to click past the log in screen. You will be able to play back any MIDI or automation data recorded in your saved session. It will sound exactly the same, but you won’t be able to browse or tweak the character of the sound within the plugin. The midi can still be changed as the preset stays loaded in a session as long as you don’t uninstall Arcade which will remove all the audio samples. The best way to see what I mean is to test it yourself. Put ARCADE into a midi track, then log out of the plug-in. With the GUI still open albeit stuck on the log-in screen, play your track and hit some keys.

The fact that it’s all powered by subscription also means you’ll always have something there to use. But I do hope for the sake of sound creators – and because this engine is so cool – that Output also consider à la carte purchasing of some sound selections. That could support more intensive sound design processes. And the interface looks like it’d work well as a shop, too. I share some of the concerns of sound artists that subscription models could hurt sound design the way they have hurt music downloading. And – on the other hand, to use downloading as an example, a lot of us have both a subscription and buy a ton of stuff from Bandcamp, including underground music.

Let us know what you think.

I’ll be back with a guide to how to load your own sounds and play this as an instrument / design and modify sounds in a more sophisticated way.

More:

https://output.com/arcade


A look at AI’s strange and dystopian future for art, music, and society

Machine learning and new technologies could unlock new frontiers of human creativity – or they could take humans out of the loop, ushering in a new nightmare of corporate control. Or both.

Machine learning, the field of applying neural networks to data analysis, unites a range of issues from technological to societal. And audio and music are very much at the center of the transformative effects of these technologies. Commonly dubbed (partly inaccurately) “artificial intelligence,” they suggest a relationship between humans and machines, individuals and larger state and corporate structures, far beyond what has existed traditionally. And that change has gone from far-off science fiction to a reality that’s very present in our homes, our lives, and of course the smartphones in our pockets.

I had the chance to co-curate with CTM Festival a day of inputs from a range of thinkers and artist/curators earlier this year. Working with my co-host, artist and researcher Ioann Maria, we packed a day full of ideas and futures both enticing and terrifying. We’ve got that full afternoon, even including audience discussion, online for you to soak in.

Me, with Moritz, pondering the future. Photo: CTM Festival / Isla Kriss.

And there are tons of surprises. There are various terrifying dystopias, with some well-reasoned arguments for why they might actually come to fruition (or evidence demonstrating these scenarios are already in progress). There are more hopeful visions of how to get ethics, and humans, back in the loop. There are surveys of artistic responses.

All of this kicked off our MusicMakers Hacklab at CTM Festival, which set a group of invited artists on collaborative, improvisatory explorations of these same technologies as applied to performance.

These imaginative and speculative possibilities become not just idle thoughts, but entertaining and necessary explorations of what might be soon. This is the Ghost of Christmas Yet-to-Come, if a whole lot more fun to watch, here not just to scare us, but to spur us into action and invention.

Let’s have a look at our four speakers.

Machine learning and neural networks

Moritz Simon Geist: speculative futures

Who he is: Moritz is an artist and researcher; he joined us for my first-ever event for CTM Festival with a giant robotic 808, but he’s just as adept at researching history and the future.

Topics: Futurism, speculation, machine learning and its impact on music, body enhancement and drugs

Takeaways: Moritz gives a strong introduction to style transfer and other machine learning techniques, then jumps into speculating on where these could go in the future.

In this future, remixes and styles and timbres might all become separate from a more fluid creativity – but that might, in turn, dissolve artistic value.

“In the future … music will not be conceived as an art form any more.” – Moritz Simon Geist

Then, Moritz goes somewhere else entirely – dreaming up speculative drugs that could transform humans, rather than only machines. (The historical basis for this line of thought: Alexander Shulgin and his drug notebooks, which might even propose a drug that transforms perception of pitch.)

Moritz imagines an “UNSTYLE” plug-in that can extract vocals – then change genre.

What if self-transformation – or even fame – were in a pill?

Gene Cogan: future dystopias

Who he is: An artist/technologist who works with generative systems and their overlap with creativity and expression. Don’t miss Gene’s expansive open source resource for code and learning, machine learning for artists.

Topics: Instrument creation, machine learning – and eventually AI’s ability to generate its own music

Takeaways: Gene’s talk began with “automation of songwriting, production, and curation” as a topic – but tilted enough toward dystopia that he changed the title.

“This is probably going to be the most depressing talk.”

In a more hopeful vision, he presented the latest work of Snyderphonics – instruments that train themselves as musicians play, rather than only the other way around.

He turned to his own work in generative models and artistic works like his Donald Trump “meat puppet,” but presented a scary image of what would happen if eventually analytic and generative machine learning models combined, producing music without human involvement:

“We’re nowhere near anything like this happening. But it’s worth asking now, if this technology comes to fruition, what does that mean about musicians? What is the future of musicians if algorithms can generate all the music we need?”

References: GRUV, a generative model for producing music

WaveNet, the DeepMind tech being used by Google for audio

Sander Dieleman’s content-based recommendations for music

Gene presents – the death of the human musician.

Wesley Goatley: machine capitalism, dark systems

Who he is: A sound artist and researcher in “critical data aesthetics,” plumbing the meaning of data from London in his own work and as a media theorist

Topics: Capitalism, machines, aesthetics, Amazon Echo … and what they may all be doing to our own agency and freedom

Takeaways: Wesley began with “capitalism at machine-to-machine speeds,” then traced how this informs systems that, hidden away from criticism, can enforce bias and power. In particular, he pitted claims like “it’s not minority report – it’s science; it’s math!” against the realities of how these systems were built – by whom, for whom, and with what reason.

“You are not working them; they are working you.”

As companies like Amazon and Google extend control, under the banner of words like “smart” and “ecosystem,” Wesley argues, what they’re really building is “dark systems”:

“We can’t get access or critique; they’re made in places that resemble prisons.”

The issue then becomes signal-to-noise. Data isn’t really ever neutral, so the position of power lets a small group of people set an agenda:

“[It] isn’t a constant; it’s really about power and space.”

Wesley on dark connectionism, from economics to design. Photo: CTM Festival / Isla Kriss.

Deconstructing an Amazon Echo – and data and AI as echo chamber. Photo: CTM Festival / Isla Kriss.

What John Cage can teach us: silence is never neutral, and neither is data.

Estela Oliva: digital artists respond

Who she is: Estela is a creative director / curator / digital consultant, an anchor of London’s digital art scene, with work on Alpha-ville Festival, a residency at Somerset House, and her new Clon project.

Topics: Digital art responding to these topics, in hopeful and speculative and critical ways – and a conclusion to the dystopian warnings woven through the afternoon.

Takeaways: Estela grounded the conclusion of our afternoon in a set of examples from across digital arts disciplines and perspectives, showing how AI is seen by artists.

Works shown:

Terence Broad and his autoencoder

Sougwen Chung and Doug, her drawing mate

https://www.bell-labs.com/var/articles/discussion-sougwen-chung-about-human-robotic-collaborations/

Marija Bozinovska Jones and her artistic reimaginings of voice assistants and machine training:

Memo Akten’s work (also featured in the image at top), “you are what you see”

Archillect’s machine-curated feed of artwork

Superflux’s speculative project, “Our Friends Electric”:

OUR FRIENDS ELECTRIC

Estela also found dystopian possibilities – as bias, racism, and sexism are echoed in the automated machines. (Contrast, indeed, the machine-to-machine amplification of those worst characteristics with the more hopeful human-machine artistic collaborations here, perhaps contrasting algorithmic capitalism with individual humanism.)

But she also contrasted that with more emotionally intelligent futures, especially with the richness and dimensions of data sets:

“We need to build algorithms that represent our values better – but I’m just worried that unless we really talk about it more seriously, it’s not going to happen.”

Estela Oliva, framed by Memo Akten’s work. Photo: CTM Festival / Isla Kriss.

It was really a pleasure to put this together. There’s obviously a deep set of topics here, and ones I know we need to continue to cover. Let us know your thoughts – and we’re always glad to share in your research, artwork, and ideas.

Thanks to CTM Festival for hosting us.

https://www.ctm-festival.de/news/


These fanciful new apps weave virtual music worlds in VR and AR

Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).

Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing some goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)

And indeed, we’ve seen this stuff highlighted a lot recently, from game and PC companies talking VR (including via Steam), Facebook showing off Oculus (the Kickstarter-funded project it acquired), and this week Apple making augmented reality a major selling point of its coming iOS releases and developer tools.

But what is this stuff actually for?

That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.

They’ve got two apps now, one for VR, and one for AR.

Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:

Unlike the sound toys we saw just after the release of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning it into a (mobile) venue. So in addition to Matmos, you get creations by the likes of a Ryuichi Sakamoto collaborator and Robert Lippok (of Raster Media, née Raster-Noton).

But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.

The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re also helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper with the music, and take the experience home.

The results can be totally crazy. Here’s one example:

Pitchfork go into some detail as to how this app came about:

Fields Wants to Be The Augmented Reality App for Experimental Music Fans and Creators Alike

More on the app, including a download, on its site:

http://fields.planeta.cc/

And then there’s Drops – a “rhythm garden.”

We’ve seen some clumsy attempts at VR for music before. Generally, they involve rethinking an interface that already works perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” them in a way that … makes them slightly stupid to use.

It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.

And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!

And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:

One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.

VR Visionaries: Planeta

Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.

Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)

Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking about music as an isolated element, and connecting it to architecture and memory.)

We were talking about imagining sound. Sounds from memories, sounds from everyday life, and unheard sounds. Later we started to create sonic events just with words, which we translated into some tracks. “Drawing from Memory” is a sonic interpretation of one of those sound / word pieces. FIELDS now makes it possible to unfold the individual parts of this composition, and at the same time frees it from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.

Check out that whole article, as it’s also a great read:

Launch: Planeta, addressing the future of interface-sound composition

Find the apps:

http://fields.planeta.cc
http://drops.garden

And let us know if you have any questions or comments for the developers, or on this topic in general – or if you’ve got a creation of your own using these technologies.


Ableton’s Creative Extensions are a set of free tools for sound, inspiration

On the surface, Ableton’s new free download today is just a set of sound tools. But Ableton also seem focused on helping you find some inspiration to get ideas going.

Creative Extensions are now a free addition to Live 10. They’re built in Max for Live, so you’ll need either Ableton Live 10 Suite or a copy of Live 10 Standard and Max for Live. (Apparently some of you do fit the latter scenario.)

To find the tools, once you have those prerequisites, you’ll just launch the new Live 10 browser, then click Packs in the sidebar, and Creative Extensions will pop up under “Available Packs” as a download option. Like so:

I’m never without my trusty copy of Sax for Live. The rest I can download here.

Then once you’re there, you get a tool for experimenting with melodies, two virtual analog instruments (a Bass, and a polysynth with modulation and chorus), and effects (two delays, a limiter, an envelope processor, and a “spectral blur” reverb).

Have a look:

Melodic Steps is a note sequencer with lots of options for exploration.

Bass is a virtual analog monosynth, with four oscillators. (Interesting that this is the opposite approach taken by Native Instruments with the one-oscillator bass synth in Maschine.)

Poli is a virtual analog polysynth, basically staking out some more accessible ground versus the AAS-developed Analog already in Live.

Pitch Hack is a delay – here’s where things start to get interesting. You can transpose, reverse audio, randomize transposition interval, and fold the delayed signal back into the effect. If you’ve been waiting for a wild new delay from the company that launched with Grain Delay, this could be it.

Gated Delay is a second delay, combining a gate sequencer and delay. (Logic Pro 10.4 added some similar business via acquired developer Camel, but nice to have this in Live, too.)

Color Limited is modeled on hardware limiters.

Re-enveloper is a three-band, frequency dependent envelope processor. That gives you some more precise control of envelope on a sound – or you could theoretically use this in combination with other effects. Very useful stuff, so this could quietly turn out to be the tool out of this set you use the most.

Spectral Blur is perhaps the most interesting – it creates dense clouds of delays, which produce a unique reverb-style effect (but one distinct from other reverbs).
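
I don’t know how Amazing Noises actually built Spectral Blur, but the general “cloud of delays” idea is worth a quick sketch: sum enough randomly placed, quiet delay taps of a signal and the result stops reading as discrete echoes and starts reading as a diffuse, reverb-like wash.

```python
import random

def delay_cloud(dry, sr=44100, taps=64, max_delay_s=0.6, wet=0.5):
    """Sum many randomly placed, randomly attenuated delayed copies of the input."""
    out = list(dry) + [0.0] * int(sr * max_delay_s)
    for _ in range(taps):
        offset = random.randint(1, int(sr * max_delay_s))
        gain = random.uniform(0.05, 0.3) * wet
        for i, x in enumerate(dry):
            out[i + offset] += gain * x
    return out

# A single click smeared into a quiet cloud of copies.
click = [1.0] + [0.0] * 10
print(len(delay_cloud(click, sr=1000, max_delay_s=0.1)))
```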

And the launch video:

All in all, it’s a nice addition to Ableton you can grab as a free update, and a welcome thank you to Live 10 adopters. I’m going to try some experimentation with the delays and Re-enveloper, and I can already tell I’m going to be into this Spectral Blur. (Logic Pro’s ChromaVerb goes a similar direction, and I’m stupidly hooked on that, too.)

Creative Extensions: New in Live 10 Suite

If these feel a little pedestrian and vanilla to you – the world certainly does have a lot of traditional virtual analog – you might want to check out the other creations by this developer, Amazing Noises. They have something called Granular Lab on the Max for Live side, plus a bunch of wonderful iOS effects. And you can always use an iPad or iPhone as an outboard effects processor for your Live set, too, taking advantage of the touch-centric controls. (Think Studiomux.)

https://www.ableton.com/en/packs/by/amazing-noises/

https://www.amazingnoises.com/

http://apps.amazingnoises.com/

If you’re a Max for Live user or developer and want to recommend one of your creations, too, please do!

Want some more quick inspiration / need to unstick your creative imagination today? Check out the Sonic Bloom Oblique Strategies. Here’s today’s:

And plenty more where that came from:

http://sonicbloom.net/en/category/oblique-strategies/


zenAud.io intros ALK2 Solo sequenced loops for Mac

zenAud.io has launched a lite edition of its ALK2 sequenced live-looping DAW for Mac. Providing the essential features of ALK2 in a more affordable package, ALK2 Solo is aimed at the gigging solo performer. Out of the box, users can expect audio looping, MIDI looping, Panic Mode, Magnetic Note Repeat, and ALK’s unique and forward-looking […]

The best news for iOS, macOS musicians and artists from WWDC

Apple’s WWDC, while focused on developers, tends to highlight consumer features of its OS – not so much production stuff. But as usual, there are some tidbits for creative Mac and iOS users.

Here’s the stuff that looks like good news, at least in previews. Note that Apple tends to focus on just major new features they want to message, so each OS may reveal more in time.

iOS 12

Performance. On both iPad and iPhone, Apple is promising big performance optimizations. They’ve made it sound like they’re particularly targeting older devices, which should come as welcome news to users finding their iThings feel sluggish with age. (iPhone 5s and iPad Air onwards get the update.)

A lot of this has to do with responsiveness when launching apps or bringing up the keyboard or camera, so it may not directly impact audio apps – most of which do their heavy work at a pretty low level. But it’s nice to see Apple improve the experience for long-term owners, not just show off things that are new. And even as Android devices boast high-end specs on paper, that platform still lags iOS badly when it comes to things like touch response or low-latency audio.

Smoother animation is also a big one.

Augmented reality. Apple has updated their augmented reality tools to ARKit 2. These are the tools that let you map 3D objects and visualizations to a real-world camera feed – it basically lets you hold up a phone or tablet instead of donning goggles, and mix the real-world view with the virtual one.

New for developers: persist your augmented reality between sessions (without devs having to do that themselves), object detection and tracking, and multi-user support. They’ve also unveiled a new format for objects.

I know AR experimentation is already of major interest to digital artists. The readiness of iOS as a platform means they have a canvas for those experiments.

There are also compelling music and creative applications, some still to be explored. Imagine using an augmented reality view to help visualize spatialized audio. Or use a camera to check out how a modular rack or gear will fit in your studio. And there are interesting possibilities in education. (Think a 3D visualization of acoustic waves, for instance.)

Both augmented reality and virtual reality offer some new immersive experiences musicians and artists are sure to exploit. Hey, no more playing Dark Side of the Moon to The Wizard of Oz; now you can deliver an integrated AV experience.

Google’s Android and Apple are neck and neck here, but because Apple delivers updates faster, they can rightfully claim to be the largest platform for the technology. (They also have the devices: the iPhone SE and 6s and up, plus the 5th-generation iPad and the iPad Pro models, all work.) Google’s challenge here, I think, is really adoption.

Apple’s also pretty good at telling the story here:
https://www.apple.com/ios/augmented-reality/

That said, Google has some really compelling 3D audio solutions – more on this landscape soon, on both platforms.

Real Do Not Disturb. This is overdue, but I think a major addition for those of us wanting to focus on music on iOS and not have to deal with notifications.

Siri Shortcuts. This is a bit like the third-party power app Workflow; it allows you to chain activities and apps together. I expect that could be meaningful to advanced iOS users; we’ll just have to see more details. It could mean, for instance, handy record + process audio batches.

Voice Memos on iPad. I know a lot of musicians still use this so – now you’ve got it in both places, with cloud sync.

https://www.apple.com/ios/ios-12-preview/features/

macOS 10.14 Mojave

Dark Mode. Finally. And another chance to keep screens from blaring at us in studios or onstage – though Windows and Linux users have this already, of course.

Improved Finder. This is more graphics oriented than music oriented, of course – but creative users in general will appreciate the complete metadata preview pane.

Also nice: Quick Actions, which also support the seldom-used, ill-documented, but kind of amazing Automator. Automator also has a lot of audio-specific actions with some apps; it’s worth checking out.

There are also lots of nice photo and image markup tools.

Stacks. Iterations of this concept have been around since the 90s, but finally we see it in an official Apple OS release. Stacks organize files on your desktop automatically, so you don’t have a pile of icons everywhere. Apple got us into this mess in the 80s (or was that Xerox in the 70s?) but … finally they’re helping us dig out again.

App Store and the iOS-Mac ecosystem. Apple refreshing their App Store may be a bigger deal than it seems. A number of music developers are seeing big gains on Apple mobile platforms – and they’re trying to leverage that success by bringing apps to desktop Mac, something that the Windows ecosystem really doesn’t provide. It sounds like Intua, creators of BeatMaker, might even have a desktop app in store.

And having a better App Store means that it’s more likely developers will be able to sell their apps – meaning more Mac music apps.

https://www.apple.com/macos/mojave-preview/

That’s about it

There’s of course a lot more to these updates, but more on either the developer side or consumer things less relevant to our audience.

The big question for Apple remains – what is their hardware roadmap? The iPad has no real rivals apart from shifting focus to Windows-native tablets like the Surface, but the Mac has loads of competition for visual and music production.

Generally, I don’t know that either Windows or macOS can deliver a lot for pro users in these kinds of updates. We’re at a mature, iterative phase for desktop OSes. But that’s okay.

Now, what we hope as always is that updates don’t just break our existing stuff. Case in point: Apple moving away from OpenCL and OpenGL.

But even there, as one reader comments, hardware is everything. Apple dropping OpenCL isn’t as big to some developers and artists as the fact that you can’t buy a machine with an NVIDIA card in it.

Well, we’ll be watching. And as usual, anything that may or may not break audio and music tools, we’ll find out only closer to release.


Magix updates Vegas Pro 15 software to build 361

Magix has released an update of Vegas Pro 15, the professional video and audio production software for Windows. Build 361 offers support for AMD GPU acceleration plus a long list of performance and stability improvements. VEGAS Pro has always been an innovator. Version 15 carries on this legacy and delivers a completely customizable interface […]

LABS is a free series of sound tools for everyone, and you’ll want it now

Everyone’s talking these days it seems about new users and finding an entry way into production. But Spitfire’s take is pretty irresistible: give you some essential sounds you can use anywhere, then charge you … nothing.

Spitfire Audio are a little bit like the “recording studio, engineers, and world-class rented orchestra” you … never had. These are exceptionally detailed sample libraries, including collaborations with Hans Zimmer and Ólafur Arnalds.

LABS is something different. Spitfire says they’re planning a series of these bite-sized sample library instruments, integrated as plug-ins for Mac and Windows / all DAWs. Since they are more focused, they’re also smaller – so we’re talking a few hundred megs instead of many gigs of content, meaning you also don’t have to think about plugging in an external drive just to install. A new build of their online tool goes and grabs them for you.

VST/AU/AAX/Mac/Windows … free:

https://www.spitfireaudio.com/labs/

Now, just describing it, that sounds not all that exciting – plenty of sample houses offer freebies to get you hooked. But the two debut LABS entries are something special. There’s an intimate “soft piano” that’s good enough that I temporarily got distracted for half an hour playing it, even on my QWERTY keyboard in Ableton, before I remembered I was supposed to be doing something. It’s beautiful and delicate with loads of sounds of the piano action and … it’s sort of hard not to make some film score with it. (Plugging in a hammer action keyboard was, of course, better.)

Grab those downloads … and more arrive every month. (Here seen alongside their paid libraries.)

There’s also an essential strings ensemble that covers the bread-and-butter articulations you need, exceptionally well recorded with a 40-piece ensemble.

All of this is wrapped into a minimalistic interface, made in collaboration with UsTwo – the folks who did the hit game Monument Valley. Spitfire tells me that something like six to nine months of work between them and UsTwo led to the final design.

Dial in specific settings using the minimal interfaces, designed by the creators of Monument Valley.

LABS has just these settings so far, but they’re already pretty engaging.

This minimalism, from sound selections to interface, almost demands experimentation. You know, there’s a reason so many keyboards have pianos at patch 00 – we’re all often imagining a piano sound or string sound in our heads. It’s just rare you get one you’d want to return to, which is what this is. And I suspect for more adventurous producers and electronic work, the perfectly-recorded, back-to-basics nature of these selections will be perfect for transformation. (So you can weave them into electronic textures, and reverse and chop them up and re-pitch them and so on – all with good source material.)

And, of course, the price is right.

LABS is part of a larger project, with more sounds coming, also for free, monthly. Spitfire promises both more of these basic starting points for new producers or musicians wanting to cover their basic ingredients (like the samples you’ll want on your internal SSD and not just the big external drive), plus a testing bed for experiments in sound design and projects they want feedback on. (It’s not a freemium model, then – it’s more like a free laboratory, rather like what I was discussing yesterday around modules in VCV Rack, but for soundware. I wonder if we’ll see this elsewhere.)

There’s also a content plan around the sounds, a notebook of projects and ideas to go with the LABS sound downloads.

It’s also nice to see soundware companies pushing to increase the value of live musicians and composers/sound designers, rather than engage in a race to the bottom. I’ve heard some real concerns around the industry about the subscription model for sounds, and whether it will do to sound designers and recording artists what Spotify and iTunes Music streaming have done to record labels – but it seems the players in this industry really are committed to finding models that do something different. (Getting into this is obviously a matter for another day.)

Here’s their statement, whether you buy into that or not:

It remains Spitfire’s ethos to use live performances where possible, but when up against time and budget, Spitfire is the next best thing. By paying the players and collaborators royalties, Spitfire Audio helps sustain an incredible part of our musical heritage at the same time as championing innovation within it.

The plug-ins are free forever, not for a limited time. We’ll be watching to see what’s next.

Download:

https://www.spitfireaudio.com/labs/
