Want something vanilla, like a MIDI controller with a classic mixer layout or a bunch of pots? Or want something crazy – like a psycho-bright light-up show controller? Faderfox has you covered either way.
The one-person German boutique controller company Faderfox has been making clever controllers since some of the earliest days of hardware control for software, and they keep getting better. Mathias – he really is just one guy – wrote in with the latest: he’s now shipping the first controllers in his “MODULE” line. The idea here is, you get to mix and match some simple options to build up a virtual mixing surface, for your hardware or software.
These are pre-configured to work with Ableton Live and Elektron’s boxes, and the form factor even matches the Elektron so you can arrange or rack them together neatly.
You can use the new MODULE line with any MIDI-enabled hardware or software, but fans of Elektron will notice something about the dimensions.
On the MX12, you get twelve fader strips – 12 faders, 24 pots, and 24 buttons. Those still send whatever you want, so you can control whatever hardware or software tool you wish (via control change, program change, all the goodies), either by manually creating templates or using MIDI learn to automatically assign them.
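To make that mapping concrete: under the hood, each of those controls just emits a short MIDI message. A control change, for instance, is three bytes – a status byte (0xB0 plus the channel), a controller number, and a value. Here’s a minimal sketch in Python (the function name and the example mapping are mine, purely illustrative):

```python
def control_change(channel: int, controller: int, value: int) -> bytes:
    """Build a 3-byte MIDI Control Change message.

    channel is 0-15; controller and value are 0-127.
    """
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("MIDI CC fields out of range")
    return bytes([0xB0 | channel, controller, value])

# e.g. a fader mapped to CC 7 (channel volume) on channel 1, pushed to the top
msg = control_change(channel=0, controller=7, value=127)
print(msg.hex())  # b0077f
```

Templates and MIDI learn are just two different ways of deciding which controller number each physical fader or pot ends up sending.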
On the PC12, you get just pots – 72 of them. You could put those two together, or use them individually, or build a monster system by chaining these together.
You might get away with the generic setup and some adjustment in your software – and if not, each unit stores up to 30 custom setups.
And each comes in an aluminum case with two MIDI ins and two MIDI outs, plus USB. An extension port lets you connect to other gear.
Oh, and this is cute and useful – these come with a dry erase marker and empty overlay, so you can mark up your controller and know what everything is. There’s even a matching stand. Different colored fader caps let you add additional visual feedback.
399EUR (before VAT) for each.
Okay, so that’s the practical – now let’s get to the impractical (but fun). Mathias has done various one-off custom controller builds, but the GT1 is the craziest, biggest yet – a light-up, 144 RGB LED show controller.
So, for anyone complaining about laptop performance behind a blue glow, uh… take this.
In addition to sending a cease-and-desist letter to a popular Chinese music gear site, Behringer are now taking rival manufacturer Dave Smith Instruments – and unnamed users of a popular forum – to court.
Last week, CDM reported that Behringer’s global entity, MUSIC Tribe, had sent a cease and desist letter to Chinese news site Midifan, threatening that a criminal defamation lawsuit would be the next step. However, as of this writing, no lawsuit has been served.
CDM was tipped off today that court filings are available showing MUSIC GROUP (in the USA) have proceeded with legal action against Dave Smith Instruments and various defendants for libel per se, libel per quod, and product disparagement, in the state of California, seeking damages in excess of US$250,000. The filings are dated 9th of June 2017.
The twist here is that in addition to Dave Smith Instruments, the manufacturer, and employee Anthony Karavidas (an engineer at DSI), the lawsuit seeks damages from an additional twenty individuals posting in the same forum thread. Since the identity of those individuals is unknown, they’re named as “DOES 1-20.” In the words of the lawsuit, “the true names and capacities, whether individual, corporate, associate or otherwise … are unknown to Plaintiff.”
In other words, it’s possible someone reading this article just got sued in California but doesn’t know it yet. Uh… hi there, happy Tuesday.
Behringer name Dave Smith’s Prophet Rev2 as a competitor to the Behringer Deepmind 12 in the suit.
Court filings are available as public record of the San Francisco County Superior Court (that’s the state trial court of the county of San Francisco). Expect a large pile of legal filings from the two companies and their lawyers; those are located here:
(All documents related to the proceeding are located under case CGC17559458.)
The lawsuit is directed exclusively at commentary published on the Gearslutz forums.
But to review: a selection of comments by a single engineer and twenty unnamed individuals has been turned into a quarter-million dollar-plus defamation claim against a manufacturer, an individual, and pseudonymous forum posters. That thread is still up – it had reached 153 pages before a Gearslutz moderator closed the discussion on the 4th of July, 2017. One sample:
(Whereas some threads were initiated by forum user Uli Behringer himself, this one came from a third party, before it ballooned.)
Dave Smith Instruments declined to comment for this story.
What the lawsuit says
According to evidence presented in the lawsuit, Tony, appearing as Tonykara, wrote a series of messages in a thread in early 2017 on Gearslutz forums, and later identified himself as an engineer working for DSI when a user asked him who he was. In the same thread, DOES 1-20 [users identified only by handle] chime in with other sentiments tilted against Behringer. (This thread itself was not entirely one-sided – even in the court evidence provided, you’ll read other forum posters criticizing Dave Smith Instruments and Tonykara.)
These observations range from general complaints about Behringer products copying other products or characterizing business practices as “underhanded,” to specific allegations – particularly, a post by Karavidas that claims the Behringer CT100 cable tester is a “blatant copy” of a product by Ebtech.
Some of these complaints may indeed be factually questionable or genuinely inaccurate. Other claims, however, would be harder to disprove. For instance, the lawsuit highlights a comment by Mike Hiegemann (aka Paul Dither) who says “it’s not a secret that Behringer has ripped off products in the past and is planning to do so in the future.” The lawsuit characterizes that as “false, defamatory, and libelous.”
It would be hard to prove or disprove what Behringer will do in the future (obviously), but note that past lawsuits by Roland and Mackie in fact claimed some past Behringer-branded products were deliberate copies. Whether or not those makers won those lawsuits, it means that they did produce a significant amount of material evidence as a matter of public record.
Or to put it another way: if you go out and say CDM is a “crap site,” I really can’t do anything. Even if you say “CDM is a biased site that only does what its advertisers want,” ditto. I might disagree, but could I take you to court for libel? If you say “CDM is a crap site that’s just a bunch of archaic open source tools mixed with advertiser news made for aging music hipsters,” I … actually, okay I think I’m just projecting now. You get the point.
So, the next questions to answer appear to be, how truthful or untruthful were these statements? Can they be held as libelous? What damages would the authors owe MUSIC Group, if so? Is Dave Smith Instruments legally responsible for what one of its employees posted on a forum?
And I suspect most of interest to readers of this site, can Behringer unmask a series of people posting under pseudonyms and hold them responsible, as well?
There are three charges made in the lawsuit:
Libel per quod. Paraphrasing: claims about Behringer’s business practice and alleged history of copying other products are false and have hurt the company’s reputation. This category requires demonstrating specific legal damages in court.
Libel per se. This is a related set of claims, but because US law forbids falsely attacking someone else’s business or profession, it might not require demonstrating damages. [Very big disclaimer: I’m not a lawyer. If I were a lawyer, I would probably advise you that you shouldn’t take this description as legal advice. But you can get this literally from what “per quod” and “per se” mean.]
Product disparagement. Here, because potential customers read these statements, and they refer to the Behringer brand and products, there’s a specific claim of damages to the brand and the products, beyond harm to the company’s reputation alone.
If you can find your way through the court documents, you’ll find exhibits reproducing the complete forum thread, plus a cease and desist letter sent on the 7th of March 2017 – and an agreement by Tony Karavidas to comply with the letter.
There are a couple of things here that are unclear to me, which I will try to investigate.
One, reading through the lawsuit, I’m unclear as to the degree to which Karavidas may have violated the terms of the cease and desist. It appears that some message posts – as he attempted to continue to explain and/or complain about the situation – post-date an agreement to cease disparaging Behringer. It may be that failing to adequately respond to the cease and desist triggered the legal action, instead of defusing the issue.
Two, it’s unclear what will happen to other, pseudonymous posters to Gearslutz. The lawsuit says these “Does” 1-20 will be amended to the lawsuit once their identities are known. That may mean attempting to obligate the forum to reveal those identities. (Historical footnote: when Apple attempted to unmask sources and authors of stories on its leaked “Asteroid” audio interface over a decade ago, courts ruled it couldn’t, in a case called Apple versus Does. This is a different set of circumstances, but it gives some clue to how courts handle unidentified users in legal cases.)
Watching this case, however, may prove interesting. The law is intended to prevent damage to a profession – whether you’re one person or a big manufacturer – based on untrue claims. But this means two things, if the courts work correctly. On one hand, if false claims were made about Behringer, that will presumably come out. On the other hand, if Behringer are simply gagging criticism, and if industry complaints that their products unfairly copy intellectual property are justified, theoretically, that should come out, too.
And, of course, it’s possible for both of those scenarios to be true at once, depending on how this shakes out.
But for anyone who believed that defamation was some peculiarity of Chinese law last week, in fact US law and many international laws do hold individuals and publishers (like this one) legally responsible for damages if we make claims that are false. And yes, suffice to say, that could put a publisher out of business, on legal fees alone. That’s not a commentary on this case – that’s the reality of tort laws worldwide. And those laws exist to balance protections on free speech with the impact that speech can have on others.
Behringer had not yet responded to CDM’s request for comment as I published this.
Behringer and China
Late last week, I shared news that Chinese news portal Midifan had received a cease and desist letter from Behringer, via Music Tribe.
Midifan emphasized that the letter complained about products “copying” existing products, and in fact the letter from Music Tribe singled out coverage of Superbooth introductions of products with appearance, names, and structures based on the Sequential [Dave Smith] Pro One, Roland VP-330, SH-101, TR-808, and vintage modules, plus the ARP Odyssey. (Note that KORG had licensed the Odyssey and collaborated with its original creators; Behringer did not.)
Midifan and Music Tribe also clashed over reports by Midifan of a worker strike at Behringer’s MUSIC Tribe City manufacturing facility in Zhongshan, China.
Behringer has declined to comment publicly on CDM’s story. I did reach out to Uli Behringer directly over the weekend, and had a conversation, but got no further public comment.
Uli Behringer did post a statement to the MUSIC Tribe Academy Facebook group, which CDM shared via our own channels.
Far from the liberated playground the Internet once promised, online connectivity now threatens to give us mainly pre-programmed culture. As we continue reflections on AI from CTM Festival in Berlin, here’s an essay from this year’s program.
If you attended Berlin’s festival this year, you got this essay I wrote – along with a lot of compelling writing from other thinkers – in a printed book in the catalog. I asked for permission from CTM Festival to reprint it here for those who didn’t get to join us earlier this year. I’m going to actually resist the temptation to edit it (apart from bringing it back to CDM-style American English spellings), even though a lot has happened in this field even since I wrote it at the end of December. But I’m curious to get your thoughts.
The complete set of talks from CTM 2018 are now available on SoundCloud. It’s a pleasure to get to work with a festival that not only has a rich and challenging program of music and art, but serves as a platform for ideas, debate, and discourse, too. (Speaking of which, greetings from another European festival that commits to that – SONAR, in Barcelona.)
The image used for this article is an artwork by Memo Akten, used with permission, as suggested by curator and CTM 2018 guest speaker Estela Oliva. It’s called “Inception,” and I think is a perfect example of how artists can make these technologies expressive and transcendent, amplifying their flaws into something uniquely human.
Minds, Machines, and Centralisation: Why Musicians Need to Hack AI Now
IN THIS ARTICLE, CTM HACKLAB DIRECTOR PETER KIRN PROVIDES A BRIEF HISTORY OF THE CO-OPTING OF MUSIC AND LISTENING BY CENTRALIZED INDUSTRY AND CORPORATIONS, IDENTIFYING MUZAK AS A PRECURSOR TO THE USE OF ARTIFICIAL INTELLIGENCE FOR “PRE-PROGRAMMED CULTURE.” HE GOES ON TO DISCUSS PRODUCTIVE WAYS FOR THOSE WHO VALUE “CHOICE AND SURPRISE” TO REACT TO AND INTERACT WITH TECHNOLOGIES LIKE THESE THAT GROW MORE INESCAPABLE BY THE DAY.
It’s now a defunct entity, but “Muzak,” the company that provided background music, was once everywhere. Its management saw to it that their sonic product was ubiquitous, intrusive, and even engineered to impact behavior — and so the word Muzak became synonymous with all that was hated and insipid in manufactured culture.
Anachronistic as it may seem now, Muzak was a sign of how telecommunications technology would shape cultural consumption. Muzak may be known for its sound, but its delivery method is telling. Nearly a hundred years before Spotify, founder Major General George Owen Squier originated the idea of sending music over wires — phone wires, to be fair, but still not far off from where we’re at today. The patent he got for electrical signaling doesn’t mention music, or indeed even sound content. But the Major General was the first successful business founder to prove in practice that electronic distribution of music was the future, one that would take power out of the hands of radio broadcasters and give the delivery company additional power over content. (He also came up with the now-loathed Muzak brand name.)
What we now know as the conventional music industry has its roots in pianola rolls, then in jukeboxes, and finally in radio stations and physical media. Muzak was something different, as it sidestepped the whole structure: playlists were selected by an unseen, centralized corporation, then piped everywhere. You’d hear Muzak in your elevator ride in a department store (hence the phrase, elevator music). There were speakers tucked into potted plants. The White House and NASA at some points subscribed. Anywhere there was silence, it might be replaced with pre-programmed music.
Muzak added to its notoriety by marketing the notion of using its product to boost worker productivity, through a pseudo-scientific regimen it called the “stimulus progression.” And in that, we see a notion that presages today’s app behavior loops and motivators, meant to drive consumption and engagement, ad clicks and app swipes.
Muzak for its part didn’t last forever, with stimulus progression long since debunked, customers preferring licensed music to this mix of original sounds, and newer competitors getting further ahead in the marketplace.
But what about the idea of homogenized, pre-programmed culture delivered by wire, designed for behavior modification? That basic concept seems to be making a comeback.
Automation and Power
“AI” or machine intelligence has been tilted in the present moment to focus on one specific area: the use of self-training algorithms to process large amounts of data. This is a necessity of our times, and it has special value to some of the big technical players who just happen to have competencies in the areas machine learning prefers — lots of servers, top mathematical analysts, and big data sets.
That shift in scale is more or less inescapable, though, in its impact. Radio implies limited channels; limited channels imply human selectors — meet the DJ. The nature of the internet as wide open for any kind of culture means wide-open scale. And it will necessarily involve machines doing some of the sifting, because it’s simply too large to operate otherwise.
There’s danger inherent in this shift. One, users may be lazy, willing to let their preferences be tipped for them rather than face the tyranny of choice alone. Two, the entities that select for them may have agendas of their own. Taken as an aggregate, the upshot could be greater normalization and homogenization, plus the marginalization of anyone whose expression is different, unviable commercially, or out of sync with the classes of people with money and influence. If the dream of the internet as global music community seems in practice to lack real diversity, here’s a clue as to why.
At the same time, this should all sound familiar — the advent of recording and broadcast media brought with it some of the same forces, and that led to the worst bubblegum pop and the most egregious cultural appropriation. Now, we have algorithms and corporate channel editors instead of charts and label execs — and the worries about payola and the eradication of anything radical or different are just as well-placed.
What’s new is that there’s now also a real-time feedback loop between user actions and automated cultural selection (or perhaps even soon, production). Squier’s stimulus progression couldn’t monitor metrics representing the listener. Today’s online tools can. That could blow apart past biases, or it could reinforce them — or it could do a combination of the two.
In any case, it definitely has power. At last year’s CTM hacklab, Cambridge University’s Jason Rentfrow looked at how music tastes could be predictive of personality and even political thought. The connection was timely, as the talk came the same week that Trump assumed the U.S. presidency, his campaign having employed social media analytics to determine how to target and influence voters.
We can no longer separate musical consumption — or other consumption of information and culture — from the data it generates, or from the way that data can be used. We need to be wary of centralized monopolies on that data and its application, and we need to be aware of how these sorts of algorithms reshape choice and remake media. And we might well look for chances to regain our own personal control.
Even if passive consumption may seem to be valuable to corporate players, those players may discover that passivity suffers diminishing returns. Activities like shopping on Amazon, finding dates on Tinder, watching television on Netflix, and, increasingly, music listening, are all experiences that push algorithmic recommendations. But if users begin to follow only those automated recommendations, the suggestions fold back in on themselves, and those tools lose their value. We’re left with a colorless growing detritus of our own histories and the larger world’s. (Just ask someone who gave up on those Tinder dates or went to friends because they couldn’t work out the next TV show to binge-watch.)
There’s also clearly a social value to human recommendations — expert and friend alike. But there’s a third way: use machines to augment humans, rather than diminish them, and open the tools to creative use, not only automation.
Music is already reaping benefits of data training’s power in new contexts. By applying machine learning to identifying human gestures, Rebecca Fiebrink has found a new way to make gestural interfaces for music smarter and more accessible. Audio software companies are now using machine learning as a new approach to manipulating sound material in cases where traditional DSP tools are limited. What’s significant about this work is that it makes these tools meaningful in active creation rather than passive consumption.
AI, back in user hands
Machine learning techniques will continue to expand as tools by which the companies mining big data make sense of their resources — from ore into product. It’s in turn how they’ll see us, and how we’ll see ourselves.
We can’t simply opt out, because those tools will shape the world around us with or without our personal participation, and because the breadth of available data demands their use. What we can do is to better understand how they work and reassert our own agency.
When people are literate in what these technologies are and how they work, they can make more informed decisions in their own lives and in the larger society. They can also use and abuse these tools themselves, without relying on magical corporate products to do it for them.
Abuse itself has special value. Music and art are fields in which these machine techniques can and do bring new discoveries. There’s a reason Google has invested in these areas — because artists very often can speculate on possibilities and find creative potential. Artists lead.
The public seems to respond to rough edges and flaws, too. In the 60s, when researcher Joseph Weizenbaum attempted to parody a psychotherapist with crude language pattern matching in his program, ELIZA, he was surprised when users started to tell the program their darkest secrets and imagine understanding that wasn’t there. The crudeness of Markov chains as a predictive text tool — they were developed for analyzing the letter statistics of Pushkin’s verse, not for generating language, after all — has given rise to breeds of poetry based on their very weirdness. When Google’s style transfer technique was applied using a database of dog images, the bizarre, unnatural images that warped photos into dogs went viral online. Since then, Google has developed vastly more sophisticated techniques that apply realistic painterly effects and… well, it seems those have attracted only a fraction of the interest that the dog images did.
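To see just how crude a Markov chain text generator is – it only ever looks at the current word when choosing the next one – here’s a minimal word-level sketch (my own toy example, seeded so it’s repeatable):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from `start`, picking each next word at random."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and chain.get(out[-1]):
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

chain = build_chain("the dog saw the cat and the cat saw the dog")
print(generate(chain, "the", 8))
```

The weirdness comes precisely from that one-word memory: any structure beyond adjacent word pairs simply isn’t modeled.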
Maybe there’s something even more fundamental at work. Corporate culture dictates predictability and centralized value. The artist does just the opposite, capitalizing on surprise. It’s in the interest of artists if these technologies can be broken. Muzak represents what happens to aesthetics when centralized control and corporate values win out — but it’s as much the widespread public hatred that’s the major cautionary tale. The values of surprise and choice win out, not just as abstract concepts but also as real personal preferences.
We once feared that robotics would eliminate jobs; the very word was coined (by Czech writer Karel Čapek’s brother Josef) from robota, a word for forced labor. Yet in the end, robotic technology has extended human capability. It has brought us as far as space and taken us through Logo and its Turtle, even taught generations of kids math, geometry, logic, and creative thinking through code.
We seem to be at a similar fork in the road with machine learning. These tools can serve the interests of corporate control and passive consumption, optimized only for lazy consumption that extracts value from its human users. Or, we can abuse and misuse the tools, take them apart and put them back together again, apply them not in the sense that “everything looks like a nail” when all you have is a hammer, but as a precise set of techniques to solve specific problems. Muzak, in its final days, was nothing more than a pipe dream. What people wanted was music — and choice. Those choices won’t come automatically. We may well have to hack them.
PETER KIRN is an audiovisual artist, composer/musician, technologist, and journalist. He is the editor of CDM and co-creator of the open source MeeBlip hardware synthesizer (meeblip.com). For six consecutive years, he has directed the MusicMaker’s Hacklab at CTM Festival, most recently together with new media artist Ioann Maria.
It’s full of gun sounds. But thanks to a combination of a unique sample architecture and engine and a whole lot of original assets, the Weaponiser plug-in becomes a weapon of a different kind. It helps you make drum sounds.
Call me a devoted pacifist, call me a wimp – really, either way. Guns actually make me uncomfortable, at least in real life. Of course, we have an entirely separate industry of violent fantasy. And to a sound designer for games or soundtracks, Weaponiser’s benefits should be obvious and dazzling.
But I wanted to take a different angle, and imagine this plug-in as a sort of swords into plowshares project. And it’s not a stretch of the imagination. What better way to create impacts and transients than … well, fire off a whole bunch of artillery at stuff and record the result? With that in mind, I delved deep into Weaponiser. And as a sound instrument, it’s something special.
Like all advanced sound libraries these days, Weaponiser is both an enormous library of sounds, and a powerful bespoke sound engine in which those sounds reside. The Edinburgh-based developers undertook an enormous engineering effort here both to capture field recordings and to build their own engine.
It’s not even all about weapons here, despite the name. There are sound elements unrelated to weapons – there’s even an electronic drum kit. And the underlying architecture combines synthesis components and a multi-effects engine, so it’s not limited to playing back the weapon sounds.
What pulls Weaponiser together, then, is an approach to weapon sounds as a modularized set of components. The top set of tabs is divided into ONSET, BODY, THUMP, and TAIL – which turns out to be a compelling way to conceptualize hard-hitting percussion, generally. We often use vaguely gunshot-related metaphors when talking about percussive sounds, but here, literally, that opens up some possibilities. You “fire” a drum sound, or choose “burst” mode (think automatic and semi-automatic weapons) with an adjustable rate.
This sample-based section is then routed into a mixer with multi-effects capabilities.
In music production, we’ve grown accustomed to repetitive samples – a Roland TR clap or rimshot that sounds the same every single time. In foley or game sound design, of course, that’s generally a no-no; our ears quickly detect that something is amiss, since real-world sound never repeats that way. So the Krotos engine is replete with variability, multi-sampling, and synthesis. Applied to musical applications, those same characteristics produce a more organic, natural sound, even if the subject has become entirely artificial.
Let’s have a look at those components in turn.
Gun sounds. This is still, of course, the main attraction. Krotos have field recordings of a range of weapons:
For those of you who don’t know gun details, that amounts to pistol, rifle, automatic, semiautomatic, and submachine gun (SMG). These are divided up into samples by the onset/body/thump/tail architecture I’ve already described, plus there are lots of details based on shooting scenario. There are bursts and single fires, sniper shots from a distance, and the like. But maybe most interesting actually are all the sounds around guns – cocking and reloading vintage mechanical weapons, or the sound of bullets impacting bricks or concrete. (Bricks sound different than concrete, in fact.) There are bullets whizzing by.
And that’s just the real weapons. There’s an entire bank devoted to science fiction weapons, and these are entirely speculative. (Try shooting someone with a laser; it … doesn’t really work the way it does in the movies and TV.) Those presets get interesting, too, because they’re rooted in reality. There’s a Beretta fired interdimensionally, for example, and the laser shotguns, while they defy present physics and engineering, still have reloading variants.
In short, these Scottish sound designers spent a lot of time at the shooting range, and then a whole lot more time chained to their desk working with the sampler.
Things that aren’t gun sounds. I didn’t expect to find so many sounds in the non-gun variety, however. There are twenty dedicated kits, which tend toward a sort of IDM/electro crossover, just building drum sounds on this engine. There are a couple of gems in there, too – enough so that I could imagine Krotos following up this package with a selection of drum production tools built on the Weaponiser engine but having nothing to do with bullets or artillery.
Until that happens, you can think of that as a teaser for what the engine can do if you spend time building your own presets. And to that end, you have some other tools:
Variations for each parameter randomize settings to avoid repetition.
Four engines, each polyphonic with their own sets of samples, combine. But the same things that allow you different triggering/burst modes for guns prove useful for percussion. And yes, there’s a “drunk” mode.
A deep multi-effects section with mixing and routing serves up still more options.
Four engines, synthesis. Onset, Body, Thump, and Tail each have associated synthesis engines. Onset and Body are specialized FM synthesizers. Thump is essentially a bass synth. Tail is a convolution reverb – but even that is a bit deeper than it may sound. Tail provides both audio playback and spatialization controls. It might use a recorded tail, or it might trigger an impulse response.
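For anyone unfamiliar with the carrier/modulator idea those FM engines build on – this is not Krotos’s code, just the textbook two-operator relationship – the modulator bends the carrier’s phase, and the modulation index sets how much:

```python
import math

def fm_samples(carrier_hz, modulator_hz, index, sr=44100, n=256):
    """Two-operator FM: a sine at modulator_hz modulates the carrier's phase.

    A larger modulation index adds more sidebands, i.e. a brighter,
    more complex timbre; index=0 collapses to a plain sine wave.
    """
    out = []
    for i in range(n):
        t = i / sr
        mod = math.sin(2 * math.pi * modulator_hz * t)
        out.append(math.sin(2 * math.pi * carrier_hz * t + index * mod))
    return out

plain = fm_samples(440, 110, index=0)   # pure 440 Hz sine
bright = fm_samples(440, 110, index=5)  # same pitch, richer spectrum
```

Amplitude modulation, the other mode mentioned, would multiply the carrier by the modulator instead of adding to its phase.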
Also, the way samples are played here is polyphonic. Add more samples to a particular engine, and you will trigger different variants, not simply keep re-triggering the same sounds over and over again. That’s the norm for more advanced percussion samplers, but lately electronic drum engines have tended to dumb that down. And – there’s a built-in timeline with adjustable micro-timings, which is something I’ve never seen in a percussion synth/sampler.
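Round-robin variant playback – the “trigger different variants” behavior described above – is simple to sketch. This is my illustration of the general technique, not Krotos’s implementation:

```python
import itertools

class RoundRobinVoice:
    """Cycle through sample variants instead of re-firing one sample."""
    def __init__(self, samples):
        self._cycle = itertools.cycle(samples)

    def trigger(self):
        # each hit returns the next variant, wrapping around at the end
        return next(self._cycle)

snare = RoundRobinVoice(["snare_01.wav", "snare_02.wav", "snare_03.wav"])
hits = [snare.trigger() for _ in range(5)]
print(hits)  # wraps around: 01, 02, 03, 01, 02
```

A “drunk” mode, by contrast, would swap the strict cycle for a random or random-walk choice over the same pool of variants.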
The synth bits have their own parameters, as well, and FM and Amplitude Modulation modes. You can customize carriers and modulators. And you can dive into sample settings, including making radical changes to start and end points, envelope, and speed.
Effects and mixing. Those four polyphonic engines are mixed together in a four-part mix engine, with multi-effects that can be routed in various ways. Then you can apply EQ, Compression, Limiting, Saturation, Ring Modulation, Flanging, Transient Shaping, and Noise Gating.
Oh, you can also use this entire effects engine to process sounds from your DAW, making this a multi-effects engine as well as an instrument.
Is your head spinning yet?
About the sounds
Depending on which edition you grab, from the limited selection of the free 10-day demo up to the “fully loaded” edition, you’ll get as many as 2228 assets, with 1596 edited weapon recordings. There are also 692 “sweeteners” – a grab bag of still more sounds, from synths to a black leopard (the furry feline, really), and the sound recordists messing around with their recording rig, keys, Earth, a bicycle belt… you get the idea. There are also various impulse responses for the convolution reverb engine, allowing you to place your sound in different rooms, stairwells, and synthetic reverbs.
The recording chain itself is worth a look. There are the expected mid/side and stereo recordings, classic Neumann and Sennheiser mics, and heavy use of mics from the Danish maker DPA – including mics positioned directly on the guns in some recordings. But they’ve also included recordings made with the Sennheiser Ambeo VR Mic for 360-degree, virtual reality sound.
They’ve shared some behind-the-scenes shots with CDM, and there’s a short video explaining the process.
In use, for music
Some of the presets are realistic enough that working with these sounds in a music project really did make me uncomfortable at first – but that was sort of my aim. What I found compelling is that, thanks to the synth engine, I was quickly able to transform those sounds into new, organic, even unrecognizable variations.
There are a number of strategies here that make this really interesting.
You can mess with samples. Adjusting speed and other parameters, as with any samples, of course gives you organic, complex new sounds.
There’s the synthesis engine. Working with the synth options either to reprocess the sounds or on their own allows you to treat Weaponiser basically as a drum synth.
The variations make this sound like acoustic percussion. With subtle or major variations, you can produce sound that’s less repetitive than electronic drums would be.
Mix and match. And, of course, you have presets to warp and combine, the ability to meld synthetic sounds and gun sounds, to sweeten conventional percussion with those additions (synths and guns and leopard sounds)… the mind reels.
Routing, of course, is vital, too; here’s their look at that:
In fact, there’s so much here that I could almost go off on a separate tangent just working with this musically. I may yet do that, but here’s a teaser of what’s possible – starting with the obvious:
But I’m still getting lost in the potential here, reversing sounds, trying the drum kits, working with the synth and effects engines.
The plug-in can get heavy on CPU with all of that going on, obviously, but it’s also possible to render out layers or whole sounds, useful both in production and foley/sound design. Really, my main complaint is the tiny, complex UI, which can mean it takes some time to get the hang of working with everything. But as a sound tool, it’s pretty extraordinary. And you don’t need to have firing shotguns in all your productions – you can add some subtle sweetening, or additional layers and punch to percussion without anyone knowing they’re hearing the Krotos team messing with bike chains and bullets hitting bricks and an imaginary space laser.
Weaponiser runs on Mac or PC, 64-bit only, in VST, AU, and AAX formats. You’ll need about five and a half gigs of space free. Basic, which is already pretty vast, runs $399 / £259 / €337. Fully Loaded is over twice that size, and costs $599 / £379 / €494.
Midifan, a top music portal and online magazine in China, has received notice from Behringer, threatening legal action over stories by Midifan that called Behringer a “copycat.”
Midifan is a Chinese-language site, but evidently a significant one for that market. Nan Tang, CEO and founder of the site, is also co-founder of 2nd Sense Audio, the software developer behind the WIGGLE synth and ReSample software. Nan, also known as musiXboy, contacted CDM with the news.
Nan has provided CDM with Midifan’s own English translation of the legal letter, as well as a statement in English. Translation is an important factor here, given we’re talking about libel; Midifan’s own English renderings of the disputed words are “shameless” and “copycat.”
Here’s the statement from Midifan:
Behringer sued Chinese media Midifan for saying them COPYCAT and shameless
Chinese portal website Midifan has received a lawyer’s letter from Behringer last week. Behringer claimed the fact that Midifan repeatedly reporting news about Behringer without any factual basis and using insulting words such as “copycat”, “shameless” has caused the reputation of the four clients (Uli Behringer, MUSIC Tribe Global Brands Ltd, Zhongshan Behringer Electronic Co., Ltd and Zhongshan Ouke Electronic Co., Ltd) to be seriously damaged.
The law firm worked for Behringer also claimed that they have reported to its local public security agency and plans to pursue legal responsibilities through criminal way.
A manufacturer taking legal action against music press for being critical or even calling it names is as far as I know fairly unprecedented. I’d almost call it shamel– actually, let’s just stick with “unprecedented.”
But it appears the letter is threatening criminal libel proceedings in China, not just a civil claim. Criminal libel can carry more serious consequences; as reported in 2013 by The Guardian and Bloomberg, criminal defamation in China can carry up to a three-year prison sentence.
Ceci n’est pas un imitateur. Behringer showed … uh… tributes to the Roland SH-101, Roland VC-330, Roland TR-808, ARP Odyssey, and Sequential Pro-One in Berlin last month.
That said, in China as internationally, the law states that something is only libelous if it’s untrue. The “copycat” reference refers to Behringer gear shown at Superbooth, for instance, that literally was designed to look and sound like classic instruments (Roland TR-808, Sequential Circuits Pro-One, etc.). “Shameless” is a matter of opinion. Arguably, too, sending cease-and-desist letters to media outlets because they called you shameless and a copycat is not a great way to demonstrate that you possess shame.
What might make Midifan different from other English-language sites that used similar language? It may be relevant that at the end of last year, Midifan reported on striking workers in a manufacturing facility Behringer owns, where workers complained about health issues. (That article has a number of photos, as well as an English-language response from Behringer managers instructing workers to keep windows closed.)
For their part, Midifan have posted a response on their site (no English translation available):
Midifan tell CDM that they have removed all references to the words “copycat” and “shameless” and replaced them with “more neutral words like ‘TRIBUTE’ and ‘CLONE.’”
Here’s the full letter from Behringer as translated by Midifan into English.
In Relation to Urge You to Stop the Willful Infringement Behavior
Dear Sir or Madam,
Upon the entrustment of Zhongshan Behringer Electronic Co., Ltd (hereinafter referred to as Behringer Corporation), Zhongshan Ouke Electronic Co., Ltd (hereinafter referred to as Ouke Corporation), Uli Behringer and MUSIC Tribe Global Brands Ltd, Guangdong Baoxin Law Firm sends you the lawyer’s letter to your company on matters that urging you to stop the willful infringement behavior.
In accordance with the information and statements from four aforementioned clients, MUSIC Tribe Global Brands Ltd is the registered holder of the trademark “BEHRINGER”. On the basis of the authorization of MUSIC Tribe Global Brands Ltd, Ouke Corporation has the right to use the “BEHRINGER” trademark to engage in production and business activities within the scope of relevant authorizations. Behringer Corporation，whose English name also includes the word “Behringer”, is an affiliate enterprise of MUSIC Tribe Global Brands Ltd and Ouke Corporation.
Since 2017, you have continuously published articles such as “Exclusive breaking: Behringer’s recent crazy copycat stems from a trap of imitation chip more than a decade ago.” “, Can’t stop copycat: Behringer will make the Eurorack module next?” , “Shameless: Behinger exhibited copycat of TR-808, SH-101, Pro-One and Odyssey” on the website “https://www.midifan.com/”
Tencent WeChat public account “Midifan” without any factual basis, claiming that the above four principals have plagiarized the products of other companies. Beyond that, the fact that you repeatedly used insulting words such as “shameless”, “copycat” has caused the reputation of my four clients to be seriously damaged.
In view of this, Ouke Corporaiton has reported to its local public security agency and plans to pursue your legal responsibilities through criminal way. Meanwhile, the four principals entrusted me with this letter expressly:
Please be sure to remove all the insulting infringement articles four principals involved and other related information posted on the internet platform such as “https://www.midifan.com/” and Tencent WeChat public account “Midifan” , etc. within seven days of receipt of this letter, and issue a clarification announcement within the above-mentioned period to eliminate all adverse effects caused by the negative reputation of the four principals due to your inappropriate comments.
If you fail to perform the above obligations within the time limit, the four principals will continue pursuing your legal liabilities (including but not limited to
the criminal responsibility for defamation) through legal ways. All consequences and expenses resulting from this shall be borne by you.
In order to avoid inconvenience, please weigh the pros and cons and perform the above obligations in a timely manner!
CDM has reached out to Music Tribe / Behringer for comment via their public contact form, but has not yet received a response. Curiously, I found many of my colleagues don’t have direct, current media contacts with the company.
Oh, also – it seems Germany has criminal libel laws, too. So, naturally, let me then reiterate – what I saw at Superbooth were … meticulous recreations of famous electronic instruments of yore by a …. manufacturer of equipment that is … Behringer.
Now, please, I don’t want to go to German jail. Aber wenn ich ins Gefängnis gehe, wird sich mein Deutsch verbessern. (But if I do go to prison, my German will improve.)
The future of soundware is clearly on-demand, crafted sounds from the cloud. Output adds a twist: don’t just give people new sounds – give them a way to play those sounds and make them their own.
So, the latest product from the LA-based sound boutique Output is called “Arcade” – so play, get it?
And it’s an early entry and fresh take on an area that’s set to heat up fast. To get things rolling here, your first 100 days are completely free; then you pay a monthly subscription rate of $10 a month (with cancellation whenever you want, and you don’t even lose access to your sounds).
As the number of producers grows – and the diversity of the music they want to make grows, too, as genres and niches spill over and transform at Internet speed – delivering sound and inspiration to music makers looks like a major opportunity. We’re seeing subscription-based models (Native Instruments’ Sounds.com, Splice) and à la carte models (Loopmasters). And we’re seeing different ideas about how to organize releases (around genre, producer, sound house, or more curated selections around a theme), plus how to integrate tools for users.
Here’s where Arcade is interesting. It’s really a single, integrated instrument. And its goal is to find you exactly the sound you need, right away, easily — but also to give you the ability to completely transform that sound and make it your own, even loading your own found samples.
That’s important, because it bridges the divide between loops as a way of employing someone else’s content, and sound sampling as a DIY, personal affair, with a spectrum in between.
I suspect a lot of you reading have been all over that spectrum. Let’s consider even the serious, well-paid producer. You’ve got a tight scoring deadline, and the job needs a really particular sound, and you’re literally counting the minutes and sweating. Or… you’ve got a TV ad spot, and you need to make something sound completely original, and not like any particular preset you’ve heard before.
This also really isn’t about beginners or advanced users. An advanced user may have a very precise sound in mind – even to sit atop some meticulously hand-crafted sounds. And one of the first things a lot of beginners like to do is mess around with samples they record with a mic. (How many kids made noises into a Casio SK-1 in the 80s?)
I got to sit down with Output CEO and founder Gregg Lehrman, and we took a deep look at Arcade and had a long talk about what it’s about and what the future of music making might be. We’ll look more in detail at how you can use this as an instrument, but let’s cover what this is.
Choose your DAW – here’s Arcade running inside Ableton.
It’s a plug-in. This is important. You’ll always be interacting with Arcade as a plug-in, inside your host of choice – so no shuttling back and forth to a Website, as some solutions currently make you do. Omni-format – Mac AU / VST / VST3 / AAX, Windows VST / VST3 / AAX 32- and 64-bit. (Native Linux support would be nice, actually, but that’s missing for now.)
Sounds can match your tempo and key. You can hear sounds in their original form, or conform to the tempo and pitch that matches your current project. (Loopmasters does this too, actually, but via a separate app combined with a plug-in, which is a bit clunky.)
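Under the hood, conforming a loop to a project amounts to computing a time-stretch ratio and a pitch shift. A hypothetical sketch – not Output's code, and the root-note convention (pitch classes 0–11, C = 0) is my own assumption:

```python
def conform(loop_bpm, loop_root, project_bpm, project_root):
    """Return (stretch ratio, semitone shift) to match a project.

    ratio > 1.0 means the loop must play faster; the shift is wrapped
    to the nearest direction so the loop never jumps by 7+ semitones.
    """
    ratio = project_bpm / loop_bpm
    semitones = (project_root - loop_root) % 12
    if semitones > 6:          # prefer shifting down over a big jump up
        semitones -= 12
    return ratio, semitones

# A 120 BPM loop in C (0) conformed to a 132 BPM project in A (9):
ratio, shift = conform(120, 0, 132, 9)
# ratio = 1.1, shift = -3 (down three semitones)
```

A time/pitch engine then applies the ratio and shift independently, which is why the plug-in can also play the loop "in its original form" by skipping both.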
Browse through curated collections of sounds, which are paid for by subscription, Spotify/Netflix-style.
It lets you find sounds online. On-demand cloud browsing lets you check out selections of sounds, complete kits, and loops. You can preview all of these right away. Now, Netflix-style, Output promises new stuff every day, so you can browse around for something to inspire you if you’re feeling stuck. And at least in the test, these were organized with a sense of style and character – more like looking at the output of a music label than the usual commodity catalog of samples.
Search, browse, tagging, and the usual organizing tools are there, too – but it’s probably the preview and curation that puts this over the top.
— but it works if you’re offline, too. Prefer to switch the Internet off in your studio to avoid distractions? Work in an underground bunker, or in the hollowed out volcano island you use as an evil lair? Happily, you don’t need an Internet connection to work.
The keyboard (or whatever MIDI controller you’ve mapped) triggers loops, but also manipulates them on the fly. That lets you radically transform samples as you play – including your own sounds.
You can play the loops as an instrument. Okay, so the whole reason we went into music presumably is that we love the process of making music. Output excels here by letting you load loops into a 15-voice synth, then mangle and warp and modify the sound. It works really well from a keyboard or other MIDI controller.
This isn’t a sample playback instrument in the traditional sense, in terms of how it maps to pitch. Instead, white notes trigger samples, and black notes trigger modifiers. That’s actually really crazy to play around with, because it feels a little like you’re doing production work – the usual mouse-based chores of editing and modifying samples – as you play along, live.
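The white/black split maps cleanly onto MIDI pitch classes. A hypothetical sketch of that layout – the function name and role labels are my own, not Arcade's API:

```python
# The five black keys per octave, as pitch classes: C#, D#, F#, G#, A#
BLACK_PITCH_CLASSES = {1, 3, 6, 8, 10}

def classify(note):
    """Map a MIDI note number (0-127) to an Arcade-style role:
    white keys trigger samples, black keys trigger modifiers."""
    return "modifier" if note % 12 in BLACK_PITCH_CLASSES else "sample"

# Middle C (60) plays a sample; C#4 (61) fires a modifier instead.
roles = [classify(n) for n in (60, 61, 62, 63, 64)]
```

The appeal of this layout is that both hands stay on one keyboard: one set of keys supplies content, the adjacent keys reshape it in real time.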
There’s also input quantize, in case your keyboard skills aren’t so precise.
There are tons of modifiers and modulation and effects. Like all of Output’s products, the recipe is, build a deep architecture, then encapsulate it in an easy interface. That way, you can mess around with a simple control and make massive changes, which gets you discovering possibilities fast, but also go further into the structure if you want to get more specific about your needs, and if you’re willing to invest more time.
In this case, Arcade is built around four main sliders that control the character of the sound, both subtle and radical, and then another eleven effects and a deep mixing, routing, and modulation engine underneath.
So, let’s get into specifics.
Each Loop Voice – up to 15 of them – has a whole bunch of controls. It really would be fair to call this a synth:
• Multimode Filter with Resonance and Gradual/Steep Curve
• Volume, Pan, Attack/Release and Loop Crossfade
• Speed Control (1/4, 1/2, x1, x2)
• Tune Control (+/- 24)
• Loop Playback (Reverse, Pendulum, Loop On/Off, Hold)
• FX Sends Pre/Post x2
• Modifier Block
• Sync On/Off
There’s also a time/pitch stretch engine with both “efficient” and resource-intensive “high quality” modes:
• BPM & Time signature Control
• Key Signature control
• Formant Pitch Control
Since the point is playing, there’s velocity sensitivity, too – how hard you hit the keys can impact filter cutoff and resonance, volume, and formant.
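A velocity-to-parameter mapping like that is simple to sketch. The exponential curve below is a common choice for filter cutoff, since we hear filter sweeps logarithmically – though the actual curve Arcade uses is unknown to me, and the frequency range is made up:

```python
def velocity_to_cutoff(velocity, min_hz=200.0, max_hz=8000.0):
    """Scale MIDI velocity (0-127) to a filter cutoff in Hz.

    Uses an exponential curve so equal velocity steps sound like
    equal musical steps, rather than bunching up at the low end.
    """
    amount = max(0, min(velocity, 127)) / 127.0
    return min_hz * (max_hz / min_hz) ** amount

velocity_to_cutoff(0)    # softest hit: darkest tone (min_hz)
velocity_to_cutoff(127)  # hardest hit: filter fully open (max_hz)
```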
But plenty of tools can do all of the above. It’s the modifiers that get interesting – little macros that are accessible as you play:
• ReSequence (16 steps with Volume, Mute, Reverse, Speed, Length and Attack/Decay control per step)
• Playhead (Speed, Reverse, Loop On/Off, Cue Point per Loop)
• Repeater (Note Repeat with Rate, Reverse, Volume Step Sequencer)
• Session Key Control
Plus there’s a Resequencer for sequencing sound slices into new combinations.
The Resequencer gives you even more options for manipulating audio content and turning it into something new.
– combined with modulation:
• LFO/Step (x2) Sync/Free mode with Retrigger and Rate.
• Waveshape Control
• Attack, Phase, Chaos and Polarity Control
Deep modulation options either power presets – or your own sound creations, if you’re ready to tinker.
And there’s a complete mixer:
• 15 Channel Mixer with Volume, Pan, Pre/Post Send FX(x2), Solo
• Send Bus (x2) with Volume, Pan and Mute
• 2 insert FX slots per Bus
• Master Bus with Volume, Pan, Mute and 4 Insert FX slots
The Mixer combines up to 16 parts.
Plus a whole mess of effects. Those effects helped define the character of earlier Output instruments, so it’s great to see here:
It wouldn’t be an Output product without some serious multi-effects options.
Output likes to pitch itself as the “secret sauce” behind everything from Kendrick Lamar to the soundtracks for Black Panther and Stranger Things – but what I absolutely adore is that you can load your own samples.
Native Instruments has built a great ecosystem around their tools, including Kontakt – and Output have made use of that engine. But it’s great to see this ground-up creation that introduces some different paradigms around what you can do with sampled sound. That instant access to playing – to tapping into your muscle memory, your improvisation skills – I think could be really transformative. We’ve seen artists like Tim Exile advocate this approach in how he works, and it’s an element in a lot of great improvisers’ electronic work. What Output have done is allow you to combine sound discovery with instant playability.
The subscription model means you don’t have to reach for your credit card when you find sounds you want. But if you cancel the $10 a month subscription, unlike a Spotify or Netflix, you don’t lose access to your sounds. Output says:
If you open an older session, you will be prompted to log in, and you will not be able to click past the log in screen. You will be able to play back any MIDI or automation data recorded in your saved session. It will sound exactly the same, but you won’t be able to browse or tweak the character of the sound within the plugin. The midi can still be changed as the preset stays loaded in a session as long as you don’t uninstall Arcade which will remove all the audio samples. The best way to see what I mean is to test it yourself. Put ARCADE into a midi track, then log out of the plug-in. With the GUI still open albeit stuck on the log-in screen, play your track and hit some keys.
The fact that it’s all powered by subscription also means you’ll always have something there to use. But I do hope for the sake of sound creators – and because this engine is so cool – that Output also consider à la carte purchasing of some sound selections. That could support more intensive sound design processes. And the interface looks like it’d work well as a shop, too. I share some of the concerns of sound artists that subscription models could hurt sound design the way they have hurt recorded music. On the other hand, to use music as an example, a lot of us have both a subscription and buy a ton of stuff from Bandcamp, including underground music.
Let us know what you think.
I’ll be back with a guide to how to load your own sounds and play this as an instrument / design and modify sounds in a more sophisticated way.
Machine learning and new technologies could unlock new frontiers of human creativity – or they could take humans out of the loop, ushering in a new nightmare of corporate control. Or both.
Machine learning, the field of applying neural networks to data analysis, unites a range of issues from technological to societal. And audio and music are very much at the center of the transformative effects of these technologies. Commonly dubbed (partly inaccurately) “artificial intelligence,” they suggest a relationship between humans and machines, individuals and larger state and corporate structures, far beyond what has existed traditionally. And that change has gone from far-off science fiction to a reality that’s very present in our homes, our lives, and of course the smartphones in our pockets.
I had the chance to co-curate with CTM Festival a day of inputs from a range of thinkers and artist/curators earlier this year. Working with my co-host, artist and researcher Ioann Maria, we packed a day full of ideas and futures both enticing and terrifying. We’ve got that full afternoon, even including audience discussion, online for you to soak in.
Me, with Moritz, pondering the future. Photo: CTM Festival / Isla Kriss.
And there are tons of surprises. There are various terrifying dystopias, with some well-reasoned arguments for why they might actually come to fruition (or evidence demonstrating these scenarios are already in progress). There are more hopeful visions of how to get ethics, and humans, back in the loop. There are surveys of artistic responses.
All of this kicked off our MusicMakers Hacklab at CTM Festival, which set a group of invited artists on collaborative, improvisatory explorations of these same technologies as applied to performance.
These imaginative and speculative possibilities become not just idle thoughts, but entertaining and necessary explorations of what might be soon. This is the Ghost of Christmas Yet-to-Come, if a whole lot more fun to watch, here not just to scare us, but to spur us into action and invention.
Let’s have a look at our four speakers.
Machine learning and neural networks
Moritz Simon Geist: speculative futures
Who he is: Moritz is an artist and researcher; he joined us for my first-ever event for CTM Festival with a giant robotic 808, but he’s just as adept at researching history and possible futures.
Topics: Futurism, speculation, machine learning and its impact on music, body enhancement and drugs
Takeaways: Moritz gives a strong introduction to style transfer and other machine learning techniques, then jumps into speculating on where these could go in the future.
In this future, remixes and styles and timbres might all become separate from a more fluid creativity – but that might, in turn, dissolve artistic value.
“In the future … music will not be conceived as an art form any more.” – Moritz Simon Geist
Then, Moritz goes somewhere else entirely – dreaming up speculative drugs that could transform humans, rather than only machines. (The historical basis for this line of thought: Alexander Shulgin and his drug notebooks, which might even propose a drug that transforms perception of pitch.)
Moritz imagines an “UNSTYLE” plug-in that can extract vocals – then change genre.
What if self-transformation – or even fame – were in a pill?
Gene Kogan: future dystopias
Who he is: An artist/technologist who works with generative systems and their overlap with creativity and expression. Don’t miss Gene’s expansive open source resource for code and learning, machine learning for artists.
Topics: Instrument creation, machine learning – and eventually AI’s ability to generate its own music
Takeaways: Gene’s talk began with “automation of songwriting, production, and curation” as a topic – but it tilted enough toward dystopia that he changed the title.
“This is probably going to be the most depressing talk.”
In a more hopeful vision, he presented the latest work of Snyderphonics – instruments that train themselves as musicians play, rather than only the other way around.
He turned to his own work in generative models and artistic works like his Donald Trump “meat puppet,” but presented a scary image of what would happen if eventually analytic and generative machine learning models combined, producing music without human involvement:
“We’re nowhere near anything like this happening. But it’s worth asking now, if this technology comes to fruition, what does that mean about musicians? What is the future of musicians if algorithms can generate all the music we need?”
References: GRUV, a generative model for producing music
WaveNet, the DeepMind tech being used by Google for audio
Wesley Goatley: dark systems
Who he is: A sound artist and researcher in “critical data aesthetics,” plumbing the meaning of data from London in his own work and as a media theorist
Topics: Capitalism, machines, aesthetics, Amazon Echo … and what they may all be doing to our own agency and freedom
Takeaways: Wesley began with “capitalism at machine-to-machine speeds,” then led to ways this informed systems that, hidden away from criticism, can enforce bias and power. In particular, he pitted claims like “it’s not minority report – it’s science; it’s math!” against the realities of how these systems were built – by whom, for whom, and with what reason.
“You are not working them; they are working you.”
As companies like Amazon and Google extend control, under the banner of words like “smart” and “ecosystem,” Wesley argues, what they’re really building is “dark systems”:
“We can’t get access or critique; they’re made in places that resemble prisons.”
The issue then becomes signal-to-noise. Data isn’t really ever neutral, so the position of power lets a small group of people set an agenda:
“[It] isn’t a constant; it’s really about power and space.”
Wesley on dark connectionism, from economics to design. Photo: CTM Festival / Isla Kriss.
Deconstructing an Amazon Echo – and data and AI as echo chamber. Photo: CTM Festival / Isla Kriss.
What John Cage can teach us: silence is never neutral, and neither is data.
Estela Oliva: emotionally intelligent futures
Estela also found dystopian possibilities – as bias, racism, and sexism are echoed in automated machines. (Contrast, indeed, the machine-to-machine amplification of those worst characteristics with the more hopeful human-machine artistic collaborations here, perhaps contrasting algorithmic capitalism with individual humanism.)
But she also contrasted that with more emotionally intelligent futures, especially with the richness and dimensions of data sets:
“We need to build algorithms that represent our values better – but I’m just worried that unless we really talk about it more seriously, it’s not going to happen.”
Estela Oliva, framed by Memo Akten’s work. Photo: CTM Festival / Isla Kriss.
It was really a pleasure to put this together. There’s obviously a deep set of topics here, and ones I know we need to continue to cover. Let us know your thoughts – and we’re always glad to share in your research, artwork, and ideas.
Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).
Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing some goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)
And indeed, we’ve seen this stuff highlighted a lot recently, from game and PC companies talking VR (including via Steam), Facebook showing off Oculus (the Kickstarter-funded project it acquired), and this week Apple making augmented reality a major selling point of its coming iOS releases and developer tools.
But what is this stuff actually for?
That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.
They’ve got two apps now, one for VR, and one for AR.
Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:
Unlike the sound toys we saw just after the launch of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning the app into a (mobile) venue. So in addition to Matmos, you get creations by the likes of a Ryuichi Sakamoto collaborator, or Robert Lippok (of Raster Media, née Raster-Noton).
But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.
The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re also helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper with the music, and take the experience home.
The results can be totally crazy. Here’s one example:
Pitchfork go into some detail as to how this app came about:
We’ve seen some clumsy attempts at VR for music before. Generally, they involve rethinking an interface that already works perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” them in a way that … makes them slightly stupid to use.
It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.
And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!
And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:
One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.
Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.
Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)
Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking about music as an isolated element, and connecting it to architecture and memory.)
“We were talking about imagining sound. Sounds from memories, sounds from everyday life, and unheard sounds. Later on, we started to create sonic events just with words, which we translated into some tracks. ‘Drawing from Memory’ is a sonic interpretation of one of those sound/word pieces. FIELDS now makes it possible to unfold the individual parts of this composition and at the same time frees it from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.”
Check out that whole article, as it’s also a great read:
Remember when we were sold on everything being clean and digital? Now it’s just about grime and filth. But if you were wondering where to start with Novation’s cute, dirty Circuit Mono Station, they’ve got a series of hands-on videos to get you going.
Some back story: the Mono Station is the follow-up to the first Circuit. Like the original, it’s a square-ish box with a colored grid at its center. But whereas the original Circuit concealed a digital polysynth and drum machine (with the ability to load your own samples), the Mono Station is all about analog synthesis. That means it also has additional controls, and unlike the mysterious macro encoders on the first Circuit, the Mono Station’s knobs and faders and bits actually have labels. So you can read a label with words on it, and, you know, maybe have a better idea what you’re doing. Or you can just ignore that and give it a try anyway.
The “How to filth” series runs through a set of fairly practical ideas to get you going.
It’s really rather a nice way to get a manual. There’s no lengthy explanation, no theory – and no sitting through a really long tutorial. Just watch a few steps, and then see if you can copy more or less what they’ve done. That should help you dive straight in. And if you’re on the fence about the Circuit Mono Station, this gives you some stuff to go try if you’re borrowing a friend’s hardware or going to the shops.
Here’s the full series:
This is a great one for summer, too, as Circuit and Circuit Mono Station are nicely portable.
What do you think? Is this sort of thing useful to you? Would you want to see more / something different? Let us know; it’s great to get feedback from readers on what’s making you musically productive. And if you make some tunes with it, send us those, too!
Here’s our story on the instrument, at launch. Some time later, it’s still holding up at that price point – and it’s not a clone or throwback, either, but a totally new instrument, designed by some nice people in England. (I know – I’ve met them! And they’re musicians, as well, of course!)
Patching music and visuals is fun, but it helps to learn from other people. With everything from apps (Audulus) to modulars (Softube, VCV Rack) to code and free software (Pd, SuperCollider, Bela), Patchstorage is like a free social network for actually making stuff.
It’s funny that we needed international scandal, political catastrophe, numerous academic studies of depression, and everyone’s data getting given away before figuring it out – Facebook isn’t really solving all our problems. But that opens up a chance for new places to find community, learn from each other, and focus on the things we love, like making live music and visuals.
Enter Patchstorage. Choose a platform you’re using – or maybe discover one you aren’t. (Cabbage, for instance, is a free platform for making music software based on Csound.)
If you’re a newcomer, you can attempt to just load this up and make sound. And a lot of these patches are made for free environments, meaning you don’t have to spend money to check them out. If you’re a more advanced user, of course, poking through someone else’s work can help you get outside your own process. And there are those moments of – “oh, I didn’t know this did that,” or “huh, that’s how that works.”
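If you’ve never peeked inside one, a patch in an environment like Pd is often just a small plain-text file listing objects and the connections between them – which is part of why sharing them on a site like Patchstorage works so well. As a rough illustration (not any particular upload from the site), a minimal Pd patch that sends a quiet 440 Hz sine tone to the audio output looks like this:

```
#N canvas 0 0 450 300 12;
#X obj 50 50 osc~ 440;
#X obj 50 100 *~ 0.1;
#X obj 50 150 dac~;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 1 0 2 1;
```

Each `#X obj` line declares an object (a sine oscillator, a volume scaler, the audio output), and each `#X connect` line wires an outlet of one object to an inlet of another. Download someone’s patch, open it in the free editor, and you can trace exactly how it works.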
There are also, naturally, a ton of creations for VCV Rack, the free and open source Eurorack modular emulation we’ve been going on about so much lately.
Oh, yeah, and — another thing. This doesn’t use Facebook as its social network. Instead, chats are powered by gamer-friendly, Slack-like chat client Discord. That means a new tool to contend with when you want to talk about patches, but it does mean you get a focused environment for doing so. So you won’t be juggling your ex, your boss, some spammers, and propaganda bots in the middle of an environment that’s constantly sucking up data about you.