Relive Legowelt’s radio show, Astro Unicorn Radio

For a few glorious years, Legowelt had a radio show, Thursday evenings on Intergalactic FM internet radio. But while the show is gone, the sounds live on.

Why am I bringing this up now? Well … I owe that notion to Xeni Jardin of Boing Boing, back in the heyday of the blog from whence this site came. Any extended period of, say, reading legal filings surely deserves a unicorn chaser.

And Legowelt comes to our rescue.

The show ran from 2007 to 2011, and was as eclectic and glorious as you’d expect from Legowelt. Brazilian Moog Cruisin’? Nigerian boogie disco? Check. Or, for instance:

Another radio reportage, this time from the cold snowy Rotterdam where we investigate Mono-Poly’s & Dr. Albert Putnam’s research in Biorhythms using modular synthesizers such as the Fenix and Buchla.

It’s a perfect template for what nerdy music things should be.

There’s a full archive of the tail end of the show in MP3 form, which you can grab as long as it lasts.

http://www.moosleybay.com/astro.htm

Episodes are on Mixcloud, too, from the source – from the beginning:

You’re welcome.

And thanks, Legowelt.


Behringer responds to reports, defends reverse engineering

MUSIC TRIBE and Behringer responded early today to CDM’s request for comment, following revelations that the company had targeted a Chinese website and Dave Smith Instruments with threatened or real legal action over criticism of the company’s business practices.

Uli Behringer, company CEO and founder of holding company MUSIC TRIBE, shared the following, which I’ve included in its entirety. (He also posted the same message to their Facebook group.)

In the message, Behringer doubles down on the claim that comments posted by a Dave Smith Instruments employee to the Gearslutz forum, as well as by Chinese news site Midifan, are false and constitute illegal defamation. He also defends the practice of what he describes as “reverse engineering” in their product development process.

Here’s their side of the story, as represented to us:

Hi Peter,

Thank you for reaching out and giving us an opportunity to respond in detail which we appreciate.

This is actually a first in our history with CDM and we welcome the change. As usual there are always two sides to any story and in the spirit of transparency and fairness we believe both sides should be heard. Since much revolves around “Defamation,” please find a quick Wiki link.

https://en.wikipedia.org/wiki/Defamation

Chinese Media Case
Allow me to first comment on the previous story related to the Chinese Media case. While you had claimed to have reached out to us for comments, there is no such record in any of our systems. You only contacted me and Michael Lapke last weekend after the news was already a week old.

Let me start by saying that we don’t have any problem with people criticizing us. In fact we appreciate constructive criticism as that’s the only way to learn. What we have a problem with is when our employees are being called highly offensive and insulting names by media outlets. Unfortunately your article did not properly reflect the full content and background of the language used, which in the Chinese culture has a highly different sensitivity and legality.

This was not only raised by our Chinese colleagues but also customers of this media site who felt compelled to contact us. Also publishing pictures of a cancer-fighting colleague in a hospital bed has caused deep concerns among our people.

We sent the owner of the publishing site a Cease-and-Desist letter, but he was never sued as wrongly reported. We have since spoken with the publisher and they have promised to remove the offensive language and refrain from posting such slur in the future. We consider this case to be resolved and he also has a standing invitation to visit us.

Since our employee welfare and integrity have been severely questioned by this Chinese magazine, whose accusations have later been repeated by CDM and other publishers without fact checking, I would like to post a link to a local job portal that may give you a different impression. We also invited you Peter (and everyone else) to visit us, both in Manchester and Zhongshan.

We are very proud that we have been ranked Zhongshan’s No. 1 employer by the leading and independent job site (http://www.jobui.com/company/35895/)

Our factory MUSIC Tribe City is ranked:

· No 1 most popular electronics company
· No 1 most popular recruiting company
· No 1 most employee caring company

I am very proud of our local leaders who go out of their way to make a difference for our employees. If you would like to learn more about our MUSIC Tribe City, here is a video.

DSI Case

Some time ago an employee of DSI had posted incorrect and slanderous statements about our company on multiple forums. We put both the employee as well as DSI on notice and received a signed Cease-and-Desist letter from the employee where he assured us that he would refrain from such future comments. I have attached a copy of the undertaking of the employee to stop making such comments. In the reply of DSI, the company stated that it has instructed all employees to stop making any false or derogatory statements against us.

It is important to understand that this is not a legal action against a mere individual but a representative of a competitor. Any such false and disparaging comments made by DSI’s employee are damaging and inappropriate in a highly competitive market such as ours. Unfortunately and despite the signed declaration, the individual working for DSI chose to continue to make such claims and hence we were forced to take legal action. If the employee had stopped his actions as agreed, the case would have never been filed. While I am not a lawyer, I can only assume that including 20 “John Does” is part of a standard legal procedure to include other potential individuals related to the company. For clarity purposes, this case has nothing to do with any particular forum or individuals other than those related to DSI.

Misconception around IP

Allow me to post an article about IP (Intellectual Property) as this is an important one to us. Especially because we have been accused of not honoring the IP of other manufacturers. I have heard and read over the years many accounts of lawsuits, judgments and sanctions against our company that are frankly based in fiction and not fact.

Technology is free for anyone to use unless it is protected

This is the fundamental principle of every industry and how we as a society progress and evolve. Imagine there was only one car or guitar manufacturer. I welcome this opportunity to set the record straight not only on past cases but to also clarify our view on IP and what constitutes fair competition as well.

About 30 years ago, as a small garage operation, we became involved in a patent dispute with Aphex over a processor we were building. At that time there were several companies who produced those exciters, such as Akai, SPL, D&R, etc. Our patent attorney advised us that the Aphex patent was invalid and I also applied for my own patent (DE3904425), with sponsorship from the acclaimed Fraunhofer Institute, the inventors of MP3. Despite assurances and our own beliefs, we ended up in court where the judge ruled in Aphex’s favor and we lost the case. We paid damages and moved on.

This case illustrates very clearly what I came to understand over the ensuing nearly 30 years about patents and IP. Disputes over intellectual property are commonplace in many industries and especially so in the technology industry. IP is a grey area, as it deals with patents, trade dress, copyrights, designs etc. where not much is black and white.

Just look at cases with Roland versus InMusic, Gibson versus PRS, Peavey versus QSC, Microsoft, Blackberry, Yahoo, Google, Samsung, Apple etc. Lawsuits are often used as “guerilla tactics” and especially common in the US where legal fees are sky high and each party has to pay its own fees regardless of the outcome of the case. This, along with the fact that IP litigation is often used as a tool to push a competitor out of business, are reasons why there are so many cases in this area of law.

Misconceptions around IP

One needs to be clear about the distinction between blatantly copying someone else’s product and the principle of reverse engineering. Copying a product 1:1 is clearly illegal, however reverse engineering is something that takes place every day and is accepted as part of a product development process known as benchmarking.

Often one company will establish a new market opportunity for a unique product and others will follow with their versions of that pioneering product. Think iPhone followed by Samsung Galaxy. This is the principle of competition.

The article from Berkeley Law School makes a great read and provides valuable background information. A quick excerpt demonstrates why public opinion often differs from the law.

“Reverse engineering has a long history as an accepted practice. Lawyers and economists have endorsed reverse engineering as an appropriate way for firms to obtain information about another firm’s product, even if the intended result is to make a directly competing product that will draw away customers from the maker of the first product.”

One of the cases that endures in people’s memories is when we were sued by Mackie over alleged infringement of their IP. After a series of very costly and bitter court cases, all of which we won, Mackie reached out to us for a settlement which did not involve any money. It was proven in court that we had not copied their schematics or PCB layouts, nor had we infringed on any patents as there were none. Nor had there ever been any legal cases brought by BBE, dbx or Drawmer as claimed by Mackie as part of their marketing campaign against us and which was later erroneously reported by Wikipedia and even CDM.

In our first two decades, most of our products were designed to follow market leaders with similar features and appearance, at a lower cost. This value proposition upset many of our competitors while at the same time earning us a huge fan base among customers. I fully understand that many of those competitors would be frustrated by our ability to deliver equivalent or better products at significantly lower prices and that is the source of much of the anger directed at us by them.

Since the Aphex case we have been sued several times and we equally had to sue competitors over infringement of our IP. This happens in every industry and is part of a fierce and competitive landscape.

However, to be clear, we have not lost any substantial IP case since the Aphex case 30 years ago and legal cases are a matter of public record.

We are committed to never engaging in any activity that willfully infringes on the intellectual property rights of any company or individual. However, we are also aware that legal wrangling will continue as we press on with our philosophy of delivering the best products at the lowest possible cost.

We welcome criticism

I am a big believer in free speech and welcome any form of constructive criticism, as this is the only way for us to learn and improve. We also don’t mind any comments made or language used by individuals as this is a matter of personal choice.

It becomes sensitive when incorrect or defamatory statements are made by competitors and the media. While there is free speech, words do have consequences and since we are all bound by the law, the rules should be applied equally to everyone.
Once again, I understand that people have their opinions and preferences and I fully respect that. I also understand that some people don’t like me or our company, and choose not to buy our products, which I respect, too.

Since we started our company 30 years ago, we have always carefully listened to our customers and built what they wanted us to build. Sometimes people would request us to improve an existing product in the market, sometimes they would come up with a complete new idea. In fact many of the ideas for our most successful products have actually come from our customers and for that we are immensely grateful.

However, we are also aware that legal wrangling will continue as we press on with our philosophy of delivering the best products at the lowest possible cost.

This is the philosophy I started the company on 30 years ago, and this is the philosophy that will carry us into the future.

Thanks for listening.

Uli

Pictured: a mock-up of Music Tribe City.


Minds, machines, and centralization: AI and music

Far from the liberated playground the Internet once promised, online connectivity now threatens to give us mainly pre-programmed culture. As we continue reflections on AI from CTM Festival in Berlin, here’s an essay from this year’s program.

If you attended Berlin’s festival this year, you got this essay I wrote – along with a lot of compelling writing from other thinkers – in a printed book in the catalog. I asked for permission from CTM Festival to reprint it here for those who didn’t get to join us earlier this year. I’m going to actually resist the temptation to edit it (apart from bringing it back to CDM-style American English spellings), even though a lot has happened in this field even since I wrote it at the end of December. But I’m curious to get your thoughts.

I also was lucky enough to get to program a series of talks for CTM Festival, which we made available in video form with commentary earlier this week, also with CTM’s help:
A look at AI’s strange and dystopian future for art, music, and society

The complete set of talks from CTM 2018 is now available on SoundCloud. It’s a pleasure to get to work with a festival that not only has a rich and challenging program of music and art, but serves as a platform for ideas, debate, and discourse, too. (Speaking of which, greetings from another European festival that commits to that – SONAR, in Barcelona.)

The image used for this article is an artwork by Memo Akten, used with permission, as suggested by curator and CTM 2018 guest speaker Estela Oliva. It’s called “Inception,” and it’s a perfect example, I think, of how artists can make these technologies expressive and transcendent, amplifying their flaws into something uniquely human.

Minds, Machines, and Centralization: Why Musicians Need to Hack AI Now

IN THIS ARTICLE, CTM HACKLAB DIRECTOR PETER KIRN PROVIDES A BRIEF HISTORY OF THE CO-OPTING OF MUSIC AND LISTENING BY CENTRALIZED INDUSTRY AND CORPORATIONS, IDENTIFYING MUZAK AS A PRECURSOR TO THE USE OF ARTIFICIAL INTELLIGENCE FOR “PRE-PROGRAMMED CULTURE.” HE GOES ON TO DISCUSS PRODUCTIVE WAYS FOR THOSE WHO VALUE “CHOICE AND SURPRISE” TO REACT TO AND INTERACT WITH TECHNOLOGIES LIKE THESE THAT GROW MORE INESCAPABLE BY THE DAY.

It’s now a defunct entity, but “Muzak,” the company that provided background music, was once everywhere. Its management saw to it that its sonic product was ubiquitous, intrusive, and even engineered to impact behavior — and so the word Muzak became synonymous with all that was hated and insipid in manufactured culture.

Anachronistic as it may seem now, Muzak was a sign of how telecommunications technology would shape cultural consumption. Muzak may be known for its sound, but its delivery method is telling. Nearly a hundred years before Spotify, founder Major General George Owen Squier originated the idea of sending music over wires — phone wires, to be precise, but still not far off from where we’re at today. The patent he got for electrical signaling doesn’t mention music, or indeed even sound content. But the Major General was the first successful business founder to prove in practice that electronic distribution of music was the future, one that would take power out of the hands of radio broadcasters and give the delivery company additional power over content. (He also came up with the now-loathed Muzak brand name.)

What we now know as the conventional music industry has its roots in pianola rolls, then in jukeboxes, and finally in radio stations and physical media. Muzak was something different, as it sidestepped the whole structure: playlists were selected by an unseen, centralized corporation, then piped everywhere. You’d hear Muzak on your elevator ride in a department store (hence the phrase “elevator music”). There were speakers tucked into potted plants. The White House and NASA at some points subscribed. Anywhere there was silence, it might be replaced with pre-programmed music.

Muzak added to its notoriety by marketing the notion of using its product to boost worker productivity, through a pseudo-scientific regimen it called the “stimulus progression.” And in that, we see a notion that presages today’s app behavior loops and motivators, meant to drive consumption and engagement, ad clicks and app swipes.

Muzak for its part didn’t last forever, with stimulus progression long since debunked, customers preferring licensed music to this mix of original sounds, and newer competitors getting further ahead in the marketplace.

But what about the idea of homogenized, pre-programmed culture delivered by wire, designed for behavior modification? That basic concept seems to be making a comeback.

Automation and Power

“AI” or machine intelligence has been tilted in the present moment to focus on one specific area: the use of self-training algorithms to process large amounts of data. This is a necessity of our times, and it has special value to some of the big technical players who just happen to have competencies in the areas machine learning prefers — lots of servers, top mathematical analysts, and big data sets.

That shift in scale is more or less inescapable, though, in its impact. Radio implies limited channels; limited channels imply human selectors — meet the DJ. The nature of the internet as wide open for any kind of culture means wide-open scale. And it will necessarily involve machines doing some of the sifting, because it’s simply too large to operate otherwise.

There’s danger inherent in this shift. One, users may be lazy, willing to let their preferences be tipped for them rather than face the tyranny of choice alone. Two, the entities that select for them may have agendas of their own. Taken as an aggregate, the upshot could be greater normalization and homogenization, plus the marginalization of anyone whose expression is different, unviable commercially, or out of sync with the classes of people with money and influence. If the dream of the internet as a global music community seems in practice to lack real diversity, here’s a clue as to why.

At the same time, this should all sound familiar — the advent of recording and broadcast media brought with it some of the same forces, and that led to the worst bubblegum pop and the most egregious cultural appropriation. Now, we have algorithms and corporate channel editors instead of charts and label execs — and the worries about payola and the eradication of anything radical or different are just as well-placed.

What’s new is that there’s now also a real-time feedback loop between user actions and automated cultural selection (or perhaps even soon, production). Squier’s stimulus progression couldn’t monitor metrics representing the listener. Today’s online tools can. That could blow apart past biases, or it could reinforce them — or it could do a combination of the two.

In any case, it definitely has power. At last year’s CTM hacklab, Cambridge University’s Jason Rentfrow looked at how music tastes could be predictive of personality and even political thought. The connection was timely, as the talk came the same week that Trump assumed the U.S. presidency, his campaign having employed social media analytics to determine how to target and influence voters.

We can no longer separate musical consumption — or other consumption of information and culture — from the data it generates, or from the way that data can be used. We need to be wary of centralized monopolies on that data and its application, and we need to be aware of how these sorts of algorithms reshape choice and remake media. And we might well look for chances to regain our own personal control.

Even if passive consumption may seem to be valuable to corporate players, those players may discover that passivity suffers diminishing returns. Activities like shopping on Amazon, finding dates on Tinder, watching television on Netflix, and, increasingly, music listening, are all experiences that push algorithmic recommendations. But if users begin to follow only those automated recommendations, the suggestions fold back in on themselves, and those tools lose their value. We’re left with a colorless growing detritus of our own histories and the larger world’s. (Just ask someone who gave up on those Tinder dates or turned to friends because they couldn’t work out the next TV show to binge-watch.)

There’s also clearly a social value to human recommendations — expert and friend alike. But there’s a third way: use machines to augment humans, rather than diminish them, and open the tools to creative use, not only automation.

Music is already reaping benefits of data training’s power in new contexts. By applying machine learning to identifying human gestures, Rebecca Fiebrink has found a new way to make gestural interfaces for music smarter and more accessible. Audio software companies are now using machine learning as a new approach to manipulating sound material in cases where traditional DSP tools are limited. What’s significant about this work is that it makes these tools meaningful in active creation rather than passive consumption.

AI, back in user hands

Machine learning techniques will continue to expand as tools by which the companies mining big data make sense of their resources — from ore into product. It’s in turn how they’ll see us, and how we’ll see ourselves.

We can’t simply opt out, because those tools will shape the world around us with or without our personal participation, and because the breadth of available data demands their use. What we can do is to better understand how they work and reassert our own agency.

When people are literate in what these technologies are and how they work, they can make more informed decisions in their own lives and in the larger society. They can also use and abuse these tools themselves, without relying on magical corporate products to do it for them.

Abuse itself has special value. Music and art are fields in which these machine techniques can and do bring new discoveries. There’s a reason Google has invested in these areas — because artists very often can speculate on possibilities and find creative potential. Artists lead.

The public seems to respond to rough edges and flaws, too. In the 60s, when researcher Joseph Weizenbaum attempted to parody a psychotherapist with crude language pattern matching in his program, ELIZA, he was surprised when users started to tell the program their darkest secrets and imagine understanding that wasn’t there. The crudeness of Markov chains as a predictive text tool — they were developed to analyze the statistics of Pushkin’s verse, not to generate language, after all — has given rise to breeds of poetry based on their very weirdness. When Google’s style transfer technique was applied using a database of dog images, the bizarre, unnatural images that warped photos into dogs went viral online. Since then, Google has developed vastly more sophisticated techniques that apply realistic painterly effects and… well, it seems that’s attracted only a fraction of the interest that the dog images did.

Maybe there’s something even more fundamental at work. Corporate culture dictates predictability and centralized value. The artist does just the opposite, capitalizing on surprise. It’s in the interest of artists if these technologies can be broken. Muzak represents what happens to aesthetics when centralized control and corporate values win out — but it’s as much the widespread public hatred that’s the major cautionary tale. The values of surprise and choice win out, not just as abstract concepts but also as real personal preferences.

We once feared that robotics would eliminate jobs; the very word is derived (by Czech writer Karel Čapek’s brother Josef) from a word for forced labor. Yet in the end, robotic technology has extended human capability. It has brought us as far as space and taken us through Logo and its Turtle, even taught generations of kids math, geometry, logic, and creative thinking through code.

We seem to be at a similar fork in the road with machine learning. These tools can serve the interests of corporate control, optimized only for lazy, passive consumption that extracts value from its human users. Or, we can abuse and misuse the tools, take them apart and put them back together again, apply them not in the sense that “everything looks like a nail” when all you have is a hammer, but as a precise set of techniques to solve specific problems. Muzak, in its final days, was nothing more than a pipe dream. What people wanted was music — and choice. Those choices won’t come automatically. We may well have to hack them.

PETER KIRN is an audiovisual artist, composer/musician, technologist, and journalist. He is the editor of CDM and co-creator of the open source MeeBlip hardware synthesizer (meeblip.com). For six consecutive years, he has directed the MusicMaker’s Hacklab at CTM Festival, most recently together with new media artist Ioann Maria.

http://ctm-festival.de/


This plug-in is a secret weapon for sound design and drums

It’s full of gun sounds. But thanks to a combination of a unique sample architecture and engine and a whole lot of original assets, the Weaponiser plug-in becomes a weapon of a different kind: it helps you make drum sounds.

Call me a devoted pacifist, call me a wimp – really, either way. Guns actually make me uncomfortable, at least in real life. Of course, we have an entirely separate industry of violent fantasy. And to a sound designer for games or soundtracks, Weaponiser’s benefits should be obvious and dazzling.

But I wanted to take a different angle, and imagine this plug-in as a sort of swords into plowshares project. And it’s not a stretch of the imagination. What better way to create impacts and transients than … well, fire off a whole bunch of artillery at stuff and record the result? With that in mind, I delved deep into Weaponiser. And as a sound instrument, it’s something special.

Like all advanced sound libraries these days, Weaponiser is both an enormous library of sounds, and a powerful bespoke sound engine in which those sounds reside. The Edinburgh-based developers undertook an enormous engineering effort here both to capture field recordings and to build their own engine.

It’s not even all about weapons here, despite the name. There are sound elements unrelated to weapons – there’s even an electronic drum kit. And the underlying architecture combines synthesis components and a multi-effects engine, so it’s not limited to playing back the weapon sounds.

What pulls Weaponiser together, then, is an approach to weapon sounds as a modularized set of components. The top set of tabs is divided into ONSET, BODY, THUMP, and TAIL – which turns out to be a compelling way to conceptualize hard-hitting percussion, generally. We often use vaguely gunshot-related metaphors when talking about percussive sounds, but here, literally, that opens up some possibilities. You “fire” a drum sound, or choose “burst” mode (think automatic and semi-automatic weapons) with an adjustable rate.

This sample-based section is then routed into a mixer with multi-effects capabilities.

In music production, we’ve grown accustomed to repetitive samples – a Roland TR clap or rimshot that sounds the same every single time. In foley or game sound design, of course, that’s generally a no-no; our ears quickly detect that something is amiss, since real-world sound never repeats that way. So the Krotos engine is replete with variability, multi-sampling, and synthesis. Applied to musical applications, those same characteristics produce a more organic, natural sound, even if the subject has become entirely artificial.
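Here’s a minimal sketch of that round-robin-with-variation idea in Python – my own illustration of the general game audio technique, not Krotos’s actual code; the file names and variation ranges are hypothetical:

```python
import random

class MultiSampleVoice:
    """One drum hit backed by several alternate recordings."""
    def __init__(self, samples):
        self.samples = list(samples)
        self.queue = []

    def trigger(self):
        # Refill and reshuffle once every variant has been used,
        # so no single recording dominates.
        if not self.queue:
            self.queue = random.sample(self.samples, len(self.samples))
        sample = self.queue.pop()
        gain = random.uniform(0.9, 1.0)   # subtle level variation
        tune = random.uniform(-0.3, 0.3)  # subtle detune, in semitones
        return sample, gain, tune

# Hypothetical file names, purely for illustration:
kick = MultiSampleVoice(["kick_01.wav", "kick_02.wav", "kick_03.wav"])
for _ in range(6):
    print(kick.trigger())
```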

Weaponiser architecture

Let’s have a look at those components in turn.

Gun sounds. This is still, of course, the main attraction. Krotos have field recordings of a range of weapons:

AK 47
Beretta 92
Dragunov
GPMG
SPAS 12
CZ75
H&K 416
M 16
M4 (suppressed)
MAC 10
FN MINIMI
H&K MP5
Winchester 1887

For those of you who don’t know gun details, that amounts to pistol, rifle, automatic, semiautomatic, and submachine gun (SMG). These are divided up into samples by the onset/body/thump/tail architecture I’ve already described, plus there are lots of details based on shooting scenario. There are bursts and single fires, sniper shots from a distance, and the like. But maybe most interesting actually are all the sounds around guns – cocking and reloading vintage mechanical weapons, or the sound of bullets impacting bricks or concrete. (Bricks sound different than concrete, in fact.) There are bullets whizzing by.

And that’s just the real weapons. There’s an entire bank devoted to science fiction weapons, and these are entirely speculative. (Try shooting someone with a laser; it … doesn’t really work the way it does in the movies and TV.) Those presets get interesting, too, because they’re rooted in reality. There’s a Beretta fired interdimensionally, for example, and the laser shotguns, while they defy present physics and engineering, still have reloading variants.

In short, these Scottish sound designers spent a lot of time at the shooting range, and then a whole lot more time chained to their desk working with the sampler.

Things that aren’t gun sounds. I didn’t expect to find so many sounds in the non-gun variety, however. There are twenty dedicated kits, which tend toward a sort of IDM / electro crossover, building drum sounds on this engine. There are a couple of gems in there, too – enough so that I could imagine Krotos following up this package with a selection of drum production tools built on the Weaponiser engine but having nothing to do with bullets or artillery.

Until that happens, you can think of that as a teaser for what the engine can do if you spend time building your own presets. And to that end, you have some other tools:

Variations for each parameter randomize settings to avoid repetition.

Four engines, each polyphonic with their own sets of samples, combine. But the same things that allow you different triggering/burst modes for guns prove useful for percussion. And yes, there’s a “drunk” mode.

A deep multi-effects section with mixing and routing serves up still more options.

Four engines, synthesis. Onset, Body, Thump, and Tail each have associated synthesis engines. Onset and Body are specialized FM synthesizers. Thump is essentially a bass synth. Tail is a convolution reverb – but even that is a bit deeper than it may sound. Tail provides both audio playback and spatialization controls. It might use a recorded tail, or it might trigger an impulse response.
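To make that concrete, here’s a rough sketch of how those layers could be assembled from scratch – a conceptual illustration with made-up parameters, not Krotos’s actual DSP. An FM burst stands in for the onset, a swept sine for the thump, and a convolution supplies the tail:

```python
import numpy as np

SR = 44100  # sample rate

def env(n, decay):
    # Simple exponential decay envelope; `decay` is in seconds
    return np.exp(-np.arange(n) / (decay * SR))

def fm_onset(dur=0.05, carrier=900.0, mod=400.0, index=6.0):
    # Two-operator FM burst for the transient
    t = np.arange(int(dur * SR)) / SR
    sig = np.sin(2 * np.pi * carrier * t + index * np.sin(2 * np.pi * mod * t))
    return sig * env(len(t), 0.01)

def thump(dur=0.3, freq=55.0):
    # Low sine with a fast downward pitch sweep
    t = np.arange(int(dur * SR)) / SR
    sweep = freq * (1 + 2 * np.exp(-t * 30))   # pitch falls toward freq
    phase = 2 * np.pi * np.cumsum(sweep) / SR  # integrate frequency
    return np.sin(phase) * env(len(t), 0.1)

def mix(*layers):
    out = np.zeros(max(len(l) for l in layers))
    for l in layers:
        out[:len(l)] += l
    return out / np.abs(out).max()

# A fake room impulse response: decaying noise
ir = np.random.randn(SR // 2) * env(SR // 2, 0.08)
drum = np.convolve(mix(fm_onset(), thump()), ir)  # the "tail" stage
```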

Also, the way samples are played here is polyphonic. Add more samples to a particular engine, and you will trigger different variants, not simply keep re-triggering the same sounds over and over again. That’s the norm for more advanced percussion samplers, but lately electronic drum engines have tended to dumb that down. And – there’s a built-in timeline with adjustable micro-timings, which is something I’ve never seen in a percussion synth/sampler.

The synth bits have their own parameters, as well, and FM and Amplitude Modulation modes. You can customize carriers and modulators. And you can dive into sample settings, including making radical changes to start and end points, envelope, and speed.

Effects and mixing. Those four polyphonic engines are mixed together in a four-part mix engine, with multi-effects that can be routed in various ways. Then you can apply EQ, Compression, Limiting, Saturation, Ring Modulation, Flanging, Transient Shaping, and Noise Gating.

Oh, you can also use this entire effects engine to process sounds from your DAW, making this a multi-effects engine as well as an instrument.

Is your head spinning yet?

About the sounds

Depending on which edition you grab, from the limited selection of the free 10-day demo up to the “fully loaded” edition, you’ll get as many as 2228 assets, with 1596 edited weapon recordings. There are also 692 “sweeteners” – a grab bag of still more sounds, from synths to a black leopard (the furry feline, really), and the sound recordists messing around with their recording rig, keys, Earth, a bicycle belt… you get the idea. There are also various impulse responses for the convolution reverb engine, allowing you to place your sound in different rooms, stairwells, and synthetic reverbs.

The recording chain itself is worth a look. There are the expected mid/side and stereo recordings, classic Neumann and Sennheiser mics, and heavy use of mics from the Danish maker DPA – including mics positioned directly on the guns in some recordings. But they’ve also included recordings made with the Sennheiser Ambeo VR Mic for 360-degree, virtual reality sound.

They’ve shared some behind-the-scenes shots with CDM, and there’s a short video explaining the process.

In use, for music

Some of the presets are realistic enough that working with these sounds in a music project really did make me uncomfortable at first – but that was sort of my aim. What I found compelling is that, because of this synth engine, I was quickly able to transform those sounds into new, organic, even unrecognizable variations.

There are a number of strategies here that make this really interesting.

You can mess with samples. Adjusting speed and other parameters, as with any samples, of course gives you organic, complex new sounds.

There’s the synthesis engine. Working with the synth options either to reprocess the sounds or on their own allows you to treat Weaponiser basically as a drum synth.

The variations make this sound like acoustic percussion. With subtle or major variations, you can produce sound that’s less repetitive than electronic drums would be.

Mix and match. And, of course, you have presets to warp and combine, the ability to meld synthetic sounds and gun sounds, to sweeten conventional percussion with those additions (synths and guns and leopard sounds)… the mind reels.

Routing, of course, is vital, too; here’s their look at that:

In fact, there’s so much that I could almost go on a separate tangent just working with this musically. I may yet do that, but here’s a teaser of what’s possible – starting with the obvious:

But I’m still getting lost in the potential here, reversing sounds, trying the drum kits, working with the synth and effects engines.

The plug-in can get heavy on CPU with all of that going on, obviously, but it’s also possible to render out layers or whole sounds, useful both in production and foley/sound design. Really, my main complaint is the tiny, complex UI, which can mean it takes some time to get the hang of working with everything. But as a sound tool, it’s pretty extraordinary. And you don’t need to have firing shotguns in all your productions – you can add some subtle sweetening, or additional layers and punch to percussion without anyone knowing they’re hearing the Krotos team messing with bike chains and bullets hitting bricks and an imaginary space laser.

Weaponiser runs on Mac or PC, 64-bit only, in VST, AU, and AAX formats. You’ll need about five and a half gigs of space free. Basic, which is already pretty vast, runs $399 / £259 / €337. Fully loaded is over twice that size, and costs $599 / £379 / €494.

https://www.krotosaudio.com/weaponiser/


Behringer threatens legal action against a site that called it a copycat

Midifan, a top music portal and online magazine in China, has received notice from Behringer, threatening legal action over stories by Midifan that called Behringer a “copycat.”

Midifan is a Chinese-language site, but evidently a significant one for that market. And Nan Tang, CEO and founder of the site, is also co-founder of 2nd Sense Audio, the software developer behind the WIGGLE synth and ReSample software. Nan, also known as musiXboy, contacted CDM with the news.

Nan has provided CDM with Midifan’s own English translation of the legal letter, as well as a statement in English. Translation is an important factor here, given we’re talking about libel, but Midifan’s own English translations of the words they used are “shameless” and “copycat.”

Here’s the statement from Midifan:

Behringer sued Chinese media Midifan for saying them COPYCAT and shameless

Chinese portal website Midifan has received a lawyer’s letter from Behringer last week. Behringer claimed the fact that Midifan repeatedly reporting news about Behringer without any factual basis and using insulting words such as “copycat”, “shameless” has caused the reputation of the four clients (Uli Behringer, MUSIC Tribe Global Brands Ltd, Zhongshan Behringer Electronic Co., Ltd and Zhongshan Ouke Electronic Co., Ltd) to be seriously damaged.

The law firm worked for Behringer also claimed that they have reported to its local public security agency and plans to pursue legal responsibilities through criminal way.

A manufacturer taking legal action against the music press for being critical or even calling it names is, as far as I know, fairly unprecedented. I’d almost call it shamel– actually, let’s just stick with “unprecedented.”

But it appears the letter is threatening criminal libel proceedings in China, not just a civil claim. Criminal libel can carry more serious consequences; as reported in 2013 by The Guardian and Bloomberg, criminal libel in China can carry up to a three-year prison sentence.

Ceci n’est pas un imitateur.
Behringer showed … uh… tributes to the Roland SH-101, Roland VP-330, Roland TR-808, ARP Odyssey, and Sequential Circuits Pro-One in Berlin last month.

That said, in China as internationally, the law states that something is only libelous if it’s untrue. The “copycat” reference refers to Behringer gear shown at Superbooth, for instance, that literally was designed to look and sound like classic instruments (Roland TR-808, Sequential Circuits Pro-One, etc.). “Shameless” is a matter of opinion. Arguably, too, sending cease and desist letters to media outlets because they called you shameless and a copycat would presumably also not be a great way to demonstrate you possess shame.

Behringer Pro-One, 808, ARP Odyssey Clones At Superbooth 2018 [Synthtopia]

What might make Midifan different from other English-language sites that used similar language? It may be relevant that at the end of last year, Midifan reported on striking workers in a manufacturing facility Behringer owns, where workers complained about health issues. (That article has a number of photos, as well as an English-language response from Behringer managers instructing workers to keep windows closed.)

For their part, Midifan have posted a response on their site (no English translation available):

https://www.midifan.com/modulenews-detailview-29955.htm

Midifan tell CDM that they have removed all references to the words “copycat” and “shameless” and replaced them with more neutral words like “TRIBUTE” and “CLONE.”

Here’s the full letter from Behringer as translated by Midifan into English.

Lawyer’s Letter
In Relation to Urge You to Stop the Willful Infringement Behavior

Dear Sir or Madam,
Upon the entrustment of Zhongshan Behringer Electronic Co., Ltd (hereinafter referred to as Behringer Corporation), Zhongshan Ouke Electronic Co., Ltd (hereinafter referred to as Ouke Corporation), Uli Behringer and MUSIC Tribe Global Brands Ltd, Guangdong Baoxin Law Firm sends you the lawyer’s letter to your company on matters that urging you to stop the willful infringement behavior.

In accordance with the information and statements from four aforementioned clients, MUSIC Tribe Global Brands Ltd is the registered holder of the trademark “BEHRINGER”. On the basis of the authorization of MUSIC Tribe Global Brands Ltd, Ouke Corporation has the right to use the “BEHRINGER” trademark to engage in production and business activities within the scope of relevant authorizations. Behringer Corporation, whose English name also includes the word “Behringer”, is an affiliate enterprise of MUSIC Tribe Global Brands Ltd and Ouke Corporation.

Since 2017, you have continuously published articles such as “Exclusive breaking: Behringer’s recent crazy copycat stems from a trap of imitation chip more than a decade ago”, “Can’t stop copycat: Behringer will make the Eurorack module next?”, and “Shameless: Behringer exhibited copycat of TR-808, SH-101, Pro-One and Odyssey” on the website “https://www.midifan.com/” and the Tencent WeChat public account “Midifan” without any factual basis, claiming that the above four principals have plagiarized the products of other companies. Beyond that, the fact that you repeatedly used insulting words such as “shameless”, “copycat” has caused the reputation of my four clients to be seriously damaged.

In view of this, Ouke Corporation has reported to its local public security agency and plans to pursue your legal responsibilities through criminal way. Meanwhile, the four principals entrusted me with this letter expressly:

Please be sure to remove all the insulting infringement articles four principals involved and other related information posted on the internet platform such as “https://www.midifan.com/” and Tencent WeChat public account “Midifan” , etc. within seven days of receipt of this letter, and issue a clarification announcement within the above-mentioned period to eliminate all adverse effects caused by the negative reputation of the four principals due to your inappropriate comments.

If you fail to perform the above obligations within the time limit, the four principals will continue pursuing your legal liabilities (including but not limited to the criminal responsibility for defamation) through legal ways. All consequences and expenses resulting from this shall be borne by you.

In order to avoid inconvenience, please weigh the pros and cons and perform the above obligations in a timely manner!
Best regards.

CDM has reached out to Music Tribe / Behringer for comment via their public contact form, but has not yet received a response. Curiously, I found that many of my colleagues don’t have direct, current media contacts with the company.

Oh, also – it seems Germany has criminal libel laws, too. So, naturally, let me then reiterate – what I saw at Superbooth were … meticulous recreations of famous electronic instruments of yore by a … manufacturer of equipment that is … Behringer.

Now, please, I don’t want to go to German jail. Aber wenn ich ins Gefängnis gehe, wird sich mein Deutsch verbessern.

http://midifan.com/


Output’s Arcade is a cloud-based loop library you play as an instrument

The future of soundware is clearly on-demand, crafted sounds from the cloud. Output adds a twist: don’t just give you new sounds – give you a way to play them and make them your own.

So, the latest product from the LA-based sound boutique Output is called “Arcade” – as in play, get it?

And it’s an early entry and fresh take on an area that’s set to heat up fast. To get things rolling here, your first 100 days are completely free; then you pay a monthly subscription rate of $10 a month (with cancellation whenever you want, and you don’t even lose access to your sounds).

As the number of producers grows, and as the diversity of the music they want to make grows too – genres and niches spilling over and transforming at Internet speed – the need to deliver sounds and inspiration to music makers looks like a major opportunity. We’re seeing subscription-based models (Native Instruments’ Sounds.com, Splice) and à la carte models (Loopmasters). And we’re seeing different ideas about how to organize releases (around genre, producer, sound house, or more curated selections around a theme), plus how to integrate tools for users.

Here’s where Arcade is interesting. It’s really a single, integrated instrument. And its goal is to find you exactly the sound you need, right away, easily — but also to give you the ability to completely transform that sound and make it your own, even loading your own found samples.

That’s important, because it bridges the divide between loops as a way of employing someone else’s content, and sound sampling as a DIY, personal affair, with a spectrum in between.

I suspect a lot of you reading have been all over that spectrum. Let’s consider even the serious, well-paid producer. You’ve got a tight scoring deadline, and the job needs a really particular sound, and you’re literally counting the minutes and sweating. Or… you’ve got a TV ad spot, and you need to make something sound completely original, and not like any particular preset you’ve heard before.

This also really isn’t about beginners or advanced users. An advanced user may have a very precise sound in mind – even to sit atop some meticulously hand-crafted sounds. And one of the first things a lot of beginners like to do is mess around with samples they record with a mic. (How many kids made noises into a Casio SK-1 in the 80s?)

I got to sit down with Output CEO and founder Gregg Lehrman, and we took a deep look at Arcade and had a long talk about what it’s about and what the future of music making might be. We’ll look more in detail at how you can use this as an instrument, but let’s cover what this is.

Walkthrough:

Choose your DAW – here’s Arcade running inside Ableton.

It’s a plug-in. This is important. You’ll always be interacting with Arcade as a plug-in, inside your host of choice – so no shuttling back and forth to a Website, as some solutions currently make you do. Omni-format – Mac AU / VST / VST3 / AAX, Windows VST / VST3 / AAX 32- and 64-bit. (Native Linux support would be nice, actually, but that’s missing for now.)

Sounds can match your tempo and key. You can hear sounds in their original form, or conform to the tempo and pitch that matches your current project. (Loopmasters does this too, actually, but via a separate app combined with a plug-in, which is a bit clunky.)
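The arithmetic behind any tempo-and-key conform feature is simple, whatever the stretching engine underneath. Here’s a back-of-the-envelope sketch – the function name and key handling are mine, not Output’s API:

```python
KEYS = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def conform(loop_bpm, loop_key, project_bpm, project_key):
    # Factor applied to the loop's duration (playback speeds up if < 1)
    stretch = loop_bpm / project_bpm
    # Pitch shift in semitones, wrapped to the nearest direction (+/- 6)
    shift = (KEYS.index(project_key) - KEYS.index(loop_key)) % 12
    if shift > 6:
        shift -= 12
    return stretch, shift

print(conform(120, "F", 128, "D"))  # -> (0.9375, -3)
```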

Browse through curated collections of sounds, which are paid for by subscription, Spotify/Netflix-style.

It lets you find sounds online. On-demand cloud browsing lets you check out selections of sounds, complete kits, and loops. You can preview all of these right away. Now, Netflix-style, Output promises new stuff every day, so you can browse around for something to inspire you if you’re feeling stuck. And at least in the test, these were organized with a sense of style and character – more like looking at the output of a music label than the usual commodity catalog of samples.

Search, browse, tagging, and the usual organizing tools are there, too – but it’s probably the preview and curation that puts this over the top.

— but it works if you’re offline, too. Prefer to switch the Internet off in your studio to avoid distractions? Work in an underground bunker, or in the hollowed out volcano island you use as an evil lair? Happily, you don’t need an Internet connection to work.

The keyboard (or whatever MIDI controller you’ve mapped) triggers loops, but also manipulates them on the fly. That lets you radically transform samples as you play – including your own sounds.

You can play the loops as an instrument. Okay, so the whole reason we went into music presumably is that we love the process of making music. Output excels here by letting you load loops into a 15-voice synth, then mangle and warp and modify the sound. It works really well from a keyboard or other MIDI controller.

This isn’t a sample playback instrument in the traditional sense, in terms of how it maps to pitch. Instead, white notes trigger samples, and black notes trigger modifiers. That’s actually really crazy to play around with, because it feels a little like you’re doing production work – the usual mouse-based chores of editing and modifying samples – as you play along, live.

There’s also input quantize, in case your keyboard skills aren’t so precise.
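That white-key/black-key split is easy to picture in code. A hypothetical sketch of the dispatch – the handler names are mine, not Arcade’s internals:

```python
BLACK_KEYS = {1, 3, 6, 8, 10}  # pitch classes of C#, D#, F#, G#, A#

def trigger_loop(note, velocity):
    print(f"fire loop voice for note {note} at velocity {velocity}")

def engage_modifier(note, velocity):
    print(f"engage modifier mapped to note {note}")

def on_midi_note(note, velocity):
    # White notes fire samples; black notes apply modifiers while held
    if note % 12 in BLACK_KEYS:
        engage_modifier(note, velocity)
    else:
        trigger_loop(note, velocity)

on_midi_note(60, 100)  # C4  -> triggers a loop
on_midi_note(61, 100)  # C#4 -> engages a modifier
```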

There are tons of modifiers and modulation and effects. Like all of Output’s products, the recipe is, build a deep architecture, then encapsulate it in an easy interface. That way, you can mess around with a simple control and make massive changes, which gets you discovering possibilities fast, but also go further into the structure if you want to get more specific about your needs, and if you’re willing to invest more time.

In this case, Arcade is built around four main sliders that control the character of the sound, both subtle and radical, and then another eleven effects and a deep mixing, routing, and modulation engine underneath.

So, let’s get into specifics.

Each Loop Voice – up to 15 of them – has a whole bunch of controls. It really would be fair to call this a synth:

• Multimode Filter with Resonance and Gradual/Steep Curve
• Volume, Pan, Attack/Release and Loop Crossfade
• Speed Control (1/4, 1/2, x1, x2)
• Tune Control (+/- 24)
• Loop Playback (Reverse, Pendulum, Loop On/Off, Hold)
• FX Sends Pre/Post x2
• Modifier Block
• Sync On/Off

Loop editing.

There’s also a time/pitch stretch engine with both “efficient” and resource-intensive “high quality” modes:

• BPM & Time signature Control
• Key Signature control
• Formant Pitch Control

Since the point is playing, you can map to velocity sensitivity, too, so how hard you hit keys impacts filter cutoff and resonance, volume, and formant.

But plenty of tools can do all the above. It’s the modifiers that get interesting – little macros that are accessible as you play:

• ReSequence (16 steps with Volume, Mute, Reverse, Speed, Length and Attack/Decay control per step)
• Playhead (Speed, Reverse, Loop On/Off, Cue Point per Loop)
• Repeater (Note Repeat with Rate, Reverse, Volume Step Sequencer)
• Session Key Control

Plus there’s a Resequencer for sequencing sound slices into new combinations.

The Resequencer gives you even more options for manipulating audio content and turning it into something new.

– combined with modulation:

• LFO/Step (x2) Sync/Free mode with Retrigger and Rate.
• Waveshape Control
• Attack, Phase, Chaos and Polarity Control

Deep modulation options either power presets – or your own sound creations, if you’re ready to tinker.

And there’s a complete mixer:

• 15 Channel Mixer with Volume, Pan, Pre/Post Send FX(x2), Solo
• Send Bus (x2) with Volume, Pan and Mute
• 2 insert FX slots per Bus
• Master Bus with Volume, Pan, Mute and 4 Insert FX slots

The Mixer combines up to 16 parts.

Plus a whole mess of effects. Those effects helped define the character of earlier Output instruments, so it’s great to see here:

• Chorus
• Compressor
• MultiTap Delay
• Stereo Delay
• Distortion Box
• Equalizer
• Filter
• Limiter
• LoFi
• Phaser
• Reverb

It wouldn’t be an Output product without some serious multi-effects options.

But if Output likes to pitch itself as the “secret sauce” behind everything from Kendrick Lamar to the soundtracks for Black Panther and Stranger Things, I absolutely adore that you can load your own samples.

Native Instruments has built a great ecosystem around their tools, including Kontakt – and Output have made use of that engine. But it’s great to see this ground-up creation that introduces some different paradigms around what you can do with sampled sound. That instant access to playing – to tapping into your muscle memory, your improvisation skills – I think could be really transformative. We’ve seen artists like Tim Exile advocate this approach in how he works, and it’s an element in a lot of great improvisers’ electronic work. What Output have done is allow you to combine sound discovery with instant playability.

The subscription model means you don’t have to reach for your credit card when you find sounds you want. But if you cancel the $10 a month subscription, unlike a Spotify or Netflix, you don’t lose access to your sounds. Output says:

If you open an older session, you will be prompted to log in, and you will not be able to click past the log in screen. You will be able to play back any MIDI or automation data recorded in your saved session. It will sound exactly the same, but you won’t be able to browse or tweak the character of the sound within the plugin. The midi can still be changed as the preset stays loaded in a session as long as you don’t uninstall Arcade which will remove all the audio samples. The best way to see what I mean is to test it yourself. Put ARCADE into a midi track, then log out of the plug-in. With the GUI still open albeit stuck on the log-in screen, play your track and hit some keys.

The fact that it’s all powered by subscription also means you’ll always have something there to use. But I do hope for the sake of sound creators – and because this engine is so cool – that Output also consider à la carte purchasing of some sound selections. That could support more intensive sound design processes. And the interface looks like it’d work well as a shop, too. I share some of the concerns of sound artists that subscription models could hurt sound design in the way they have hurt music downloads. And — on the other hand, to use downloads as an example, a lot of us have both a subscription and buy a ton of stuff from Bandcamp, including underground music.

Let us know what you think.

I’ll be back with a guide to how to load your own sounds and play this as an instrument / design and modify sounds in a more sophisticated way.

More:

https://output.com/arcade


A look at AI’s strange and dystopian future for art, music, and society

Machine learning and new technologies could unlock new frontiers of human creativity – or they could take humans out of the loop, ushering in a new nightmare of corporate control. Or both.

Machine learning, the field of applying neural networks to data analysis, unites a range of issues from technological to societal. And audio and music are very much at the center of the transformative effects of these technologies. Commonly dubbed (partly inaccurately) “artificial intelligence,” they suggest a relationship between humans and machines, individuals and larger state and corporate structures, far beyond what has existed traditionally. And that change has gone from far-off science fiction to a reality that’s very present in our homes, our lives, and of course the smartphones in our pockets.

I had the chance to co-curate with CTM Festival a day of inputs from a range of thinkers and artist/curators earlier this year. Working with my co-host, artist and researcher Ioann Maria, we packed a day full of ideas and futures both enticing and terrifying. We’ve got that full afternoon, even including audience discussion, online for you to soak in.

Me, with Moritz, pondering the future. Photo: CTM Festival / Isla Kriss.

And there are tons of surprises. There are various terrifying dystopias, with some well-reasoned arguments for why they might actually come to fruition (or evidence demonstrating these scenarios are already in progress). There are more hopeful visions of how to get ethics, and humans, back in the loop. There are surveys of artistic responses.

All of this kicked off our MusicMakers Hacklab at CTM Festival, which set a group of invited artists on collaborative, improvisatory explorations of these same technologies as applied to performance.

These imaginative and speculative possibilities become not just idle thoughts, but entertaining and necessary explorations of what might be soon. This is the Ghost of Christmas Yet-to-Come, if a whole lot more fun to watch, here not just to scare us, but to spur us into action and invention.

Let’s have a look at our four speakers.

Machine learning and neural networks

Moritz Simon Geist: speculative futures

Who he is: Moritz is an artist and researcher; he joined us for my first-ever event for CTM Festival with a giant robotic 808, but he’s just as adept at researching history and the future.

Topics: Futurism, speculation, machine learning and its impact on music, body enhancement and drugs

Takeaways: Moritz gives a strong introduction to style transfer and other machine learning techniques, then jumps into speculating on where these could go in the future.
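Since style transfer anchors the speculation that follows, it’s worth pinning down. In the canonical image formulation from Gatys et al. – a general reference point, not something specific to Moritz’s talk – a target $\vec{x}$ is optimized to match the content of one source $\vec{p}$ and the style of another $\vec{a}$:

$$\mathcal{L}_{\text{total}}(\vec{p}, \vec{a}, \vec{x}) = \alpha\,\mathcal{L}_{\text{content}}(\vec{p}, \vec{x}) + \beta\,\mathcal{L}_{\text{style}}(\vec{a}, \vec{x})$$

The style term compares Gram matrices of neural network feature maps, $G^l_{ij} = \sum_k F^l_{ik} F^l_{jk}$, which capture texture – or, applied to spectrograms, timbre – independently of structure. That separability is what makes the scenario below thinkable.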

In this future, remixes and styles and timbres might all become separate from a more fluid creativity – but that might, in turn, dissolve artistic value.

“In the future … music will not be conceived as an art form any more.” – Moritz Simon Geist

Then, Moritz goes somewhere else entirely – dreaming up speculative drugs that could transform humans, rather than only machines. (The historical basis for this line of thought: Alexander Shulgin and his drug notebooks, which might even propose a drug that transforms perception of pitch.)

Moritz imagines an “UNSTYLE” plug-in that can extract vocals – then change genre.

What if self-transformation – or even fame – were in a pill?

Gene Cogan: future dystopias

Who he is: An artist/technologist who works with generative systems and their overlap with creativity and expression. Don’t miss Gene’s expansive open source resource for code and learning, machine learning for artists.

Topics: Instrument creation, machine learning – and eventually AI’s ability to generate its own music

Takeaways: Gene’s talk began with “automation of songwriting, production, and curation” as a topic – but tilted enough toward dystopia that he changed the title.

“This is probably going to be the most depressing talk.”

In a more hopeful vision, he presented the latest work of Snyderphonics – instruments that train themselves as musicians play, rather than only the other way around.

He turned to his own work in generative models and artistic works like his Donald Trump “meat puppet,” but presented a scary image of what would happen if eventually analytic and generative machine learning models combined, producing music without human involvement:

“We’re nowhere near anything like this happening. But it’s worth asking now, if this technology comes to fruition, what does that mean about musicians? What is the future of musicians if algorithms can generate all the music we need?”

References: GRUV, a generative model for producing music

WaveNet, the DeepMind tech being used by Google for audio – see the formula after this list

Sander Dieleman’s content-based recommendations for music
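WaveNet in particular is compact enough to summarize in one line. It’s an autoregressive model: it factorizes the probability of a raw waveform into one prediction per audio sample, each conditioned on all the samples before it (implemented with dilated causal convolutions):

$$p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1})$$

Every sample is drawn in turn from the distribution the network predicts – which is why, in principle, it can keep generating audio indefinitely.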

Gene presents – the death of the human musician.

Wesley Goatley: machine capitalism, dark systems

Who he is: A London-based sound artist and researcher in “critical data aesthetics,” plumbing the meaning of data in his own work and as a media theorist

Topics: Capitalism, machines, aesthetics, Amazon Echo … and what they may all be doing to our own agency and freedom

Takeaways: Wesley began with “capitalism at machine-to-machine speeds,” then traced how this informs systems that, hidden away from criticism, can enforce bias and power. In particular, he pitted claims like “it’s not minority report – it’s science; it’s math!” against the realities of how these systems were built – by whom, for whom, and with what reason.

“You are not working them; they are working you.”

As companies like Amazon and Google extend control, under the banner of words like “smart” and “ecosystem,” Wesley argues, what they’re really building is “dark systems”:

“We can’t get access or critique; they’re made in places that resemble prisons.”

The issue then becomes signal-to-noise. Data isn’t really ever neutral, so the position of power lets a small group of people set an agenda:

“[It] isn’t a constant; it’s really about power and space.”

Wesley on dark connectionism, from economics to design. Photo: CTM Festival / Isla Kriss.

Deconstructing an Amazon Echo – and data and AI as echo chamber. Photo: CTM Festival / Isla Kriss.

What John Cage can teach us: silence is never neutral, and neither is data.

Estela Oliva: digital artists respond

Who she is: Estela is a creative director / curator / digital consultant, an anchor of London’s digital art scene, with work on Alpha-ville Festival, a residency at Somerset House, and her new Clon project.

Topics: Digital art responding to these topics, in hopeful and speculative and critical ways – and a conclusion to the dystopian warnings woven through the afternoon.

Takeaways: Estela grounded the conclusion of our afternoon in a set of examples from across digital arts disciplines and perspectives, showing how AI is seen by artists.

Works shown:

Terence Broad and his autoencoder

Sougwen Chung and Doug, her drawing mate

https://www.bell-labs.com/var/articles/discussion-sougwen-chung-about-human-robotic-collaborations/

Marija Bozinovska Jones and her artistic reimaginings of voice assistants and machine training

Memo Akten’s work (also featured in the image at top), “you are what you see”

Archillect’s machine-curated feed of artwork

Superflux’s speculative project, “Our Friends Electric”


Estela also found dystopian possibilities – as bias, racism, and sexism are echoed in automated machines. (Contrast that machine-to-machine amplification of our worst characteristics with the more hopeful human-machine artistic collaborations here: algorithmic capitalism versus individual humanism, perhaps.)

But she also contrasted that with more emotionally intelligent futures, especially with the richness and dimensions of data sets:

“We need to build algorithms that represent our values better – but I’m just worried that unless we really talk about it more seriously, it’s not going to happen.”

Estela Oliva, framed by Memo Akten’s work. Photo: CTM Festival / Isla Kriss.

It was really a pleasure to put this together. There’s obviously a deep set of topics here, and ones I know we need to continue to cover. Let us know your thoughts – and we’re always glad to share in your research, artwork, and ideas.

Thanks to CTM Festival for hosting us.

https://www.ctm-festival.de/news/

The post A look at AI’s strange and dystopian future for art, music, and society appeared first on CDM Create Digital Music.

These fanciful new apps weave virtual music worlds in VR and AR

Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).

Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)

And indeed, we’ve seen this stuff highlighted a lot recently, from game and PC companies talking VR (including via Steam), Facebook showing off Oculus (the Kickstarter-funded project it acquired), and this week Apple making augmented reality a major selling point of its coming iOS releases and developer tools.

But what is this stuff actually for?

That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.

They’ve got two apps now, one for VR, and one for AR.

Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:

Unlike the sound toys we saw just after the release of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning it into a (mobile) venue. So in addition to Matmos, you get creations by the likes of a Ryuichi Sakamoto collaborator and Robert Lippok (of Raster Media, née Raster-Noton).

But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.

The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper with the music, and take the experience home.

The results can be totally crazy. Here’s one example:

Pitchfork go into some detail as to how this app came about:

Fields Wants to Be The Augmented Reality App for Experimental Music Fans and Creators Alike

More on the app, including a download, on its site:

http://fields.planeta.cc/

And then there’s Drops – a “rhythm garden.”

We’ve seen some clumsy attempts at VR for music before. Generally, they involve rethinking an interface that already works perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” them in a way that … makes them slightly stupid to use.

It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.

And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!

And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:

One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.

VR Visionaries: Planeta

Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.

Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)

Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking at music as an isolated element, and connecting it to architecture and memory.)

“We were talking about imagining sound: sounds from memories, sounds from everyday life, and unheard sounds. Later we started to create sonic events just with words, which we translated into some tracks. ‘Drawing from Memory’ is a sonic interpretation of one of those sound/word pieces. FIELDS now makes it possible to unfold the individual parts of this composition and at the same time frees it from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.”

Check out that whole article, as it’s also a great read:

Launch: Planeta, addressing the future of interface-sound composition

Find the apps:

http://fields.planeta.cc
http://drops.garden

And let us know if you have any questions or comments for the developers, or on this topic in general – or if you’ve got a creation of your own using these technologies.

The post These fanciful new apps weave virtual music worlds in VR and AR appeared first on CDM Create Digital Music.

Ableton’s Creative Extensions are a set of free tools for sound, inspiration

On the surface, Ableton’s new free download today is just a set of sound tools. But Ableton also seem focused on helping you find some inspiration to get ideas going.

Creative Extensions are now a free addition to Live 10. They’re built in Max for Live, so you’ll need either Ableton Live 10 Suite or a copy of Live 10 Standard and Max for Live. (Apparently some of you do fit the latter scenario.)

To find the tools, once you have those prerequisites, you’ll just launch the new Live 10 browser, then click Packs in the sidebar, and Creative Extensions will pop up under “Available Packs” as a download option. Like so:

I’m never without my trusty copy of Sax for Live. The rest I can download here.

Then once you’re there, you get a tool for experimenting with melodies, two virtual analog instruments (a Bass, and a polysynth with modulation and chorus), and effects (two delays, a limiter, an envelope processor, and a “spectral blur” reverb).

Have a look:

Melodic Steps is a note sequencer with lots of options for exploration.

Bass is a virtual analog monosynth, with four oscillators. (Interesting that this is the opposite of the approach taken by Native Instruments with the one-oscillator bass synth in Maschine.)

Poli is a virtual analog polysynth, basically staking out some more accessible ground versus the AAS-developed Analog already in Live.

Pitch Hack is a delay – here’s where things start to get interesting. You can transpose, reverse audio, randomize transposition interval, and fold the delayed signal back into the effect. If you’ve been waiting for a wild new delay from the company that launched with Grain Delay, this could be it.
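To make that concrete, here’s a minimal C++ sketch – a guess at the general recipe, to be clear, not Ableton’s actual device – of two of those tricks at once: a delay line whose read head runs backwards through the buffer, with the output folded back in as feedback.

#include <cstddef>
#include <vector>

struct ReverseFeedbackDelay {
    std::vector<float> buf;
    std::size_t writePos = 0;
    float feedback;

    ReverseFeedbackDelay(std::size_t delaySamples, float fb)
        : buf(delaySamples, 0.0f), feedback(fb) {}

    float process(float in) {
        // Mirror the read position: as the write head advances, the
        // read head retreats, playing stored audio in reverse.
        std::size_t readPos = buf.size() - 1 - writePos;
        float delayed = buf[readPos];
        buf[writePos] = in + feedback * delayed; // fold back into the effect
        writePos = (writePos + 1) % buf.size();
        return delayed;
    }
};

Stepping the read head at a ratio other than -1 would transpose the repeats instead, and randomizing that ratio per repeat gets you close to the rest of the feature list.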

Gated Delay is a second delay, combining a gate sequencer and delay. (Logic Pro 10.4 added some similar business via acquired developer Camel, but nice to have this in Live, too.)

Color Limited is modeled on hardware limiters.

Re-enveloper is a three-band, frequency-dependent envelope processor. That gives you more precise control over a sound’s envelope – or you could theoretically use this in combination with other effects. Very useful stuff, so this could quietly turn out to be the tool out of this set you use the most.

Spectral Blur is perhaps the most interesting – it creates dense clouds of delays, which produce a unique reverb-style effect (but one distinct from other reverbs).
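As a rough intuition for how a cloud of delays turns into a reverb-style wash – a minimal sketch of the general idea only, assuming nothing about Amazing Noises’ actual DSP – scatter many delay taps at random times, make later taps quieter, and sum them:

#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

std::vector<float> delayCloud(const std::vector<float>& in,
                              float sampleRate,
                              float maxDelaySec = 0.5f,
                              int numTaps = 64) {
    std::mt19937 rng(42); // fixed seed: the same "cloud" every render
    std::uniform_real_distribution<float> tapTime(0.0f, maxDelaySec);
    const std::size_t maxDelay =
        static_cast<std::size_t>(maxDelaySec * sampleRate);
    std::vector<float> out(in.size() + maxDelay, 0.0f);
    for (int t = 0; t < numTaps; ++t) {
        float when = tapTime(rng);
        std::size_t delay = static_cast<std::size_t>(when * sampleRate);
        // later taps are quieter, approximating a decaying tail
        float gain = std::exp(-3.0f * when / maxDelaySec) /
                     static_cast<float>(numTaps);
        for (std::size_t i = 0; i < in.size(); ++i)
            out[i + delay] += gain * in[i];
    }
    return out;
}

Enough taps packed close enough together stop reading as echoes and start reading as a smear – the “blur.”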

And the launch video:

All in all, it’s a nice addition to Ableton you can grab as a free update, and a welcome thank you to Live 10 adopters. I’m going to try some experimentation with the delays and Re-enveloper, and I can already tell I’m going to be into this Spectral Blur. (Logic Pro’s ChromaVerb goes in a similar direction, and I’m stupidly hooked on that, too.)

Creative Extensions: New in Live 10 Suite

If these feel a little pedestrian and vanilla to you – the world certainly does have a lot of traditional virtual analog – you might want to check out the other creations by this developer, Amazing Noises. They have something called Granular Lab on the Max for Live side, plus a bunch of wonderful iOS effects. And you can always use an iPad or iPhone as an outboard effects processor for your Live set, too, taking advantage of the touch-centric controls. (Think Studiomux.)

https://www.ableton.com/en/packs/by/amazing-noises/

https://www.amazingnoises.com/

http://apps.amazingnoises.com/

If you’re a Max for Live user or developer and want to recommend one of your creations, too, please do!

Want some more quick inspiration / need to unstick your creative imagination today? Check out the Sonic Bloom Oblique Strategies. Here’s today’s:

And plenty more where that came from:

http://sonicbloom.net/en/category/oblique-strategies/

The post Ableton’s Creative Extensions are a set of free tools for sound, inspiration appeared first on CDM Create Digital Music.

Apple to open source, cross-platform GPU tech: drop dead?

Apple’s decision to shift to its own proprietary tech for accessing modern GPUs could hurt research, education, and pro applications on their platform.

OpenGL and OpenCL are the industry-standard specifications for writing code that runs on graphics architectures, for graphics and general-purpose computation, including everything from video and 3D to machine learning.
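If “general-purpose computation” on a graphics card sounds abstract, here’s roughly what it looks like in practice: a minimal, generic OpenCL kernel – an illustration of the spec’s scope, not code from any particular app – held as a string the way a host program would carry it before asking the driver to compile it at run time.

const char* kernelSource = R"(
__kernel void vec_add(__global const float* a,
                      __global const float* b,
                      __global float* out) {
    int i = get_global_id(0);   // this work-item's index on the GPU
    out[i] = a[i] + b[i];       // one addition per GPU thread
}
)";

The point of the standard is that this same source runs on GPUs from any vendor, on any operating system with a driver.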

This is relevant to an ongoing interest on this site – those technologies also enable live visuals (including for music), creative coding, immersive audiovisual performance, and “AI”-powered machine learning experiments in music and art.

OpenGL and OpenCL, while sometimes arcane technologies, enable a wide range of advanced, cross-platform software. They’re also joined by a new industry standard, Vulkan. Cross-platform code is growing, not shrinking, as artists, researchers, creative professionals, experimental coders, and other communities contribute new generations of software that work more seamlessly across operating systems.

And Apple has just quietly blown off all those groups. From the announcement to developers regarding macOS 10.14:

Deprecation of OpenGL and OpenCL

Apps built using OpenGL and OpenCL will continue to run in macOS 10.14, but these legacy technologies are deprecated in macOS 10.14. Games and graphics-intensive apps that use OpenGL should now adopt Metal. Similarly, apps that use OpenCL for computational tasks should now adopt Metal and Metal Performance Shaders.

They’re also deprecating OpenGL ES on iOS, with the same logic.

Metal is fine technology, but it’s specific to iOS and macOS. It’s not open, and it won’t run on other platforms.
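The practical consequence for cross-platform developers is easy to sketch. Without a shared standard, every renderer grows per-platform branches like this hypothetical one (the factory functions are placeholders for illustration, not real APIs) – each backend a separate codebase to write, test, and keep current:

struct Renderer {
    virtual void drawFrame() = 0;
    virtual ~Renderer() = default;
};

// Placeholder factories, one rendering backend per platform:
Renderer* makeMetalRenderer();   // macOS / iOS only
Renderer* makeD3D12Renderer();   // Windows
Renderer* makeVulkanRenderer();  // Linux, Android (and Windows)

Renderer* makeRendererForThisPlatform() {
#if defined(__APPLE__)
    return makeMetalRenderer();
#elif defined(_WIN32)
    return makeD3D12Renderer();
#else
    return makeVulkanRenderer();
#endif
}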

Describing OpenGL and OpenCL as “legacy” is indeed fine. But as usual, the issue with Apple is an absence of information, and that’s what’s problematic. Questions:

Does this mean OpenGL apps will stop working? This is actually the big question. “Deprecation” in the case of QuickTime did eventually mean Apple pulled support. But we don’t know if it means that here.

(One interesting angle for this is, it could be a sign of more Apple-made graphics hardware. On the other hand, OpenGL implementations were clearly a time suck – and Apple often lagged major OpenGL releases.)

What about support for Vulkan? Apple are a partner in the Khronos Group, which develops this industry-wide standard. It isn’t in fact “legacy,” and it’s designed to solve the same problems as Metal does. Is Metal being chosen over Vulkan?

Cook’s 2018 Apple seems to be far more interested in showcasing proprietary developer APIs. Compare the early Jobs era, which emphasized cross-platform standards (OpenGL included). Apple has an opportunity to put some weight behind Vulkan – if not at WWDC, fair enough, but at some other venue?

What happens on the Web? Cross-platform here is even more essential, since your 3D or machine learning code for a browser needs to work in multiple scenarios.

Transparency and information might well solve this, but for now we’re a bit short on both.

Metal support in Unity. Frameworks like Unity may be able to smooth out platform differences for developers (including artists).

A case for Apple pushing Metal

First off, there is some sense to Apple’s move here. Metal – like DirectX on Windows or Mantle from AMD – is a lower-level language for addressing the graphics hardware. That means less overhead, higher performance, and extra features. It suggests Apple is pushing their mobile platforms in particular as an option for higher-end games. We’ve seen gaming companies Razer and Asus create Android phones with high-end specs on paper, but without a low-level API for graphics hardware or a significant installed base, those are more proof of concept than useful game platforms.

And Apple does love to deprecate APIs to force developers onto the newest stuff. That’s why older OS versions go unsupported so quickly, even when developers don’t want to abandon you.

On mobile, Apple never implemented OpenCL in the first place. And there’s arguably a more significant gap between OpenGL ES and something like Metal for performance.

Another business case: Apple may be trying to drive a wedge in development between iOS and Android, to ensure more iOS-only games and the like. Since they can’t make platform exclusives the way something like a PlayStation or Nintendo Switch or Xbox can, this is one way to do it.

And it seems Apple is moving away from third-party hardware vendors, meaning they control both the spec here and the chips inside their devices.

But that doesn’t automatically make any of this more useful to end users and developers, who reap benefits from cross-platform support. It significantly increases the workload on Apple to develop APIs and graphics hardware – and to encourage enough development to keep up with competing ecosystems. So there’s a reason for standards to exist.

Vulkan offers some of the low-level advantages of Metal (or DirectX) … but it works cross-platform, even including Web contexts.

Pulling out of an industry standard group

The significant factor here about OpenGL generally is, it’s not software. It’s a specification for an API. And for the moment, it remains the industry standard specification for interfacing with the GPU. Unlike their move to embrace new variations of USB and Thunderbolt over the years, or indeed the company’s own efforts in the past to advance OpenGL, Apple isn’t proposing an alternative standard. They’re just pulling out of a standard the entire industry supports, without any replacement.

And this impacts a range of cross-platform software, open source software, and the ability to share code and research across operating systems, including but not limited to:

Video editing
Post production
Generative graphics
Digital art
VJing and live visual software
Creative coding
Machine learning and neural network tools

Cross-platform portability for those use cases meets a significant set of needs. Educators wanting to teach shader programming now face students on Apple hardware having to use a different language, for example. Gamers wanting access to the largest possible library – as on services like Steam – will now likely see more platform-exclusive titles instead on Apple hardware. And pros wanting access to specific open source, high-end video tools… well, here’s yet another reason to switch to Windows or Linux.
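To put the educators’ problem in concrete terms, here is the same trivial “paint everything orange” fragment shader twice – once in GLSL, once in Metal Shading Language – embedded as C++ string literals the way a renderer might carry them. (A sketch for comparison only, not a working pipeline.)

// GLSL: what students write everywhere else (OpenGL and friends)
const char* glslFragment = R"(
#version 330 core
out vec4 fragColor;
void main() {
    fragColor = vec4(1.0, 0.5, 0.2, 1.0);
}
)";

// Metal Shading Language: the same exercise on Apple hardware
const char* metalFragment = R"(
#include <metal_stdlib>
using namespace metal;
fragment float4 solidColor() {
    return float4(1.0, 0.5, 0.2, 1.0);
}
)";

Trivial here, but the differences compound quickly once buffers, uniforms, and pipeline setup enter the lesson plan.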

This doesn’t so much impact developers who rely on existing engines and libraries that target Metal for them. So, for instance, developing in the Unity Game Engine means your creation can use Metal on Apple platforms and OpenGL elsewhere. But because of the size of the ecosystem here, that won’t cover a lot of other use cases.

And yeah, I’m serious about Linux as a player here. As Microsoft and Apple continue to emphasize consumers over pros, cramming huge updates over networks and trying to foist them on users, desktop Linux has quietly gotten a lot more stable. For pro video production, post production, 3D, rendering, machine learning, research – and even a growing niche of people working in audio and music – Linux can simply out-perform its proprietary relatives and save money and time.

So what happened to Vulkan?

Apple could have joined with the rest of the industry in supporting a new low-level API for computation and graphics. That standard is now doubly important as machine learning technology drives new ideas across art, technology, and society.

https://www.khronos.org/vulkan/

And apart from the value of it being a standard, Apple would break with important hardware partners here at their own peril. Yes, Apple makes a lot of their own hardware under the hood – but not all of it. Will they also make a move to proprietary graphics chips on the Mac, and will those keep up with PC offerings? (There is currently a Vulkan SDK for Mac. It’s unclear exactly how it will evolve in the wake of this decision.)

ExtremeTech have a scathing review of the situation. It’s a must-read, as it clearly breaks down the different pipelines and specs and how they work. But it also points out that Apple have tended to lag not just in hardware adoption but in their in-house support efforts. That suggests you get an advantage from being on Windows or Linux, generally:

Apple brings its Metal API to OS X 10.11, kicks Vulkan to the curb

Updated: Yes, of course you can run MoltenVK – that’s Vulkan, OpenGL’s successor, not OpenGL itself – atop Metal. In fact, here’s a demo from 2016. (Thanks, George Toledo!)

https://moltengl.com/moltenvk/

https://github.com/KhronosGroup/MoltenVK
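If you want to see the translation layer for yourself, the usual smoke test is small. A minimal sketch, assuming you’ve installed MoltenVK (or the Vulkan SDK for Mac mentioned above): list the instance extensions the Vulkan loader reports – calls that MoltenVK services on top of Metal.

#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Ask the loader how many instance extensions it knows about...
    uint32_t count = 0;
    vkEnumerateInstanceExtensionProperties(nullptr, &count, nullptr);
    // ...then fetch and print them.
    std::vector<VkExtensionProperties> extensions(count);
    vkEnumerateInstanceExtensionProperties(nullptr, &count, extensions.data());
    for (const auto& ext : extensions)
        std::printf("%s\n", ext.extensionName);
    return 0;
}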

That’s little comfort for broader backwards compatibility with “legacy” OpenGL, but it does bode reasonably well for the future. And, you know … fish tornadoes.

Side note: that’s not just any fish tornado. The credit goes to Robert Hodgin, the creative coding artist aka flight404, responsible for many, many generative demos over the years – including a classic iTunes visualizer.

Fragmentation or standards

Let’s be clear – even with OpenGL and OpenCL, there’s loads of fragmentation in the fields I mention, from hardware to firmware to drivers to SDKs. Making stuff work everywhere is messy.

But users, researchers, and developers do reap real benefits from cross-platform standards and development. And Metal alone clearly doesn’t provide that.

Here’s my hope: I hope that while deprecating OpenGL/CL, Apple does invest in Vulkan and its existing membership in Khronos Group (the industry consortium that supports that API as well as OpenGL). Apple following up this announcement with some news on Vulkan and cross-platform support – and how the transition to that and Metal would work – could turn the mood around entirely.

Apple’s reputation may be proprietary, but this is also the company that pushed USB and Thunderbolt, POSIX and WebKit, that used a browser to sell its first phone, and that was a leading advocate (ironically) for OpenGL and OpenCL.

As game directors and artists and scientists and thinkers all explore the possibilities of new graphics hardware, from virtual reality to artificial intelligence, we have some real potential ahead. The platforms that win, I think, will be the ones that maximize capabilities and minimize duplication of effort.

And today, at least, Apple are leaving a lot of those users in the dark about just how that future will work.

I’d love your feedback. I’m ranting here partly because I know a lot of the most interesting folks working on this are readers, so do please get in touch. You know more than I do, and I appreciate your insights.

More:

https://developer.apple.com/macos/whats-new/

https://www.khronos.org/opengl/wiki/FAQ

https://www.khronos.org/vulkan/

https://developer.apple.com/documentation/metalperformanceshaders

… and what this headline is referencing

The post Apple to open source, cross-platform GPU tech: drop dead? appeared first on CDM Create Digital Music.