Visualize pitch like John Coltrane with this mystical image

Some musicians see Islamic mysticism; some the metaphysics of Einstein. But whether spiritual, theoretical, or both, even one John Coltrane pitch wheel is full of musical inspiration.

One thing’s certain – if you want your approach to pitch to be as high-tech and experimental as your instruments, Coltrane’s sketchbook could easily keep you busy for a lifetime.

Unpacking the entirety of John Coltrane’s music-theoretical achievements could fill tomes – even this one picture has inspired a wide range of interpretations. But let’s boil it down just to have a place to start. At its core, the Coltrane diagram is a circle of fifths – a way of representing the twelve tones of equal temperament in a continuous circle, commonly used in Western music theory (jazz, popular, and classical alike). And any jazz player has some basic grasp of this and uses it in everything from soloing to practice scales and progressions.
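The circle itself is simple to construct: step a perfect fifth (7 semitones) at a time around the twelve tones, wrapping modulo 12. A quick sketch in Python (note spellings are my own simplification):

```python
# The twelve tones of equal temperament, in chromatic order.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def circle_of_fifths(start="C"):
    """Walk the chromatic scale in steps of a perfect fifth (7 semitones)."""
    idx = NOTES.index(start)
    return [NOTES[(idx + 7 * i) % 12] for i in range(12)]

print(circle_of_fifths())
# ['C', 'G', 'D', 'A', 'E', 'B', 'Gb', 'Db', 'Ab', 'Eb', 'Bb', 'F']
```

Because 7 and 12 are coprime, the walk visits all twelve tones before returning to the start – which is why the circle closes.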

What makes Coltrane’s version interesting is the additional layers of annotation – both for what’s immediately revealing, and what’s potentially mysterious.

Sax player and blogger Roel Hollander pulled together a friendly analysis of what’s going on here. And while he himself is quick to point out he’s not an expert Coltrane scholar, he’s done a nice job of compiling some different interpretations.

JOHN COLTRANE’S TONE CIRCLE

See also Corey Mwamba’s analysis, upon which a lot of that story draws

Open Culture has commented a bit on the relations to metaphysics and the interpretation of various musicians, including vitally Yusef Lateef’s take:

John Coltrane Draws a Picture Illustrating the Mathematics of Music

Plus if you like this sort of thing, you owe it to yourself to find a copy of Yusef Lateef’s Repository of Scales and Melodic Patterns [Peter Spitzer blog review] – that’s related to what you (might) see here.

Take it with some grains of salt, since there doesn’t seem to be a clear story as to why Coltrane even drew this, but there are some compelling details to this picture. The two-ring arrangement gives you two whole tone scales – one on C, and one on B – in such a way that you get intervals of fourths and fifths if you move diagonally between them.
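That cross-ring property is easy to verify: a perfect fifth spans 7 semitones – an odd number – so it always lands in the opposite whole tone collection. A quick Python check (note spellings are my own simplification):

```python
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def whole_tone(start):
    """The six-note whole tone collection starting on `start`."""
    i = NOTES.index(start)
    return {NOTES[(i + 2 * k) % 12] for k in range(6)}

ring_c = whole_tone("C")  # C D E Gb Ab Bb
ring_b = whole_tone("B")  # B Db Eb F G A

def fifth_up(note):
    return NOTES[(NOTES.index(note) + 7) % 12]

# A fifth (7 semitones, odd) always crosses from one whole tone
# ring to the other -- the "diagonal" moves on the diagram.
for note in sorted(ring_c):
    assert fifth_up(note) in ring_b
```

The same holds for fourths (5 semitones), since any odd interval flips between the two collections.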

Scanned image of the mystical Coltrane tone doodle.

Corey Mwamba’s simplified diagram.

That’s already a useful way of visualizing relations of keys in a whole tone arrangement, which could have various applications for soloing or harmonies. Where this gets more esoteric is the circled bits, which highlight some particular chromaticism – connected further by a pentagram highlighting common tones.

Even reading that crudely, this can be a way of imagining diminished/double diminished melodic possibilities. Maybe the most suggestive take, though, is deriving North Indian-style modes from the circled pitches. Whether that was Coltrane’s intention or not, this isn’t a bad way of seeing those modal relationships.

You can also see some tritone substitutions and plenty of chromaticism and the all-interval tetrachord if you like. Really, what makes this fun is that like any such visualization, you can warp it to whatever you find useful – despite all the references to the nature of the universe, the essence of music is that you’re really free to make these decisions as the mood strikes you.

I’m not sure this will help you listen to Giant Steps or A Love Supreme with any new ears, but I am sure there are some ideas about music visualization or circular pitch layouts to try out. (Yeah, I might have to go sketch that on an iPad this week.)

(Can’t find a credit for this music video, but it’s an official one – more loosely interpretive and aesthetic than functional, released for Untitled Original 11383. Maybe someone knows more… UMG’s Verve imprint put out the previously unreleased Both Directions At Once: The Lost Album last year.)

How might you extend what’s essentially a (very pretty) theory doodle to connect Coltrane to General Relativity? Maybe it’s fairer to say that Coltrane’s approach to mentally freeing himself to find the inner meaning of the cosmos is connected, spiritually and creatively. Sax player and astrophysics professor (nice combo) Stephon Alexander makes that cultural connection. I think it could be a template for imagining connections between music culture and physics, math, and cosmology.

Images (CC-BY-ND) Roel’s World / Roel Hollander

and get ready to get lost there:

https://roelhollander.eu/

The post Visualize pitch like John Coltrane with this mystical image appeared first on CDM Create Digital Music.

Azure Kinect promises new motion, tracking for art

Gamers’ interest may come and go, but artists are always exploring the potential of computer vision for expression. Microsoft this month has resurrected the Kinect, albeit in pricey, limited form. Let’s fit it to the family tree.

Time flies: musicians and electronic artists have now had access to readily available computer vision since the turn of this century. That initially looked like webcams, paired with libraries like the free OpenCV (still a viable option), and later repurposed gaming devices from Sony and Microsoft platforms.

And then came Kinect. Kinect was a darling of live visual projects and art installations, because of its relatively sophisticated skeletal tracking and various artist-friendly developer tools.

History time

A full ten years ago, I was writing about the Microsoft project and its interactions, in its first iteration as the pre-release Project Natal. Xbox 360 support followed in 2010, Windows support in 2012 – while digital artists quickly hacked in Mac (and rudimentary Linux) support. Artists in music and digital media quickly followed.

For those of you just joining us, Kinect shines infrared light at a scene, and takes an infrared image (so it can work irrespective of other lighting) which it converts into a 3D depth map of the scene. From that depth image, Microsoft’s software can also track the skeleton image of one or two people, which lets you respond to the movement of bodies. Microsoft and partner PrimeSense weren’t the only ones to try this scheme, but they were the ones to ship the most units and attract the most developers.
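As a rough sketch of the idea (toy values, not the actual Kinect SDK), a depth map is just per-pixel distances, and the crudest form of foreground segmentation is a threshold on those distances:

```python
# Toy depth map: per-pixel distances in millimeters, as a depth
# camera might report them. A person standing ~1.2 m away shows up
# as a cluster of similar values against a distant background.
depth = [
    [4000, 4000, 1200, 4000],
    [4000, 1150, 1100, 4000],
    [4000, 1200, 1180, 1250],
]

NEAR, FAR = 500, 2000  # keep only pixels between 0.5 m and 2 m

# Crude foreground mask: True where a pixel falls in the band.
# Real skeletal tracking builds far more on top of this.
mask = [[NEAR < d < FAR for d in row] for row in depth]
```

Skeleton fitting then works on that segmented depth data, classifying body parts and estimating joint positions – a much harder problem than the threshold suggests.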

We’re now on the third major revision of the camera hardware.

2010: Original Kinect for Xbox 360. The original. Proprietary connector with breakout to USB and power. These devices are far more common, as they were cheaper and shipped more widely. Despite the name, they do work with open drivers on desktop systems.

2012: Kinect for Windows. Looks and works almost identically to Kinect for 360, with some minor differences (near mode).

Raw use of depth maps and the like for the above yielded countless music videos, and the skeletal tracking even more numerous and typically awkward “wave your hands around to play the music” examples.

Here’s me with a quick demo for the TED organization, preceded by some discussion of why I think gesture matters. It’s… slightly embarrassing, only in that it was produced on an extremely tight schedule, and I think the creative exploration of what I was saying about gesture just wasn’t ready yet. (Not only had I not quite caught up, but camera tech like what Microsoft is shipping this year is far better suited to the task than the original Kinect camera was.) But the points I’m making here have some fresh meaning for me now.

2013: Kinect for Xbox One. Here’s where things got more interesting – because of a major hardware upgrade, these cameras are far more effective at tracking and yield greater performance.

  • Active IR tracking in the dark
  • Wider field of vision
  • 6 skeletons (people) instead of two
  • More tracking features, with additional joints and creepier features like heart rate and facial expression
  • 1080p color camera
  • Faster performance/throughput (which was key to more expressive results)

Kinect One, the second camera (confusing!), definitely allowed more expressive applications. One high point for me was the simple but utterly effective work of Chris Milk and team, “The Treachery of Sanctuary.”

And then it ended. Microsoft unbundled the camera from Xbox One, meaning developers couldn’t count on gamers owning the hardware, and quietly discontinued the last camera at the end of October 2017.

Everything old is new again

I have mixed feelings – as I’m sure you do – about these cameras, even with the later results on Kinect One. For gaming, the devices were abandoned – by gamers, by developers, and by Microsoft as the company ditched the Xbox strategy. (Parallel work at Sony didn’t fare much better.)

It’s hard to keep up with consumer expectations. By implying “computer vision,” any such technology has to compete with your own brain – and your own brain is really, really, really good. “Sensors” and “computation” are all merged in organic harmony, allowing you to rapidly detect the tiniest nuance. You can read a poker player’s tell in an instant, while Kinect will lose the ability to recognize that your leg is attached to your body. Microsoft launched Project Natal talking about seeing a ball and kicking a ball, but… you can do that with a real ball, and you really can’t do that with a camera, so they quite literally got off on the wrong foot.

It’s not just gaming, either. On the art side, the very potential of these cameras to make the same demos over and over again – yet another magic mirror – might well be their downfall.

So why am I even bothering to write this?

Simple: the existing, state-of-the-art Kinect One camera is now available on the used market for well under a hundred bucks – less than the cost of a mid-range computer mouse. Microsoft’s gaming business whims are your budget buy. The computers to process that data are faster and cheaper. And the software is more mature.

So while digital art has long been driven by novelty … who cares? Actual music and art making requires practice and maturity of both tools and artist. It takes time. So oddly while creative specialists were ahead of the curve on these sorts of devices, the same communities might well innovate in the lagging cycle of the same technology.

And oh yeah – the next generation looks very powerful.

Kinect: The Next Generation

Let’s get the bad news out of the way first: the new Kinect is both more expensive ($400) and less available (launching only in the US and China… in June). Ugh. And that continues Microsoft’s trend here of starting with general purpose hardware for mass audiences and working up to … wait, working up to increasingly expensive hardware for smaller and smaller groups of developers.

That is definitely backwards from how this is normally meant to work.

But the good news here is unexpected. Kinect was lost, and now is found.

The safe bet was that Microsoft would just abandon Kinect after the gaming failure. But to the company’s credit, they’ve pressed on, with some clear interest in letting developers, researchers, and artists decide what this thing is really for. Smart move: those folks often come up with inspiration that doesn’t fit the demands of the gaming industry.

So now Kinect is back, dubbed Azure Kinect – Microsoft is also hell-bent on turning Azure “cloud services” into a catch-all solution for all things, everywhere.

And the hardware looks … well, kind of amazing. It might be described as a first post-smartphone device. Say what? Well, now that smartphones have largely finalized their sensing capabilities, they’ve oddly left the arena open to other tech defining new areas.

For a really good write-up, you’ll want to read this great run-down:


All you need to know on Azure Kinect
[The Ghost Howls, a VR/tech blog, see also a detailed run-down of HoloLens 2 which also just came out]

Here are the highlights, though. Azure Kinect is the child of Kinect and HoloLens. It’s a VR-era sensor, but standalone – which is perfect for performance and art.

Fundamentally, the formula is the same – depth camera, conventional RGB camera, some microphones, additional sensors. But now you get more sensing capabilities and substantially beefed-up image processing.

  • 1MP depth camera (not 640×480) – straight off of HoloLens 2, Microsoft’s augmented reality platform
  • Two modes: wide and narrow field of view
  • 4K RGB camera (with standard USB camera operation)
  • 7-microphone array
  • Gyroscope + accelerometer

And it connects either by USB-C (which can also be used for power) or as a standalone camera with a “cloud connection.” (You know, I’m pretty sure that means it has a wifi radio, but oddly all the tech reporters who talked to Microsoft bought the “cloud” buzzword and no one says so outright. I’ll double-check.)

Also, now Microsoft supports both Windows and Linux. (Ubuntu 18.04 + OpenGL v 4.4).

Downers: 30 fps operation, limited range.

Something something, hospitals or assembly lines, Azure services – something that looks like an IBM / Cisco ad:

That in itself is interesting. Artists using the same thing as gamers sort of … didn’t work well. But artists using the same tool as an assembly line is something new.

And here’s the best part for live performance and interaction design – you can freely combine as many cameras as you want, and sync them without any weird tricks.

All in all, this looks like it might be the best networked camera, full stop, let alone best for tracking, depth sensing, and other applications. And Microsoft are planning special SDKs for the sensor, body tracking, vision, and speech.

Also, the fact that it doesn’t plug into an Xbox is a feature, not a bug to me – it means Microsoft are finally focusing on the more innovative, experimental uses of these cameras.

So don’t write off Kinect now. In fact, with Kinect One so cheap, it might be worth picking one up and trying Microsoft’s own SDK just for practice.

Azure Kinect DK preorder / product page

aka.ms/kinectdocs

The post Azure Kinect promises new motion, tracking for art appeared first on CDM Create Digital Music.

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Electron (a popular cross-platform JavaScript tool), though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, applying even GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.
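To make that concrete: a rank-2 tensor is just a matrix, and the workhorse operation is multiply-and-accumulate – here in plain Python, which an engine like TensorFlow effectively runs at massive scale on GPUs:

```python
# A rank-2 tensor is a matrix. Multiplying matrices -- the building
# block of neural network layers -- is just rows of multiply-and-add.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

w = [[1, 2], [3, 4]]   # a tiny "weights" matrix
x = [[5], [6]]         # a tiny input column vector

print(matmul(w, x))  # [[17], [39]]
```

A network layer is essentially this operation plus a nonlinearity, repeated millions of times – which is what the “flow” of tensors refers to.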

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

That also has its own Ableton Live device, from a couple years back.

https://magenta.tensorflow.org/nsynth-instrument

NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, as they’re something a bit unique – and you can again play around in Ableton Live.

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that algorithmically loops over and over. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly means that you can predict sequences more and more effectively given a particular data set.

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.
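To make the training-set point concrete, here’s a deliberately tiny stand-in – a first-order Markov model, not Magenta’s actual RNN/VAE architecture – where the “model” is nothing but statistics of whatever melodies you feed it:

```python
import random
from collections import defaultdict

def train(melodies):
    """Learn which MIDI note tends to follow which, from a corpus."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the learned transitions to produce a new melody."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = table.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return out

# Feed it bluegrass and you get bluegrass-ish transitions;
# feed it plainchant and you get plainchant-ish ones.
corpus = [[60, 62, 64, 62, 60], [60, 64, 62, 60]]
print(generate(train(corpus), 60, 5))
```

The output can only ever recombine transitions present in the corpus – the same dependence on training data that holds, in a far richer way, for Magenta’s models.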

One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence they chose this new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and length in bars.
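Temperature sampling itself is a standard technique (this sketch is mine, not Magenta Studio’s actual code): divide the model’s output scores by the temperature before converting them to probabilities, so low values sharpen the distribution toward the most likely choice and high values flatten it toward uniform:

```python
import math
import random

def sample(logits, temperature, rng=None):
    """Draw an index from logits after temperature scaling."""
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):            # inverse-CDF sampling
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Low temperature: nearly always picks the highest-scoring option.
print(sample([1.0, 5.0, 2.0], 0.01, random.Random(0)))  # 1
```

At high temperatures the same call starts returning all three indices – which is the “predictable vs. unpredictable” control the slider exposes.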

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is fairly set up with expectations about what a drum kit is, and with melodies around a 12-tone equal tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them.

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
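Schematically, MusicVAE-style interpolation blends latent vectors rather than notes directly: encode two clips, interpolate between their latent codes, decode the result. In this sketch the encode/decode steps are omitted as hypothetical stand-ins, leaving just the blend:

```python
def lerp(z1, z2, t):
    """Linear interpolation between two latent vectors, t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(z1, z2)]

# Hypothetical latent codes for two encoded clips.
z_a = [0.0, 1.0, -2.0]
z_b = [4.0, 1.0, 2.0]

# Five steps from clip A's code to clip B's code; decoding each
# step would yield a clip partway between the two inputs.
steps = [lerp(z_a, z_b, t / 4) for t in range(5)]
print(steps[2])  # midpoint: [2.0, 1.0, 0.0]
```

The reason to blend in latent space rather than on raw notes is that the model’s decoder turns every intermediate point into a coherent melody, instead of a literal crossfade of pitches.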

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

Where this could go next

There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with the history of pure technical inquiries past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.

As Jesse Engel tells CDM:

We’re a research group (not a Google product group), which means that Magenta Studio is not static and a lot more interesting models are probably on the way.

Things like more impressive MIDI generation (https://magenta.tensorflow.org/music-transformer – check out “Score Conditioned”)

And state of the art transcription: (https://magenta.tensorflow.org/onsets-frames)

And new controller paradigms:
(https://magenta.tensorflow.org/pianogenie)

Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.

So okay, music makers – have at it:

g.co/magenta
g.co/magenta/studio

The post Magenta Studio lets you use AI tools for inspiration in Ableton Live appeared first on CDM Create Digital Music.

What could make APC Live, MPC cool: Akai’s new software direction

Akai tipped their hand late last year that they were moving more toward live performance. With APC Live hardware leaked and in the wild, maybe it’s time to take another look. MPC software improvements might interest you with or without new hardware.

MPC 2.3 software dropped mid-November. We missed talking about it at the time. But now that we’re reasonably certain (unofficially) that Akai is releasing new hardware, it puts this update in a new light. Background on that:

APC as standalone hardware? Leaked Akai APC Live

Whether or not the leaked APC Live hardware appeals to you, Akai are clearly moving their software in some new directions – which is relevant whatever hardware you choose. We don’t yet know if the MPC Live hardware will get access to the APC Live’s Matrix Mode, but it seems a reasonable bet some if not all of the APC Live features are bound for MPC Live, too.

And MPC 2.3 added major new live performance features, as well as significant internal synths, to that standalone package. Having that built in means you get it even without a computer.

New in 2.3:

Three synths:

  • A vintage-style, modeled analog polysynth
  • A bass synth
  • A tweakable, physically modeled electric piano

Tubesynth – an analog poly.

Electric’s physically-modeled keys.

Electric inside the MPC Live environment.

As with NI’s Maschine, each of those can be played from chords and scales with the pads mode. But Maschine requires a laptop, of course – MPC Live doesn’t.

A new arpeggiator, with four modes of operation, ranging from traditional vintage-style arp to more modern, advanced pattern playback
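Generically – these mode names are mine, not necessarily Akai’s – an arpeggiator just cycles the notes of a held chord in different orders:

```python
def arpeggiate(chord, mode, steps):
    """Cycle a held chord (MIDI note numbers) in a given pattern."""
    notes = sorted(chord)
    if mode == "up":
        cycle = notes
    elif mode == "down":
        cycle = notes[::-1]
    elif mode == "updown":
        # up then back down, without repeating the endpoints
        cycle = notes + notes[-2:0:-1]
    elif mode == "random":
        import random
        return [random.choice(notes) for _ in range(steps)]
    return [cycle[i % len(cycle)] for i in range(steps)]

print(arpeggiate([60, 64, 67], "updown", 8))
# [60, 64, 67, 64, 60, 64, 67, 64]
```

“Advanced pattern playback” modes typically replace that simple cycle with an arbitrary user-defined sequence of chord indices, but the core mechanism is the same.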

And there’s an “auto-sampler.”

That auto-sampler looks even more relevant when you see the APC Live. On MPC Live (and by extension APC Live), you can sample external synths, sample VST plug-ins, and even capture outboard CV patches.

Of course, this is a big deal for live performance. Plug-ins won’t work in standalone mode – and can be CPU hogs, anyway – so you can conveniently capture what you’re doing. Got some big, valuable vintage gear or a modular setup you don’t want to take to the gig? Same deal. And then this box gives you the thing modular instruments don’t do terribly well – saving and recalling settings – since you can record and restore those via the control voltage I/O (also found on that new APC Live). The auto-sampler is an all-in-one solution for making your performances more portable.
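The auto-sampling idea, schematically (`play_note` and `record` here are hypothetical placeholders for whatever MIDI and audio I/O the implementation actually uses):

```python
def auto_sample(play_note, record, pitches=range(24, 109, 3),
                velocities=(64, 127), length_s=2.0):
    """Step through pitches and velocities on a source instrument,
    triggering each note and recording the result into a sample map."""
    samples = {}
    for pitch in pitches:
        for vel in velocities:
            play_note(pitch, vel)                  # trigger the source
            samples[(pitch, vel)] = record(length_s)  # capture audio
    return samples
```

The resulting sample map is what gets loaded back as a playable instrument – no plug-in, synth, or modular patch needed at the gig.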

Full details of the 2.3 update – though I expect we’ve got even more new stuff around the corner:

http://www.akaipro.com/pages/mpc-2.3-desktop-software-and-firmware-update

With or without the APC Live, you get the picture. While Ableton and Native Instruments focus on studio production and leave you dependent on the computer, Akai’s angle is creating an integrated package you can play live with – like, onstage.

Sure enough, Akai have been picking up large acts to their MPC Live solution, too – John Mayer, Metallica, and Chvrches all got name-dropped. Of those, let’s check out Chvrches – 18 minutes in, the MPC Live gets showcased nicely:

It makes sense Akai would come to rely on its own software. When Akai and Novation released their first controllers for Ableton Live, Ableton had no hardware of their own, which changed with Push. But of course even the first APC invoked the legendary MPC legacy – and Akai has for years been working on bringing desktop software functionality to the MPC name. So, while some of us (me included) first suspected a standalone APC Live might mean a collaboration with Ableton, it does make more sense that it’s a fully independent Akai-made, MPC-style tool.

It also makes sense that this means, for now, more internal functionality. (The manual reference to “plugins” in the APC Live manual that leaked probably means those internal instruments and effects.) That has more predictability as far as resource consumption, and means avoiding the licensing issues and the like necessary to run plug-ins in embedded Linux. This could change, by the way – Propellerhead’s Rack Extensions format now is easily portable to ARM processors, for example – but that’s another story. As far as VST, AU, and AAX, portability to embedded hardware is still problematic.

The upshot of this, though, is that InMusic at least has a strategy for hardware that functions on its own – not just as a couple of one-off MPC pieces, but in terms of integrated hardware/software development across a full product line. Native Instruments, Ableton, and others might be working on something similar that lets you untether from the computer, but InMusic is shipping now, and they aren’t.

Now the question is whether InMusic can capitalize on its MPC legacy and the affection for the MPC and APC brands and workflows – and get people to switch from other solutions.

The post What could make APC Live, MPC cool: Akai’s new software direction appeared first on CDM Create Digital Music.

Streaming music is coming to DJ software, but one step at a time

Streaming is coming to DJing. Last week saw new announcements from Tidal, SoundCloud, Serato, and several other software makers. But progress is uneven – expect these features at first to be primarily about discovery, not what you do at a gig.

The news this week:

SoundCloud announced coming support in Traktor, Serato, Virtual DJ, Mixvibes, and more:
Just announced: Soon you can access SoundCloud’s catalog of music directly through your DJ software [SoundCloud blog]

Serato announced support for SoundCloud Go+ and TIDAL premium and HiFi subscriptions in forthcoming DJ Lite and DJ Pro releases. They didn’t post even a news item, beyond sending a press release, but TIDAL added this minisite:

http://tidal.com/serato

The markets

Before talking about the technology and the deals here, we first need to talk about what “DJ” means. Across that spectrum, we can identify three really different poles, as far as use cases:

Wedding DJs (read: people taking requests). This is the big one. You can tell, because when streaming site Pulselocker shut down, there were screams from people who were playing wedding gigs and suddenly lost access to their music. This isn’t just about a technological shift, either. As American music markets have fragmented and mainstream pop music has lost its hegemony – and as DJing and music consumption have become more global – the amount of music people might request has grown, too.

Whatever you think of wedding DJs, you can imagine weddings as a place where global cultural and technological changes are radical and inseparable. And that’s good, because I don’t know about you, but if I have to hear “At Last” one more time, I may try to drown myself in a punch bowl.

If you have to take requests, access to all music becomes a need, not a luxury.

DJs playing hits. There’s also a club DJ crowd looking for big hits, too, which tends to overlap in some ways with the wedding DJs – they’re going for popularity over digging deep in a particular genre. That means that certain big hits that a particular streaming site has (cough, Tidal) become relevant to both these groups. (I was recently schooled on the importance

Underground DJs. More at the CDM end of the pond, you’ve got DJs who are trying to discover new music. Tidal might not be so relevant here, but SoundCloud sure is.

If you routinely tab back and forth between SoundCloud and your DJ app, integrating the two might have appeal – even for underground digital diggers.

The question of what DJs in each of these groups would want to do with streaming also varies. There’s discovery – some people are looking to play tracks on their digital DJ decks without first downloading, or for integration of streaming sites. There’s playing in actual gigs, with a live Internet connection. Then there’s playing gigs where you don’t have an Internet connection – more often the norm – where you might want tracks from a streaming collection to be synced or cached to storage.

How the DJ streaming landscape just shifted

Amsterdam Dance Event last week tends to center on the business of electronic dance music, so it was a stage for some of the players to crow about new achievements – even making some of those announcements before the solution is fully available.

In particular, DJ software maker Serato and streaming site SoundCloud were vocal about their coming solutions.

Some takeaways:

These solutions are online only. Let’s start with the big disclaimer. Downloads are here to stay for now, because these services work only when online, and standalone decks are left out.

Streaming tracks are fully integrated – I’ve confirmed that at least with Serato, who say when you’re connected, the tracks cache and perform just like locally stored tracks. But that’s when you have an Internet connection.

Pulselocker, the service specifically focused around this idea, had offered the ability to store tracks locally. None of these integrations offers offline access, at least initially. I’ve been told by Serato that if you lose an Internet connection mid-track, you can at least continue playing that track; you just lose access to other streaming content.

At weddings or in clubs where you can rely on an Internet connection, I expect DJs who take requests will use the streaming functionality right away. For DJs who prepare music in advance, though, it’s probably a deal killer.

(Pulselocker was acquired by Beatport earlier this year, a sign that the big players were making their moves. And nothing yet offers offline functionality as Pulselocker did – blame licensing?)

SoundCloud and Serato are looking to get ahead of the curve – while we wait on Beatport and Pioneer. SoundCloud is partnering with all the major software vendors. (Only Algoriddim, whose djay product line for desktop and mobile is already integrated with Spotify, was missing.)

And Serato are leading the way with Tidal and SoundCloud integration, replacing their existing Pulselocker functionality.

Timeframe for both: “coming months.”

There’s reason to pre-announce something here, though, which is to try to steal some thunder from some market leaders. Beatport and Pioneer are of course dominant players here. We know both are readying solutions – Beatport making use of that aforementioned Pulselocker acquisition, presumably. We just don’t know when those solutions will become available; Pioneer CDJ hardware in particular is likely fairly far into the future.

Just don’t underestimate the Serato/Tidal combo, or even Serato/SoundCloud. Those are big partnerships for the US market and genres like hip hop, both of which are big and growing.

DJ compatibility is a way to sell you subscriptions. Yes, artists and labels get paid, but there’s another factor here – DJing is becoming so widespread that it’s a way to upsell music subscriptions. DJing really is music consumption now.

Use Traktor, Serato, Virtual DJ, Mixvibes, and others? SoundCloud hopes you’ll buy a top-tier SoundCloud Go+ subscription.

Using Serato, and want to play some top hits in high quality? Tidal can offer Premium (AAC) or HiFi (including lossless FLAC and ALAC streaming) tiers.

In case you doubt that, both services will work with full integration using just a 30-day trial.

SoundCloud still lags in quality. Just as on the site, SoundCloud for now is limited to 128kbps at launch, as reported by DJ Tech Tools.

Yes, streaming DJs could represent a new revenue source. This is one potential bright spot here on the creator side. Assuming you can reach DJs who might not have purchased downloads on Bandcamp, Beatport, and the like, the streaming sites will divvy up those subscription fees and calculate revenue sharing for track plays by DJs.
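It’s worth making those revenue mechanics concrete. Subscription streaming services generally use a pro-rata split: the subscription pool is divided across tracks by their share of total plays. Here’s a minimal sketch of that model – the function, the numbers, and the pro-rata assumption are all hypothetical, since none of these services has published how DJ plays will actually be counted:

```python
def pro_rata_payouts(subscription_pool, plays_by_track):
    """Split a revenue pool across tracks by their share of total plays."""
    total_plays = sum(plays_by_track.values())
    if total_plays == 0:
        return {track: 0.0 for track in plays_by_track}
    return {
        track: subscription_pool * plays / total_plays
        for track, plays in plays_by_track.items()
    }

# Hypothetical month: a $70 pool (after the service's cut) split across
# three tracks that DJs streamed during gigs.
payouts = pro_rata_payouts(70.0, {"track_a": 500, "track_b": 300, "track_c": 200})
# → {'track_a': 35.0, 'track_b': 21.0, 'track_c': 14.0}
```

The catch in a pool model: per-play payouts shrink as total plays grow, which is exactly why labels find subscription revenue hard to predict.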

What does all this mean?

It’s easy to assume this is all meaningless. Serious DJs playing big club and festival gigs – or even underground DJs playing with dodgy Internet connections and meticulously organized USB thumb drives of music – you’re obviously not going anywhere near this when you play.

And those DJs taking requests at weddings and playing the latest dancefloor megahits, well, that’s relevant to you only if you’re producing those kinds of hits.

But there remains some potential here, even with these launch offerings, whenever they do materialize.

All but the most boutique labels and artists are trying to maximize exposure and squeeze revenue wherever they can. A whole lot of those labels do put up their music through distribution, meaning you can download directly on Bandcamp, for instance, but also stream catalogs on Spotify and iTunes. (Anyone doing digital distribution has likely seen long lists of weird streaming and download sites you’ve never even heard of, where your music gets dumped and … eventually ripped and put up on pirate music sites, too.)

If this gets more people on premium subscriptions, there’s hope. It’s better than people listening to your music on YouTube while you get paid next to nothing.

The real question here is how streaming integration looks. If discovering new music is really what this is about – at least until fast Internet becomes more ubiquitous – then the integrations need to actually make it easy to find music. That shouldn’t just be about some automated recommendation algorithm; it will require a whole new approach to DJ software and music tools. Or at the very least, these tools should make you want to sit at your DJ rig with some friends, punch up some new artist names and find tracks. They should be as appealing as going to a record store, thumbing through records, and putting them on turntables – in a virtual sense, anyway.

And what about ownership? I think it’s important for DJs to be able to differentiate between always-on access to all music everywhere, and their own music collection, even if the collection itself is virtual.

Why not put SoundCloud streaming in your DJ app, but offer one-click buying to add downloads?

Or why not use the cloud as a way to sync music you’ve already bought, rather than make it exclusively an overwhelming supply of music you don’t want, which you lose when you lose Internet access?

At the very least, labels who are already squeezed as it is are unlikely to savor the thought of losing download revenue in exchange for hard-to-track, hard-to-predict subscriptions. $10 a month or so seems utterly unsustainable. A lot of labels already barely break even when they pay for even basic PR and mastering services. Imagine the nightmare of having to invest more just to be found on streaming services, while earning less as flat fee subscriptions are divvied up.

There’s an idea here, but it’s far from ready. For now, the best strategy seems to be to keep your catalogs up to date across services, keep building close relationships with fans, and … wait and see. In a few months we should see more of what these offerings look like in practice, and it seems likely we’ll know more about where Pioneer, Beatport, and others plan to go next, too.

The post Streaming music is coming to DJ software, but one step at a time appeared first on CDM Create Digital Music.

Upload music directly to Spotify: streaming giant goes in new direction

Spotify has begun opening uploading not just to labels and distributors, but individual artists. And the implications of that could be massive, if the service is expanded – or if rivals follow suit.

On reflection, it’s surprising this didn’t happen sooner.

Among major streaming players, currently only SoundCloud lets individual artists upload music directly. Everyone else requires intermediaries, whether that’s labels or distributors. The absurdity of this system is that services like TuneCore have profited off streaming growth. In theory, that might have meant that music selections were more “curated” and less junk showed up online. In reality, though, massive amounts of music get dumped on all the streaming services, funneling money from artists and labels into the coffers of third-party services. That arrangement surely makes no sense for the likes of Spotify, Apple, Google, and others as they look to maximize revenue.

Music Business Worldwide reports that Spotify is starting to change that now:
Spotify opens the floodgates: artists can now upload tracks direct to the streaming platform for FREE

See also TechCrunch:

Spotify will now let indie artists upload their own music

What we know so far…

You’ll upload via a new Web-based upload tool. Check the tool and FAQ.

It’s invite-only for now. A “small group of artists” has access for testing and feedback, Spotify says.

It won’t cost anything, and access to releases will be streamlined. No fees, no commission taken – the deal is financially better. And you’ll be able to edit releases and delete music, which can be a draconian process now through distributors.

Regions are a big question. The tax section currently refers to the W9 – a tax form in use in the USA. So clearly the initial test is US-only; we’ll see what the plans are for other regions.

You have to look into the future before this really starts to matter, because it is so limited. But it could be a sign of things to come. And bottom line, Spotify can give you a better experience of what your music will be like on Spotify than anyone else can:

You’ll be able to deliver music straight to Spotify and plan for the perfect release day. You’ll see a preview of exactly how things will appear to listeners before you hit submit. And even after your music goes live, you’ll be in full control of your metadata with simple and quick edits.

Just like releasing through any other partner, you’ll get paid when fans stream your music on Spotify. Your recording royalties will hit your bank account automatically each month, and you’ll see a clear report of how much your streams are earning right next to the other insights you already get from Spotify for Artists. Uploading is free to all artists, and Spotify doesn’t charge you any fees or commissions no matter how frequently you release music.

Now in Beta: Upload your music in Spotify for Artists [Spotify Artist Blog]

The question really is how far they’ll expand, and how quickly. If they use all of Spotify for Artists, as their blog news item would seem to imply, then some 200,000 or so verified artist accounts will get the feature. (I’m one of those accounts.) 200,000 artists with direct access to Spotify could change the game for everyone.

The potential losers here are clear. First, there are the distributors. So-called “digital distribution” at this point really amounts to nothing of the sort. While these third parties will get your music out to countless streaming services, for most artists and labels, only the big ones like iTunes and Spotify count to most of their customers. At the entry level, these services often carry hefty ongoing subscription fees while providing little service other than submitting your music. More personalized distributors, meanwhile, often require locking in multi-year contracts. (I, uh, speak from experience on both those counts. It’s awful.)

Even the word “distributor” barely makes sense in the current digital context. Unlike with a big stack of vinyl, nothing is actually getting distributed. More complete management and monetization platforms do make sense – plus tools to deal with the morass of social media. But paying a toll to a complicated website to upload music for you? That defies reason.

The second potential loser that comes to mind is obviously SoundCloud. Once beloved by independent producers and labels, that service hasn’t delivered much on its promise of new features for its creators. (Most recently, they unveiled a weekly playlist that seems cloned from Spotify’s feature.) And SoundCloud’s ongoing popularity with users was dependent on having music that couldn’t be found elsewhere. If artists can upload directly to Spotify, well … uh, game over, SoundCloud. (Yeah, you still might want to upload embeddable players and previews, but other services could do that better.)

Just keep in mind: Spotify for Artists had 200,000 users at the beginning of summer. At least as of 2014, SoundCloud claimed 10 million creators. So it’s not so much that SoundCloud is losing as it is another sign that SoundCloud won’t really take on Spotify – just as Spotify (even with this functionality) doesn’t really attempt to take on SoundCloud. They’re different animals, and it’s frustrating that SoundCloud hasn’t done more to focus on that difference.

But all this still remains to be seen in action – it’s just a beta.

Just remember how this played out the first time. Spotify reached a critical mass of streaming, and Apple followed. If Spotify really are doing uploads, it’d make sense for Apple to do the same. After all, Apple makes the hardware (MacBook Pro, iPad) and software (GarageBand, Logic Pro X) a lot of musicians are using. And they attempted to capitalize on their strong relationships with artists once before, with the poorly designed Connect features (touted by Trent Reznor, no less). They just lag Spotify in this area – with the beta Apple Music for Artists and Apple Music Toolbox.

Meanwhile, I wouldn’t write off labels or genre-specific stores just yet. If you’re making music in a genre for a more specific audience, dumping your music on Spotify where it’s lost in the long tail is probably exactly what you don’t want to do. Streaming money from the big consumer services just isn’t reaching lesser known artists the way it is the majors and big acts. So I suspect that perversely, the upload feature could lead to an even closer relationship between, say, electronic label producers and labels and services tailored to their needs, like Beatport. (We’re waiting on Beatport’s own subscription offerings soon.)

But does this make sense? It sure does for the streaming service. Giving the actual content makers the tools to upload and control tags and other data should reduce labor costs for streaming services, entice more of the people making music, and build catalogs.

And what about you as a music maker? Uh, well… strap in, and we’ll find out.

The post Upload music directly to Spotify: streaming giant goes in new direction appeared first on CDM Create Digital Music.

Watch The Black Madonna DJ live from … inside a video game

Algorithmic selection, soulless streaming music, DJ players that tell you what to play next and then do it for you… let’s offer an alternative, much more fun and futuristic future. Let’s watch The Black Madonna DJ from inside a video game.

This is some reality-bending action here. The Black Madonna, an actual human, played an actual DJ set in an actual club, as that entire club set was transformed into a virtual rendition. That, in turn, was streamed as a promotion via Resident Advisor. Eat your heart out, Boiler Room. Just pointing cameras at people? So last decade.

From Panorama Bar to afterhours in the uncanny valley:

This is less to do with CDM, but… I enjoy watching the trailer about the virtual club, just because I seriously never get tired of watching Marea punching a cop. (Create Digital Suckerpunches?)

Um… apologies to members of law enforcement for that. Just a game.

So, back to why this is significant.

First, I think The Black Madonna actually doesn’t get nearly the credit she deserves for how she’s been able to make her personality translate across the cutthroat-competitive electronic music industry of the moment. There’s something to learn from her approach – from the fact that she’s relatable, both as she plays and in her outspoken public persona.

And somehow, seeing The Black Madonna go all Andy Serkis here puts that into relief. (See video at bottom.) I mean, what better metaphor is there for life in the 21st century? You have to put on a weird, uncomfortable, hot suit, then translate all the depth of your humanness into a virtual realm that tends to strip you of dimensions, all in front of a crowd of strangers online you can’t see. You have to be uncannily empathic inside the uncanny valley. A lot of people see the apparent narcissism on social media and assume they’re witnessing a solution to that formula, when in fact it may simply be a sign of desperation.

Marea isn’t the only DJ to appear in the Grand Theft Auto series, but she’s the one who seems to actually establish herself as a character in the game.

To put it bluntly: whatever you think of The Black Madonna, take this as a license to ignore the people who try to stop you from being who you are. It’s not going to get you success, but it is going to allow you to be human in a dehumanizing world.

And then there’s the game itself, now a platform for music. Rockstar Games have long been incurable music nerds – yeah, our people. That’s why you hear well curated music playlists all over the place, as well as elaborate interactive audio and music systems for industry-leading immersion. They’re nerds enough that they’ve even made some side trips like trying to make a beat production tool for the Sony PSP with Timbaland. (Full disclosure: I consulted on an educational program around that.)

This is unquestionably a commercial, mass market platform, but it’s nonetheless a pretty experimental concept.

Yes, yes – lots of flashbacks to the days of Second Life and its fledgling attempts to work as a music venue.

The convergence of virtual reality tech, motion capture, and virtual venues on one hand with music, the music industry, and unique electronic personalities on the other I think is significant – even if only as a sign of what could be possible.

I’m talking now to Rockstar to find out more about how they pulled this off. Tune in next time as we hopefully get a behind-the-scenes look at what this meant for the developers and artists.

While we wait on that, let’s nerd out with Andy Serkis about motion capture performance technique:

The post Watch The Black Madonna DJ live from … inside a video game appeared first on CDM Create Digital Music.

Eurorack’s prices are dropping, as Herr Schneider laments

With the proliferation of modules, the phrase “Eurorack bubble” has been floating around for a while. But now it appears to be translating into falling prices.

The basic problem is this: more interest means more demand, which translates into more manufacturers and more production. So far, so good. Then, more distributors pick up the goods – not just boutique operators like Schneider, but also bigger chains.

Where’s the problem? With too many modules in the marketplace and more big retailers carrying them, it’s easier for those retailers to start to squeeze manufacturers on price. Plus, the more modules out in the world, the greater the supply of used modules.

Andreas Schneider has chosen to weigh in on the issue personally. You can read his statement in German:

Jetzt auch XAOC bei Thomann ..

And in an English translation (with more commentary by Schneiderladen in English):

HerrSchneiders statement on current developments in the Eurorack market [stromkult]

There’s actually a lot there – though the banner revelation is seeing the price of some new modules suddenly drop by as much as 30%:

You asked for it: Due to the increased demand for Eurorack modules in Europe, even the large retailers for musical instruments are now filling the last corners of their warehouses and buying complete production runs from manufacturers and everything else they can get. Some manufacturers might be happy about this, but the flooding of the market already leads to a significant drop in prices here and there, some modules are already available with a 30% discount on the original calculated price and yet were still quite hot the other day!

As SchneidersLaden we have decided to go along with this development and of course offer corresponding products for the same price to our customers, although most of them have already bought them when the goods were still fresh and crisp! We’re almost a little sorry about that, but hopefully the hits are already produced and the music career is up and running? Nevertheless, sorry – but the decision for this way lies with the manufacturer and was not our recommendation!

By the way… we don’t advertise with moneyback-warranty… we’ve always practiced it. But please: get advice first, then buy – like in the good old days. Because it’s better to talk to your specialist retailer – we know what we are selling. And by the way: We do free shipping throughout Europe, and there are Thursdays when we are in the shop until nine o’clock in the evening …and real CHAOS serves creativity.

That had to be said – end of commercial break.

Okay, so some different messages. To manufacturers, on whom Schneider seems to place a lot of the blame, the message is to avoid glutting the market by selling so many units that they then lose their price margins. (That seems like good advice.) There’s also a “dance with the one that brung you” attitude here, but that’s probably fair, as well.

To buyers: work with specialists, and please research what you buy so you don’t saddle retailers and manufacturers with lots of returns. That seems like good advice, too.

(Hope I’ve paraphrased that fairly.)

It does seem there’s a looming problem beyond just what’s here, though. For the community to continue to expand, it will have to find more new markets. It does seem some saturation point is inevitable, and that could mean a shakeout of some manufacturers – though that isn’t necessarily a bad thing. The used market should also be a worry, though on the other hand, some people do always seem to buy new.

I’d echo what the two posts here say, which is that the synth-making world will likely stay healthy if manufacturers and consumers do some research and support one another.

Before anyone predicts the sky is falling, I’ve had a number of conversations with modular makers. Those with some experience seem to be doing just fine, even if some have expressed concern about the larger market and smaller and newer makers. That is, those with some marketing experience and unique products still see growth – but that growth may not translate to greener manufacturers who are trying to cram into what is becoming a crowded field.

Other thoughts? Let us know.


Synths may be spared worst of US trade war – for now

Following Moog Music’s alarmed email regarding US trade policy, some in the synth industry have responded that the immediate impact on manufacturers will be minimal.

Okay, so what’s going on?

The matter of discussion is still a document by the US Trade Representative regarding proposed tariffs or import taxes. These are 25% additional tariffs imposed by the USA on Chinese goods as they’re imported into the United States.

This document has changed over the past months. But the USTR does provide a public comment period for any changes – meaning that, while these tariffs are set to go into effect this Friday, the 6th of July, there theoretically shouldn’t be any additional changes.

And that’s where there’s a legitimate problem with the way Moog Music – and my own writing here on CDM – presented the problem.

Paul Schreiber, an engineer who has worked with multiple companies in the industry, posted a heated rebuttal to the Moog letter. That was not necessarily to defend Trump administration policy, but rather to suggest that Moog and others may have overreacted or mischaracterized the immediate realities of the policy.

See, previously:
Moog urges US citizens to take action to stop Trump import tax

Long story short: the idea is that the tariffs apply only to small components, like LEDs and potentiometers, but not to more significant expenses like the “circuit boards” Moog mentioned in their email.

And in fact, the cost of those really shouldn’t significantly impact the cost of US-made products, including Moog’s – even on an instrument that’s covered in LEDs and stuffed with circuits, those particular parts make up a relatively small portion of the cost. They’re not meaningless – shaving dollars and even cents off individual components is a pretty major part of the design process. But they’re not the sort of thing that would disrupt jobs or hurt the economy.

The area of confusion may be around circuit boards, as Schreiber observes – and I’m forced to admit, I agree with his assessment. He writes in a follow-up post:

If you search the tariff PDF for ‘printed circuit assemblies’, you get many hits (ATM machines, radiation detectors, etc) and here in Section 90, this one listing.
The ‘issue’ is that the listing of the tariff codes are an ABBREVIATED DESCRIPTION, not ‘as formally written’ in the ACTUAL codes.
The 9030 section of Chapter 90 is SPECIFICALLY talking about oscilloscopes. And this 9030.90.68 is referring to a non-US company, importing a ‘kit of parts’ into the USA, including a stuffed pc board, and then building a scope in the USA.

That’s not necessarily a definitive reading, and it is open to interpretation, but … I do tend to agree with it, unless someone can present a compelling alternative.

There are still reasons for the electronic musical instrument building community to be concerned. An escalating trade war between the USA and its trading partners could pose unexpected problems in the near future. And if these trading difficulties hurt the US economy, that impact could be felt, too. But it’s important to separate that from the immediate impact on making synths, which for the moment may indeed be negligible.

Other industries have greater cause to worry. The US automakers in particular are seriously concerned about costs for raw materials and retaliatory penalties abroad – but they’re impacted differently than US synthmakers are. Agricultural producers are concerned, too, as punitive measures cut off markets they need for exports. (And, okay, yes, synthesizers make up a much smaller part of the US economy than cars or agriculture, obviously. I guess we still have work to do? Or we have to figure out how you can ride synthesizers to different places, or … eat them.)

The DIY community I mentioned in my original post is harder hit, too, as a lot of their products are just these components – see Boing Boing’s story on maker products.

And there’s the fact that the US President is saying threatening things about the EU in general.

But in a heated political climate, it’s important to separate long-term risks from immediate problems, and to keep concerns in scale. For now, it’s reasonable for makers like Moog to protest isolationist or protectionist US trade policy, or heated up trade rhetoric and potential trade wars. But the rules going into effect this week, when viewed just inside the context of our industry, likely aren’t catastrophic – not yet.

I’m awaiting further comment from Moog on their activism and will update this story when that’s available.

Feature photo (CC-BY) Paul Downey.
