We’re living in an age of video and motion graphics. And now, not only can you get a free license of DaVinci Resolve to use pro-level tools, but this hack will let a standard music controller do a convincing impression of a $30,000 controller. Finally, visuals get as easily hands-on as music.
The Tachyon Post has a bunch of excellent tools for users of DaVinci Resolve. (Resolve is the editor / motion graphics / post tool from Blackmagic. It’s a pro-grade tool, but you can use a free license.) Most intriguing are controller mappings for the Akai APC40 and original Arturia Beatstep. If you don’t have an APC40 already, for instance, that’s an inexpensive used buy. (And maybe this will inspire other mappings, too.)
The APC mapping is the most interesting – and it’s ridiculous how much it does. Suddenly color grading, shapes and motion, tracking, and all the editing functions are tangible controls. The developer has also added mappings for Resolve FX. And it’s updated for the latest version, Resolve 15, released this summer.
The Beatstep version is pretty cool, as well, with similar functionality to the APC. This isn’t the Beatstep Pro but the “vintage” Beatstep. Unlike the APC, that controller hasn’t had quite the staying power on the music side – the Pro version was much better. But that means it’s even better to repurpose it for video, and of course then you have an effective mobile solution.
If you’re the sort of person to drop 30 grand on the actual controller, this probably isn’t for you. But what it does is to liberate all those workflows for the rest of us – to make them physical again. The APC is uniquely suited to the task because of a convenient layout of buttons and encoders.
I’m definitely dusting off an APC40 and a forgotten Beatstep to try this out. Maybe if enough of us buy a license, it’ll prompt the developer to try other hardware, too.
Super custom edition by the script developer, with some hardware hacks and one-off paint job. Want.
Meanwhile, where this really gets fun is with this gorgeous custom paint job. DIY musicians get to be the envy of all those studio video people.
Once upon a time, there was a Novation keyboard called the ReMOTE SL. That’s as in “remote control” of software. Times have changed, and you’ve got a bunch of gear to connect – and you may want your keyboard to work standalone, too. So meet the SL MkIII.
The additional features are significant enough that Novation is dropping the “remote” from the name. Now it’s just SL, whatever those letters are meant to stand for.
The story here is, you get a full-featured, eight-track sequencer – so you no longer have to depend on a computer for that function. And Novation promise some higher-spec features like expanded dynamic range (via higher scan rate). With lots of keyboards out there, the sequencer is really the lead. Circuit just paid off for keyboardists. Novation gets to merge their experience with Launchpad, with Circuit, with Web connectivity, and with analog and digital gear.
The 8-track, polyphonic sequencer is both a step and live sequencer, it records automation, and you can edit right from the keyboard.
Arpeggiator onboard, too.
USB, MIDI in, MIDI out, second MIDI thru/out
Clock/transport controls for MIDI and analog, which also run standalone – route that to whatever you like.
Three pedal inputs
Eight faders and eight knobs, handy for mixing (there’s DAW support for all major DAWs, plus dedicated Logic and Reason integration)
RGB everything: yep, over the keys, but also color-coded RGB on the pitch and mod wheel as track indicators. (I’m waiting for someone to release a monochromatic controller. You know it’s coming … back.)
Those RGB pads are not just velocity sensitive, but even have polyphonic aftertouch (more like higher-end dedicated pad controllers)
Cloud backup/restore of templates and sessions – a feature we saw unveiled on Novation’s Circuit
And of course there’s more mapping options with their InControl software and Mackie HUI support.
(Some notes from the specs: you do need separate 12V power, so you can’t use USB power. I don’t have weight notes yet, either.)
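Since polyphonic aftertouch is still rare outside dedicated pad controllers, it’s worth spelling out what it means at the MIDI level: a separate pressure value per held note, rather than one value for the whole keyboard. A minimal sketch in Python – the function names are mine, not anything from Novation:

```python
# Polyphonic aftertouch: status byte 0xA0 | channel, then note, then pressure.
# Channel aftertouch (the common kind): status 0xD0 | channel, then one
# pressure value shared by every held note.

def poly_aftertouch(channel, note, pressure):
    """Build a 3-byte MIDI polyphonic aftertouch message."""
    if not (0 <= channel < 16 and 0 <= note < 128 and 0 <= pressure < 128):
        raise ValueError("value out of MIDI range")
    return bytes([0xA0 | channel, note, pressure])

def channel_aftertouch(channel, pressure):
    """Build a 2-byte MIDI channel aftertouch message."""
    if not (0 <= channel < 16 and 0 <= pressure < 128):
        raise ValueError("value out of MIDI range")
    return bytes([0xD0 | channel, pressure])

# Two pads pressed with different force at the same time, channel 1:
msg_hard = poly_aftertouch(0, 60, 100)  # middle C, pressed hard
msg_soft = poly_aftertouch(0, 64, 30)   # E above it, pressed lightly
```

With channel aftertouch, that second pad would overwrite the first pad’s pressure; poly aftertouch keeps them independent.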
Novation must know a lot of their customer base use Ableton Live, as they’re quick to show off how their integration works and why those screens are handy.
Here it is in action:
We also see some cues from Native Instruments’ keyboards – the light guide indicators above the keys are copied directly, and while the pads and triggers are all Launchpad in character, we finally get a Novation keyboard with encoders and graphic displays. Unlike NI, this keyboard is still useful when the computer is shut off, though.
And wait – we’ve heard this before. It was called the AKAI Pro MAX25 and MAX49 – step sequencer built in (with faders and pads), plus MIDI, plus CV, plus remote control surface features. You just had to learn to like touch strips for the faders, and that garish racecar red. That AKAI is still worth a look as a used buy, though the hardware here is in a more standard layout / control complement, and a few years later, you get additional features.
The big rival to the Novation is probably Arturia’s KeyLab MKII. It also strikes a balance between studio hub and controller keyboard, and it comes from another maker who now produces analog synths, too. But the Novation has a step sequencer; Arturia makes step sequencers but left it out of their flagship controller keyboard.
Oh yeah, and if you just wanted an integrated controller keyboard for your DAW, Nektar have you covered, or of course you can opt for the Native Instruments-focused Komplete Kontrol. Each of those offerings also got revisions lately, so I’m guessing … a lot of people are buying keyboards.
But right now, Novation just jumped out to the front of the pack – this keyboard appears to tick all the boxes for hardware and software. And I’ll bet a lot of people are glad to do some sequencing without diving into the computer. (Even alongside a computer for tracking, that’s often useful.)
£539.99 for 49 keys; £629.99 for 61. (Both share the same layout.)
It’s been a decades-long wait, but now Moog have revealed a flagship polyphonic keyboard instrument – a new dream synth. It’s high-end, for sure, but it also reveals where the brand that became synonymous with synthesis sees us going next. We’ve talked to Moog to find out more on release day.
The last time Moog made a polysynth, Ronald Reagan was President, the Space Shuttle was the epitome of futuristic, MIDI wasn’t really even a thing, and to slightly misquote Douglas Adams, people were “so amazingly primitive that they still thought digital watches were a pretty neat idea.”
And let’s be honest. While Moog have been studiously revisiting the evolution of their polyphonic instruments, Moog are known for their monosynths, not polysynths. This could change that. Sure, the Moog One is expensive – you might still choose a poly from Novation, KORG, Arturia, or fellow American brand Sequential (now renamed to its original moniker from Dave Smith Instruments).
But it’s also beautiful, and deep. It’s going to top the wanted list of rockstars again, maybe in a way we haven’t seen since the 80s – as proven by the promo video (which features some of those same 80s synth superstars). If we still cared about print magazines graced by keyboard covers, this would have a glossy special edition devoted to it with a pull-out centerfold that let you lie in bed and stare at its front panel on your ceiling.
As for the “One” part, well, that’s more about it being the one, as in:
— well, except instead of Wayne, apparently Suzanne Ciani and Chick Corea reached that conclusion.
To celebrate, Moog have rebooted their 1976 Polymoog promo film, this time with Jeff Bhasker, Suzanne Ciani, Chick Corea, Mike Dean, Robert Glasper, Dick Hyman, Dev Hynes, Mark Mothersbaugh, Mark Ronson, Ryuichi Sakamoto, Dr. Lonnie Smith, and Paris Strother. (Hey, you left out the ghost of Liberace and the Queen of England. That’s a Jerry Lewis telethon-level cast right there.)
And given the price is $6k or $8k list, you’ll probably want to know more. So Moog are doing a first-ever AMA (Ask Me Anything) on Reddit:
Plus there’s a live stream of them building these (with another discussion to follow):
About the synth
So, what’s the big deal about this big synth?
It’s really the blockbuster follow-up to everything Moog have been doing – take the Minimoog Voyager, make every part of the analog signal path more powerful, multiply that by 8 or 16 voices (depending on which model you buy), and then turn that into three independent polysynths.
That is, the “tri-timbral” part means that you could think of this as three analog polysynths in one. Each timbre can be addressed separately, with its own sequencer, its own arpeggiator, and its own set of effects.
Three all-new dual-output analog VCOs
Ring modulation and FM
Two independent analog filters
Dual-source analog noise generator
Analog mixer with external audio input
Three envelope generators
Effects, including Eventide reverbs (more on that below)
Preset recall, with 64 performance-friendly presets loaded right from the front panel (and thousands more via the browser)
200 front panel knobs and switches
Mod Matrix for visual modulation patching (also more on that below)
Easy-access “Destination” button – hit it, tweak something, and you get instant assignment
Now, all of this matters, if you think about it.
What’s the reason people are into hardware? Easy: hands-on control. And this has a lot of it.
But why are people also buying modular? Well, in part, at least, they want deeper sound design possibilities – complex modulation that allows more sound worlds. And this does deliver a lot of that via its voice architecture and modulation offerings.
Why did manufacturers start making keyboards and not only modulars – even for people who had been big modular users? That’s easy, too – modulars don’t give you instant performance recall, and they’re (by definition) not integrated instruments. This does both of those things.
But we also see the advantage of time. We’ve come full circle to lots of one-to-one performance controls. But we also can take advantage of an integrated display, without trying to use it to replace knobs and switches. We’ve become more allergic to menu diving and hidden features. And computers have made us demand more of hardware – like those instant-assign destination buttons. This is a Moog for a time when hands-on control and depth aren’t mutually exclusive.
Let’s ask Moog
I wanted to know more about how the Moog One came about and how you play it, so here are some answers to those questions – though for more, of course, you can join the AMA thread.
Making a new polysynth was unsurprisingly on the minds at Moog. “Moog has a long history of polyphonic synthesizer development, beginning with the Moog Apollo project in 1973,” Moog tells CDM. “Although the Apollo never moved beyond the prototype stage, Keith Emerson’s use of the newly designed instrument during ELP’s Brain Salad Surgery Tour provided Moog with valuable feedback for the release of the Polymoog in 1975. During this 10 year span, 6 different takes on a polyphonic instrument were created, ending with the Moog SL-8 prototype in 1983.”
Players have never stopped asking for polys, nor has the idea ever died, Moog tell us. Some resistance came from founder Bob Moog himself, however: “In his later years, Bob was not keen on the idea of a new Moog polyphonic synth, knowing firsthand the challenges of creating one, but over the years we have been able to substantially reduce costs and have increased the stability of our analog designs to the point that creating an analog poly no longer seemed out of reach.”
So when did the Moog One start to come into being? “Officially, we began the research phase in earnest in 2013,” say Moog, “talking with artists and creators about what their vision of the ultimate Moog synthesizer would be.”
“By 2016,” Moog says, “we had the first hardware prototypes for the circuitry, with the first stages of a working Moog One prototype taking form in early 2017. Now that the Moog One has been realized, we only wish that Bob Moog was here to play the first chord.”
Okay, so how does it actually work, though? More details:
How modulation works:
Each of the Moog One’s 4 LFOs and 3 EGs have their own dedicated Destination Buttons for making modulation quick assignments on the front panel. Simply press the Destination on any LFO or EG, and the next knob you touch will set the modulation destination and amount.
For a modulation deep dive, the onboard Modulation Matrix provides immediate visual access to every possible combination of Moog One’s modulation sources, destinations, controllers, and transforms. The Modulation Matrix makes it easy to quickly program complex modulation paths while also giving an overview of all the modulation routings that have been set up in a given Preset.
What about the Eventide reverbs?
It sounds like two come from favorite algorithms known on the Eventide SPACE and related products:
Moog One was developed to explore what is possible in a polyphonic synthesizer, and Eventide’s breathtaking reverb technology was the right fit. The Room, Hall, Plate, Blackhole, and Shimmer reverbs are all implemented using Eventide’s world-class algorithms with a few optimizations for use in Moog One.
A direct connection to service
There are some other changes coming, too. Moog are adding a chat feature: during business hours – 9-5 Monday through Friday Eastern Time – you’ll be able to ask questions of Moog staff in North Carolina in real time. (They’re quick to remind us those are “employee owners.”)
And there’s also that mysterious Ethernet port on the Moog One. From day one, it’s there for remote diagnostics and service. But more is coming:
Now, when a musician experiences issues that typically would require shipping an instrument back to the Moog factory, we are instead able to access their Moog One remotely and run a series of tests, calibrations, and whatever else may be necessary to best service their instrument remotely, which is a huge advancement and time saver for customer, dealer and manufacturer. While we can’t talk specifics regarding future product development, we can tell you that we have plans for the Ethernet port that will open new portals of creativity for Moog One owners.
Above, top: inside the Moog factory, as the first Moogs One are completed.
Moog One is out now, for real:
As of today, Moog One is available for order through all authorized Moog Dealers worldwide. You can actually watch us building the Moog One right now through the live-stream player on the Moog website. Sweetwater will receive the first 150 units over the next few weeks, and we expect to begin shipping the Moog One to all US dealers in November, with international shipments starting shortly thereafter.
And what about those of us with budgets the Moog One doesn’t fit?
I had to ask Moog this, too – a lot of us are more in the market for $600 instruments than $6000. So what does this mean for us?
When we began development of the first polyphonic Moog analog synthesizer in over 35 years, we wanted it to be a dream-synth that pushed the limits of what is technically possible while still being an intuitive instrument for self-expression. This year we’ve released DFAM, Grandmother, and the Moog One, which are three instruments that cover a wide range of creative possibilities.
That’s fair, I think. As I’ve observed before, Moog have kept a range of products in reach of those on a budget – down to very affordable iPad/iPhone apps, but also including this other hardware. They’re releasing a fair number of products for a mid-sized manufacturer (compared to tiny boutique shops at one end, or mighty Japanese makers at the other). And since they first came up with their crazy Keith Emerson modular relaunch, while we have seen big-ticket rockstar items, those do appear to drive creation of more affordable analog gear and other devices and apps for the rest of us.
The Moog One will have a lot to live up to, because of its price, because of its obvious ambition, but mostly because of its name. But this looks tantalizing – a Moog poly that could be worth the wait.
Ableton Live: lug along hardware, or … be forced to use a mouse or touchpad. No more: touchAble Pro continues to unlock more and more of Live’s functionality, and now it’s available across touch platforms – iOS, Android, Windows.
That last bit in itself is already news. iPad owners have had plenty of great stuff, but … what if you’ve got an Android phone instead of an iPhone? Or a Microsoft Surface? Or what if you want controls to jam on a big touchscreen display – in the studio, for instance?
It’s possible to target all three of those platforms; the fact many developers haven’t tells you they haven’t yet figured out the business case. But with Ableton Live a massive platform, numbering millions of active users, and use cases that focus on making things happen, uh, “live,” the touchAble devs could have a winner.
And whichever platform you choose, there’s simply no way to put this much control of Ableton Live at your fingertips, with this much visual feedback. We covered this release in full earlier:
But here’s a recap of why it’s cool, whether you’re a returning user or new to the platform:
Piano Roll editing (top), and custom Devices (bottom).
Audio clip view with waveforms, including side-by-side waveforms
Piano roll view for pattern editing
Draw and edit automation
Custom layouts with Template Editor
Custom Device templates (even with third-party plug-ins and Max for Live, via an In-App Purchase coming soon)
And this matters. Now you can quickly whip up a custom template that shows you just what you need to see for a live performance – without squinting (it’s all scalable). Add in side-by-side waveforms to that, and you could twist Live into a DJ tool – or certainly a more flexible live performance tool, especially if you’ve got other instruments or vocals to focus on.
Plus a lot of other good stuff:
Transport, metronome, cues, and quantization
Clips and scenes and control looping
Arm, mute, and solo tracks
Mix, pan, crossfade, and control sends and returns
Play instruments with grid or piano-style layouts, with scales, note repeat, aftertouch, and velocity (based on finger position)
Control device parameters, using faders or assignable X/Y pad modules
X/Y Pad: assign physics, make and morph snapshots, or record full gestures
Navigate Live’s Browser, and drag and drop Devices or Samples to the set
Enlarge stuff – like this clip overview – and make the custom layout you need.
Side by side waveforms, and a bunch of clip options. Oh yeah.
Touch on Windows isn’t just about devices like Surface – it’s also big touch-equipped displays, so ideal for studio work.
Three new videos are out now to walk you through how it’s all working.
What if music were made mechanically, with giant wheels and bellows and valves? The Mammoth Beat Organ makes that happen, using parts from toilets, a hearse, and a treadmill.
Yes, it has balloons connected by tubes and something called a “wind sequencer” with pegs and … it sounds like a calliope that’s gone a bit mental. And it comes with roll-on “modules” so you can add different layers of sound (like mechanically played drums). Watch:
It’s the Dunning Underwood Mammoth Beat Organ, the creation of two wild musical minds – Sam Underwood and Graham Dunning – in their first collaboration. It has the sonic thinking of the Giant Feedback Organ (Underwood) and the mechanical performance approach of Mechanical Techno (Dunning). And accordingly, it’s even meant to be a two-player contraption, involving both artists.
That performance spectacle is really part of the magic, as components are wheeled around and bits and bobs added and subtracted. Having seen Graham’s live show, that performance energy drives things in a way you wouldn’t get from just an installation – it has improvisation in it.
More on how this works – in particular, still more deep research into historical instruments and the alternative histories they suggest, plus how the back of a hearse and a treadmill made it into the construction:
This project is just getting going, so it’ll be fun to watch it evolve – especially if we get to see it in person.
It’s worth noting that they talk about the need to have years and years to continue building and rehearsing with the invention. We of course value novelty in tech, but that’s telling, whatever your fantasies are (whether large and mechanical or compact and digital or anything else). So I do hope they’ll keep us posted as they continue developing, and as they use this instrument to spark new creative directions in their own imaginations.
The video at top is shot and explained by Michael Forrest of Michael & Ivanka’s Grand Podcast – well worth a listen: http://grandpodcast.com
I’m not a fan of YouTube and the next videos it plays, but following this with Sir Simon Rattle conducting Chariots of Fire with Mr. Bean sure as hell works. In case you need some motivation for today’s soldering / hammering DIY instruments, have at it.
Max 8 is released today, as the latest version of the audiovisual development environment brings new tools, faster performance, multichannel patching, MIDI learn, and more.
MC multichannel patching.
It’s always been possible to do multichannel patching – and therefore support multichannel audio (as with spatial sound) – in Max and Pure Data. But Max’s new MC approach makes this far easier and more powerful.
Any sound object can be made into multiples, just by typing mc. in front of the object name.
A single patch cord can incorporate any number of channels.
You can edit multiple objects all at once.
So, yes, this is about multichannel audio output and spatial audio. But it’s also about way more than that – and it addresses one of the most significant limitations of the Max/Pd patching paradigm.
Synthesis approaches with loads of oscillators (like granular synthesis or complex additive synthesis)? MC.
MPE assignments (from controllers like the Linnstrument and ROLI Seaboard)? MC.
MC means the ability to use a small number of objects and cords to do a lot – from spatial sound to mass polyphony to anything else that involves multiples.
It’s just a much easier way to work with a lot of stuff at once. That was present in open code environment SuperCollider, for instance, if you were willing to put in some time learning SC’s code language. But it was never terribly easy in Max. (Pure Data, your move!)
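To make that a bit more concrete, here’s roughly what MC looks like in a patch – a text sketch of objects and connections, not a runnable patcher file, with arguments chosen for illustration (check Cycling ’74’s MC documentation for the actual object set and attributes):

```
[mc.cycle~ 440]     <- typing "mc." before cycle~ gives you a whole bank of oscillators
       |            <- this single patch cord carries every channel at once
[mc.*~ 0.1]         <- per-channel gain, still just one object and one cord
```

From there you route or mix those channels down to your outputs; the channel count lives on the object (via an attribute) rather than in a forest of duplicated objects and cords, which is exactly the economy the MC approach is about.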
Mappings lets you MIDI learn from controllers, keyboards, and whatnot, just by selecting a control, and moving your controller.
Computer keyboard mappings work the same way.
The whole implementation looks very much borrowed from Ableton Live, down to the list of mappings for keyboard and MIDI. It’s slightly disappointing they didn’t cover OSC messages with the same interface, though, given this is Max.
Max 8 has various performance optimizations, says Cycling ’74. But in particular, look for 2x (Mac) – 20x (Windows) faster launch times, 4x faster patching loading, and performance enhancements in the UI, Jitter, physics, and objects like coll.
Also, Max 8’s Vizzie library of video modules is now OpenGL-accelerated, which additionally means you can mix and match with Jitter OpenGL patching. (No word yet on what that means for OpenGL deprecation by Apple.)
There’s full NPM support, which is to say all the ability to share code via that package manager is now available inside Max.
Patching works better, and other stuff that will make you say “finally”
Actually, this may be the bit that a lot of long-time Max users find most exciting, even despite the banner features.
Patching is now significantly enhanced. You can patch and unpatch objects just by dragging them in and out of patch cords, instead of doing this in multiple steps. Group dragging and whatnot finally works the way it should, without accidentally selecting other objects. And you get real “probing” of data flowing through patch cords by hovering over the cords.
There’s also finally an “Operate While Unlocked” option so you can use controls without constantly locking and unlocking patches.
There’s also a refreshed console, color themes, and a search sidebar for quickly bringing up help.
And additionally, essential:
High definition and multitouch support on Windows
UI support for the latest Mac OS
And of course a ton of new improvements for Max objects and Jitter.
What about Max for Live?
Okay, Ableton and Cycling ’74 did talk about “lockstep” releases of Max and Max for Live. But… what’s happening is not what lockstep usually means. Maybe it’s better to say that the releases of the two will be better coordinated.
Max 8 today is ahead of the Max for Live that ships with Ableton Live. But we know Max for Live incorporated elements of Max 8, even before its release.
For their part, Cycling ’74 today say that “in the coming months, Max 8 will become the basis of Max for Live.”
Based on past conversations, that means that as much functionality as can practically be delivered in Max for Live will be there. And with all these Max 8 improvements, that’s good news. I’ll try to get more clarity on this as information becomes available.
Max 8 now…
There’s a 30-day free trial. Upgrades are US$149; the full version is US$399, plus subscription and academic discount options.
Full details on the new release are neatly laid out on Cycling’s website today:
Deep in the Arctic Circle, the USSR was drilling deeper into the Earth than anyone before. One artist has combined archaeology and invention to bring its spirit back in sound.
Meet SG-3 (СГ-3) — the Kola Superdeep Borehole. You know when kids would joke about digging a hole to China? Well, the USSR’s borehole got to substantial depths – 12,262 m (over 40,000 ft) at the time of the USSR’s collapse.
The borehole was so epic – and the Soviets so secretive – that it has inspired legends of seismic weapons and even demonic drilling. (A YouTube search gets really interesting – like some people who think the Soviets actually drilled into the gates to Hell.)
Artist Dmitry Morozov – ::vtol:: – evokes some of that quality while returning to the actual evidence of what this thing really did. And what it did is already spectacular – he compares the scale of the project to launching humans into space (well, sort of in the opposite direction).
vtol’s installation 12262 is the perfect example of how sound can be made material, and how digging into history can produce futuristic, post-contemporary speculative objects.
The two stages:
Archaeology. Dima absorbed SG-3’s history and lore, and spent years buying up sample cores at auctions as they were sold off. And twice he visited the remote, ruined site himself – once in 2016, and then back in July with his drilling machine. He even located a punched data tape from the site, though of course it’s difficult to know what it contains. (The investigation began with the Dark Ecology project, a three-year curatorial/research/art project bringing together partners from Norway, Russia, and across Europe, and still bearing this sort of fascinating fruit.)
Invention: The installation itself is a kinetic sound instrument, reading the coded information from the punch tape and operating miniature drilling operations, working on actual core samples. The sounds you hear are produced mechanically and acoustically by those drills.
As usual, Dima lists his cooking ingredients, though I think the sum is uniquely more than these individual parts. It’s as he describes it, a poetic, kinetic meditation, evocative both intellectually and spiritually. That said, the parts:
Commissioned by NCCA-ROSIZO (National Centre for Contemporary Arts), specially for the TECHNE “Prolog” exhibition, Moscow, 2018.
Curators: Natalia Fuchs, Antonio Geusa. Producer: Dmitry Znamenskiy.
The work was also a collaboration with Gallery Ch9 (Ч9) in Murmansk. That’s itself something of an achievement; it’s hard enough to find media art galleries in major cities, let alone remote Russia. (That’s far enough northwest in Russia that most of Finland and all of Sweden are south of it.)
But the alien-looking object also got its own trip to the site, ‘performing’ at the location.
It’s appropriate that would happen in Russia. Cosmism visionary Nikolai Fyodorovich Fyodorov and his ideas about creating immortality by resurrecting ancestors may seem bizarre today. But translate that to media art, which threatens to become stuck in time when not informed by history. (Those who do not learn from history are doomed to make installation art that looks like it came from a mid-1990s Ars Electronica or Transmediale, forever, I mean.) To be truly futuristic, media art has to have a deep understanding of technology’s progression, its workings, and all the moments in the past that were themselves ahead of their time. That is, maybe we have to dig deep into the ground beneath us, dig up our ancestors, and construct the future atop that knowledge.
At Spektrum Berlin this weekend, there’s also a “materiality of sound” project. Fellow Moscow-based artist Andrey Smirnov will create an imaginative new performance inspired by Theremin’s infamous KGB listening device of the 1940s – also new art fabricated from Soviet history – joined by a lineup of other artists exploring similar themes making sound material and kinetic. (Evelina Domnitch and Dmitry Gelfand, Sonolevitation, Camera Lucida, Eleonora Oreggia aka Xname share the bill.)
To me, these two themes – materiality, drawing from kinetic, mechanical, optical, and acoustic techniques (and not just digital and analog), and archaeological futurism, employing deep historical inquiry that is in turn re-contextualized in forward-thinking, speculative work – offer tremendous possibility. They sound like more than just a zeitgeist-friendly buzzword (yeah, I’m looking at you, blockchain). They sound like something to which artists might even be happy to devote lifetimes.
For another virtual trip to the borehole, here’s Rosa Menkman’s film on a soundwalk at the site in 2016.
Related (curator Natalia Fuchs, interviewed before, also curated this work):
Espills is a “solid light dynamic sculpture,” made of laser beams, laser scanners, and robotic mirrors. And it makes a real-life effect that would make Tron proud.
The work, made public this month but part of ongoing research, is the creation of multidisciplinary Barcelona-based AV team Playmodes. And while large-scale laser projects are becoming more frequent in audiovisual performance and installation, this one is unique both in that it’s especially expressive and a heavily DIY project. So while dedicated vendors make sophisticated, expensive off-the-shelf solutions, the Playmodes crew went a bit more punk and designed and built many of their own components. That includes robotic mirrors, light drawing tools, synths, scenery, and even the laser modules. They hacked into existing DMX light fixtures, swapping mirrors for lamps. They constructed their own microcontroller solutions for controlling the laser diodes via Artnet and DMX.
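Driving DMX fixtures over Art-Net, as the Playmodes microcontrollers do, is less exotic than it sounds: an ArtDMX packet is just a short header plus up to 512 channel bytes over UDP. A minimal sketch in Python, using only the standard library – this illustrates the packet format generally, not Playmodes’ actual firmware, and the “mirror pan” mapping is made up:

```python
import socket
import struct

def artdmx_packet(universe, channels, sequence=0):
    """Build an Art-Net ArtDMX packet carrying one DMX universe."""
    data = bytes(channels)
    if not 1 <= len(data) <= 512:
        raise ValueError("DMX frame must be 1-512 channel values")
    return (
        b"Art-Net\x00"                 # fixed protocol ID
        + struct.pack("<H", 0x5000)    # OpCode: ArtDMX (little-endian)
        + struct.pack(">H", 14)        # protocol version (big-endian)
        + bytes([sequence, 0])         # sequence number, physical port
        + struct.pack("<H", universe)  # 15-bit port-address (SubUni + Net)
        + struct.pack(">H", len(data)) # payload length (big-endian)
        + data
    )

# Hypothetical: set channel 1 (say, a mirror's pan motor) to half travel:
packet = artdmx_packet(0, [127] + [0] * 511)
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("2.255.255.255", 6454))  # Art-Net travels on UDP port 6454
```

The send is commented out since it needs a fixture on the network, but the packet itself is exactly what a DMX node expects.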
And, oh yeah, they have their own visual programming framework, OceaNode, a kind of home-brewed solution for imagining banks of modulation as oscillators, a visual motion synth of sorts.
It’s in-progress, so this is not a Touch Designer rival so much as an interesting homebrew project, but you can toy around with the open source software. (Looks like you might need to do some work to get it to build on your OS of choice.)
Typically, too, visual teams work separately from music artists. But adding to the synesthesia you feel as a result, they coupled laser motion directly to sound, modding their own synth engine with Reaktor. (OceaNode sends control signal to Reaktor via the now-superior OSC implementation in the latter.)
They hacked that synth engine together from Santiago Vilanova’s PolyComb – a beautiful-sounding set of resonating tuned oscillators (didn’t know this one, now playing!):
… and the DIY OSC VST plug-in, to allow easy automation from a DAW (Reaper, in this case).
It’s really beautiful work. You have to notice that the artists making best use of laser tech – see also Robert Henke and Christopher Bauder here in Berlin – are writing some of their own code, in order to gain full control over how the laser behaves.
I think we’ll definitely want to follow this work as it evolves. And if you’re working in similar directions, let us know.
It’s a kind of love letter to humanity – and strikingly personal, both in the music and imagery, finding some hopeful part of the current zeitgeist. Don’t miss the new “Lovesong” video, made by music producer Max Cooper in collaboration with artist Kevin McGloughlin.
There’s a lot of bombast both in mainstage electronic music and in music videos – shouting being one way of attempting to get above the din of media in our world. But what’s engaging about “Lovesong” is that it does feel so intimate. There’s the unaffected, simple chord line, wandering around thirds like a curious child, slow fuzzy beats ambling in and out with just enough bits of sonic icing and flourish. And the video, composed of a stream of user-submitted faces, manages to make those faces seem to gaze back through the screen. It’s where we’ve come at last: the visual effects aren’t so gimmicky any more, but seem like a natural first language. (Compare the fanfare with which Michael Jackson’s “Black or White” arrived – see below.)
That is, while this video is surely fodder for design blogs and … well, this one … I suspect it’ll spread passed from one person to another, more on the human level suggested by the video.
The visual work, by the way, comes from fellow Irishman and animator/filmmaker Kevin McGloughlin, a self-taught artist and director. Here’s what Max and Kevin have to say for themselves:
“My new album, one hundred billion sparks, is out today, so it seemed a good day to also launch the music video which you created. The whole thing is built from photos which were submitted by those of you who listen to my work, so many thanks for that, and I hope you can spot your face in there!
The topic itself was a difficult one to approach, as so much popular music is written about love that it seems to have become more of an exercise in sales than anything authentic. So it’s a topic I’ve always avoided, but one that came naturally during the process of creating this album about the mind and what creates us, especially with the arrival of a new tiny person at the end of the writing period.
I chatted to Kevin McGloughlin about how we could visualise the idea in a general sense, and we decided that imagery of the human face would be the way to do it. Kevin had the great idea of setting up a platform for the viewers of the video project to submit their own photos to build it from, so as to make the video a personalised, and more meaningful rendering of the love song. Then Kevin worked his magic with the photos creating a beautiful complex blending and processing of stills. Many thanks again to all of you who submitted your photos, and I recommend scanning through to find yourself in there and getting screenshot. It’s amazing how much is going on when you slow down the video to look at the individual frames too, hats off to the awesome Kevin McGloughlin once again!”
– Max Cooper
– – –
“I am completely honoured to have worked directly with so many people for this global portrait.
It was a great experience of collaboration and though some of the images are less prominent than others, each and every image was as instrumental and important as the next in creating the final piece.
When Max told me about his vision for ‘Love Song’, “love of the species”
I immediately had the idea to include real people and real moments in the video.
We asked for submissions and images flooded in from people all over the world, and work got under way.
This video is like a postcard for me. Something for all the people involved.
Big thanks to all the collaborators and to Memfies who aided us in the compilation of all the initial images.”
– Kevin McGloughlin
That idea of “love of the species” recalls for me one of my favorite texts, associated with a street corner in my hometown of Louisville, Kentucky. It comes from Catholic mystic Thomas Merton, but it’s universal enough that the Dalai Lama took it as a title when visiting the city. (And it’s partly about getting away from superficial religiosity.)
“In Louisville, at the corner of Fourth and Walnut, in the center of the shopping district, I was suddenly overwhelmed with the realization that I loved all those people, that they were mine and I theirs, that we could not be alien to one another even though we were total strangers. It was like waking from a dream of separateness, of spurious self-isolation in a special world, the world of renunciation and supposed holiness… This sense of liberation from an illusory difference was such a relief and such a joy to me that I almost laughed out loud… I have the immense joy of being man, a member of a race in which God Himself became incarnate. As if the sorrows and stupidities of the human condition could overwhelm me, now I realize what we all are. And if only everybody could realize this! But it cannot be explained. There is no way of telling people that they are all walking around shining like the sun.”
“Spurious self-isolation” is certainly an idea that music producers will find familiar, not only monks. But even though it’s not always easy, it’s great when we can find our way to wake from the dream of separateness and find love again – and we’re lucky to have music for those love songs.
Spotify has begun opening uploading not just to labels and distributors, but individual artists. And the implications of that could be massive, if the service is expanded – or if rivals follow suit.
On reflection, it’s surprising this didn’t happen sooner.
Among major streaming players, currently only SoundCloud lets individual artists upload music directly. Everyone else requires intermediaries, whether that’s labels or distributors. The absurdity of this system is that services like TuneCore have profited off streaming growth. In theory, that might have meant that music selections were more “curated” and less junk showed up online. In reality, though, massive amounts of music get dumped on all the streaming services, funneling money from artists and labels into the coffers of third-party services. That arrangement surely makes no sense for the likes of Spotify, Apple, Google, and others as they look to maximize revenue.
You’ll upload via a new Web-based upload tool. Check the tool and FAQ.
It’s invite-only for now. A “small group of artists” has access for testing and feedback, Spotify says.
It won’t cost anything, and access to releases will be streamlined. No fees, no commission – the deal is better financially. And you’ll be able to edit releases and delete music, which can be a draconian process now through distributors.
Regions are a big question. The tax section currently refers to the W-9, a tax form used in the USA. So clearly the initial test is US-only; we’ll see what the plans are for other regions.
You have to look into the future before this really starts to matter, because it is so limited. But it could be a sign of things to come. And bottom line, Spotify can give you a better experience of what your music will be like on Spotify than anyone else can:
You’ll be able to deliver music straight to Spotify and plan for the perfect release day. You’ll see a preview of exactly how things will appear to listeners before you hit submit. And even after your music goes live, you’ll be in full control of your metadata with simple and quick edits.
Just like releasing through any other partner, you’ll get paid when fans stream your music on Spotify. Your recording royalties will hit your bank account automatically each month, and you’ll see a clear report of how much your streams are earning right next to the other insights you already get from Spotify for Artists. Uploading is free to all artists, and Spotify doesn’t charge you any fees or commissions no matter how frequently you release music.
The question really is how far they’ll expand, and how quickly. If they use all of Spotify for Artists, as their blog news item would seem to imply, then some 200,000 or so verified artist accounts will get the feature. (I’m one of those accounts.) 200,000 artists with direct access to Spotify could change the game for everyone.
The potential losers here are clear. First, there are the distributors. So-called “digital distribution” at this point really amounts to nothing of the sort. While these third parties will get your music out to countless streaming services, for most artists and labels, only the big ones like iTunes and Spotify matter to most of their customers. At the entry level, these services often carry hefty ongoing subscription fees while providing little service other than submitting your music. More personalized distributors, meanwhile, often require locking in multi-year contracts. (I, uh, speak from experience on both those counts. It’s awful.)
Even the word “distributor” barely makes sense in the current digital context. Unlike a big stack of vinyl, nothing is actually really getting distributed. More complete management and monetization platforms actually do make sense – plus tools to deal with the morass of social media. Paying a toll to a complicated website to upload music for you? That defies reason.
The second potential loser that comes to mind is obviously SoundCloud. Once beloved by independent producers and labels, that service hasn’t delivered much on its promise of new features for its creators. (Most recently, they unveiled a weekly playlist that seems cloned from Spotify’s feature.) And SoundCloud’s ongoing popularity with users was dependent on having music that couldn’t be found elsewhere. If artists can upload directly to Spotify, well … uh, game over, SoundCloud. (Yeah, you still might want to upload embeddable players and previews, but other services could do that better.)
Just keep in mind: Spotify for Artists had 200,000 users at the beginning of summer. As of 2014, SoundCloud claimed 10 million creators. So it’s not so much SoundCloud losing as it is another sign that SoundCloud won’t really take on Spotify – just as Spotify (even with this functionality) really doesn’t even attempt to take on SoundCloud. They’re different animals, and it’s frustrating that SoundCloud hasn’t done more to focus on that difference.
But all this still remains to be seen in action – it’s just a beta.
Just remember how this played out the first time. Spotify reached a critical mass of streaming, and Apple followed. If Spotify really are doing uploads, it’d make sense for Apple to do the same. After all, Apple makes the hardware (MacBook Pro, iPad) and software (GarageBand, Logic Pro X) a lot of musicians are using. And they attempted to capitalize on their strong relationships with artists once before, with the poorly designed Connect features (touted by Trent Reznor, no less). They just lag Spotify in this area – with the beta Apple Music for Artists and Apple Music Toolbox.
Meanwhile, I wouldn’t write off labels or genre-specific stores just yet. If you’re making music in a genre for a more specific audience, dumping your music on Spotify where it’s lost in the long tail is probably exactly what you don’t want to do. Streaming money from the big consumer services just isn’t reaching lesser known artists the way it is the majors and big acts. So I suspect that, perversely, the upload feature could lead to an even closer relationship between, say, electronic producers and labels and the services tailored to their needs, like Beatport. (Beatport’s own subscription offerings are expected soon.)
But does this make sense? It sure does for the streaming service. Giving the actual content makers the tools to upload and control tags and other data should actually reduce labor costs for streaming services, entice more of the people making music, and build catalogs.
And what about you as a music maker? Uh, well… strap in, and we’ll find out.