John Bowen’s Solaris synthesizer is an amazingly flexible machine. While most playable keyboard synthesizers have a fixed (and simple) signal path derived from old analog synths, the Solaris is a semi-modular design. Like a real modular synthesizer, you can patch its various components together in flexible ways. It is not as infinitely flexible as a modular, but for practical purposes, there is little you can’t do.
The key to this flexibility is the four mixers. While the synth is “hard-wired” from oscillator to mixer to filter to VCA by default for each of the four voices, this is not a path you have to follow. Each mixer can take sound from four inputs, and those inputs can be anything that makes or processes audio in the synth: oscillators, rotors, filters, insert FX, even the VCAs. This means you can reconfigure the signal path in almost any way and blend any combination of feedback or sequential processing you can think of. Each mixer input (and the master out) can be modulated by any control parameter of the system, enabling full control of any of these audio signals.
This flexibility is dizzying at first. Every section of the Solaris is full of options, and then you can combine them in almost any way. How do you learn to use this beast? Well, one easy way is to arrange the signal path to mimic a successful historic analog synth. The Solaris can easily create the signal path of almost any synth you either owned or always wanted to own. This is easy to believe if we pick something simple like a MiniMoog, or a Jupiter 6, but the reality is that the Solaris has resources to be far more ambitious!
Let’s consider the legendary Yamaha CS-80 polysynth, which is perhaps the most desirable analog polysynth ever made. Functional used copies sell for upwards of $20,000 if you can find one at all. Maintenance is not cheap or easy. The synth provided 8 notes of polyphony and full polyphonic aftertouch. The Solaris can mimic the architecture and provide 10 voice polyphony, with polyAT (though you might need a Roli Seaboard to get the PolyAT going…). Here’s the voice architecture of a CS-80 – Yamaha was kind enough to print it on the top of the instrument.
There are two identical voices that can be programmed, so in effect there are two of this signal chain to represent. The Solaris does not perfectly re-create this, but it can get very close (it even includes ring modulation in the Amplitude Modulation section!). And its modulation matrix is vastly more capable than the CS-80’s, so in most respects the Solaris actually exceeds the original. Let’s see how we can lay this out in the Solaris, using the mixers to bring the components into the right order.
This layout recreates the modulation control present in the CS-80. If you want access to the Ring Modulator in the Solaris, it is available in either Mixer. The Ring Modulator is in the AM1/AM2 section of the synth (hit the MORE button on the global screen). You then access the audio by selecting AM1 or AM2 as the input to one of the four mixer channels. In most patches, I’d replace the Noise input with the AM1/AM2 input, but this is clearly flexible per voice and per patch. I highly recommend adding a bit of the Ring Modulation – maybe 5% strength, modulated by the wheel, and then mixed in at low volume, maybe 3-5 in the Mixers. It really adds a nice subtle harmonic complexity that changes wonderfully with the wheel.
The Solaris has near-infinite modulation possibilities beyond this, but this patching arrangement will allow exploration of the core CS-80 sound palette. What I did was set this up and save it as a patch. I don’t edit that patch; I use it as a template for other patches.
Clearly, this same approach can be taken toward other classic synths. If we can model a CS-80, a Jupiter-8 and many others are all possible. In fact, this is a great way to learn the Solaris. Every component of the Solaris is more capable than its CS-80 counterpart: the oscillators can play digital wavetables, the filters offer many types beyond 12/24dB HPF/LPF, and the modulation matrix and control inputs are many times larger. But the voice architecture is proven and will produce playable sounds at every turn. The wonderful thing about doing this exercise on a Solaris is that you can start with a known, proven architecture, and then, when you want more control, use the extra facilities of the Bowen to modulate the attack portion of the envelope with velocity, use the rotors, and so on.
I’ve never owned or played a real CS-80, and may never get to. This post is not arguing that the Solaris is going to produce so authentic a sound that CS-80 ownership is rendered moot. Instruments have a gestalt – including the Solaris – that isn’t going to be recreated on something else. What this post does suggest is exploring the sound space that instrument was capable of making, and then using that as a point of departure to do things that the CS-80 was never capable of. By replicating, and then exceeding the architecture of previous synths, it is possible to creatively explore new territory.
This is not a post with a lot of my own thought attached. I referenced George Howard’s work earlier this week; this content was originally published on Forbes, I believe. It is some of the best thinking I’ve seen in print about how a future can exist outside the current mega-label hegemony. Those of us who are not selling – and may never sell – millions of records are not well served by the existing royalty, tracking, payment, and rights-administration schemes. What is proposed here would be transformative for composers, songwriters, indie labels, and just about anyone who makes things that exist in digital form. I don’t know how close or far off this is, but the ideas are important and worth discussing and advocating for.
Without further ado, here are links to the articles. Personally, I wish George Howard & Imogen Heap success in what they are championing. It is refreshing to see new thought on these subjects that is not just echo-chamber material.
This is not a new interview, but I just came across it and found it to be insightful and challenging. You can enjoy a transcript here and then watch the video:
The Axoloti Indiegogo project is extremely interesting to any synthesizer-oriented musician. Details here:
There are software modular synths – Native Instruments Reaktor comes immediately to mind, and is extremely deep and powerful; arguably one could do almost anything in it. This project is interesting because once you make a patch, it is compiled and loaded into the onboard memory of the circuit board. The unit is then a stand-alone hardware synth that plays that sound – no computer, no MainStage, no VSTs or AUs. It is roughly 10-voice polyphonic on a 3-oscillator patch, which is really quite good.
As an open source project, all the source code is available, so anyone can modify, add, and extend it as interested.
Some will use this as a guitar effects box, or a delay unit, etc. I will use it as a small stand-alone synth voice that adds polyphony without burdening any of my computers or other synths. I can build a signature sound into it, and then that sound is always available! I suspect that some kind of patch storage is coming, but even if not, the rich data from the Seaboard should be able to drive a very expressive lead sound or something cool.
The ability to have a truly modular and flexibly patched synthesizer for a mere 60 Euro is a great deal, and I’m looking forward to getting mine. The campaign is fully funded, so if you are interested, place an order and get one of these while they are available.
My year is going to be spent trying to actualize a project I’ve been working on for some time, a project I refer to as “the DSO”. The DSO is an ongoing project to explore what happens if you take the control surface of a pipe-organ and mate it with a raft of synthesizers. When expressed this simply, the idea is certainly nothing new.
So, while there is little I claim as unique in terms of the concept, the execution will certainly have a personal touch. The idea is appealing to me on several fronts.
1) The pipe organ has the most evolved control surface of any keyboard instrument. Pipe organs have been around a long time – over 900 years. For most of their life, they were among the great mechanical feats of mankind and among the most complex machines built. The system of pistons, stops, divisions, couplers, etc. provides the most developed control paradigm for selecting and combining sounds. It seems obvious to me that this control surface design would work with a raft of synthesizers as the tone-generating facility. It also allows for full use of all of one’s limbs, much like the drums, and unlike almost every other instrument.
2) The organ’s tonal design is based on the harmonic series that underlies all music. Artful combination and manipulation of the harmonic partials is where timbre comes from in all instruments, not just pipe organs, where it is clearly marked on the stop tab. Synthesizers can manipulate harmonics easily, so it seems useful to think about combining the two ideas. I also like the idea of “voicing” my synth patches into a unified instrument. Different pipe organ builders had a “sound” they were known for. Being an individual of peculiar taste, I ought to be able to have a “sound” that is mine – one that can go from big to small, high to low, and encompass many moods and emotions. I want it all playable from one console in continuous fashion.
3) Organs are massively polyphonic – holding down a single key can sound dozens of pipes depending on what stops, couplers, and combinations are drawn. Modern multi-core computers like a MacPro routinely deliver 1400-2000 voices of polyphony in orchestral sample-based composing setups. Hardware synths like my Bowen Solaris or the Dave Smith Instruments Prophet-12 can deliver about 48 oscillators (voices). It is just now possible to combine these tools into setups that can support the polyphony demands of the “King of Instruments”, so it seems intuitive to me that it SHOULD be done. If two to three computers can support a full virtual orchestra, one to three of the same computers should be able to offer a number of digital oscillators that matches the pipe count of a traditional pipe instrument. In my version of the game, it is all about oscillators. How good are they, what can they do harmonically, and how many can I run? The initial build this year will have 48 or so “ranks” – each an instance of a synthesizer or a patch to an external synth like the Solaris. 10 stops pulled x 5 notes played x 3 (super and sub octave couplers engaged) x 2-4 osc/voice + oscillators for notes in the release tail, and the polyphony for even one manual will be over 500 voices/oscillators! If another division is coupled, this could easily double or triple.
4) If, in a takeoff on Frank Zappa, the pipe organ is not dead, it certainly smells funny. Pipe organs are locked up in institutions, all but unavailable to play, and expensive enough in cost and space that the best one can have at home is a virtual instrument like Hauptwerk. The core repertoire for the instrument is mostly 300-400 years old (except for the 150-year-old French material). The organ needs innovation. Sampling à la Hauptwerk will preserve the sound and traditional control surface, but the instrument needs to evolve. The organ needs to be portable. It needs to be more expressive. It can easily have a broader sound palette. I love the sound, the majesty, and the gut-shaking presence of the instrument. I love good reed stops, a trompette en chamade, and a great contra-bassoon stop as well as the next organ nut. But more is possible in 2015.
5) While Bach’s children and contemporaries may have found his musical style borderline dated, he was at the forefront of musical technology in his day, regularly consulting with organ builders, playing at commissionings, and so on. In Northern Germany, he was the “Jordan Rudess” of his day. I firmly believe that if he were alive today, he would be keenly interested in all the new controller developments, synths, etc. I know it is not universally true, but don’t all musicians want to explore sound and timbre and have more expressive control? I do – I want direct access to and full control over the building blocks of sound. I think the intersection of synths and the new polyphonic multi-dimensional control surfaces offers this possibility for the first time in history.
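The polyphony arithmetic in point 3 above is easy to sketch. Here is a quick Python back-of-the-envelope, where every figure is a planning assumption of mine, not a measurement:

```python
# Rough polyphony estimate for one manual of the DSO.
# All figures here are my own planning assumptions, not measurements.
stops_drawn = 10          # stops pulled on the manual
notes_held = 5            # a five-note chord
coupler_factor = 3        # unison + super-octave + sub-octave couplers
osc_per_voice = 2         # conservative oscillators-per-voice figure

sounding = stops_drawn * notes_held * coupler_factor * osc_per_voice
print(sounding)           # 300 oscillators sounding at once

# Add headroom for notes still ringing in their release tails
# (the 1.75x multiplier is an assumption for overlapping releases):
release_headroom = 1.75
total = int(sounding * release_headroom)
print(total)              # 525 -- already past 500 voices for one manual
```

Couple in another division and these numbers double or triple, which is exactly why the voice budget drives the whole design.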
So, taken together, I am distilling a hybrid instrument from the rich assortment of tools and software commercially available, modifying them to suit my purpose. Two years ago, I built a MIDI footboard with substantial help from my father-in-law. The playing surface is rescued from an old theater organ, built in 1918, that was in pieces. The red shell is my design. The foot pistons and expression shoes are all from real pipe-organ supply companies and are designed for heavy institutional use over decades of performance. The main contact edges with the floor are all covered with aluminum corners. I tried as much as possible to make it road-worthy and durable. It is not light – probably 80-100 lbs.
Inside the footboard, the electronics are sourced from Roman Sowa at his site. I pulled everything together inside the back, added a power supply and MIDI interface jack, and I have a 30-note concave and radiating pedal board with 20 toe-pistons and three expression pedals – all sending MIDI.
The camera was not square for the picture, so you’ll have to take my word for it that the unit is actually square, not asymmetrical! It is constructed of poplar and is quite sturdy. I mocked up all the measurements and angles in SketchUp before construction to make sure everything would work, and the build came out accurate to 1/32nd of an inch – a real win for planning.
I had experimented with putting the VAX-77 and a cheap 49-note controller over it, running some sounds in Omnisphere’s “Live” mode, but it never really caught on with me – the vision was not to play piano on an organ console, and I need MUCH more polyphony, a real stop system, etc. My hopes for the Seaboard, however, are turning out quite nicely. I have started assembling the DSO with my Seaboard and am thrilled with the results.
Visually, the Seaboard is a thin, elegant line (not a chunky hardwood console built into a room). Touch-wise, the Seaboard all but demands an articulate legato touch that meshes well with the idea of organ playing. Crucially, though, it allows much more than that, offering continuous pressure and pitch sensing across all 10 fingers – a radical improvement on organ manuals. I like practicing and playing on this setup, and I only got it set up and playable this week!
The bench in this photo is last year’s project. The seating surface is a gorgeous 2″ slab of Wenge (an African exotic hardwood) inlaid with thin strips of maple by my father-in-law. The stainless steel leg system was custom-made by a local metal shop to my specifications. The shop owner and my father-in-law conspired to deliver the finished bench on my birthday last year! That was a special surprise! The bench has a “preset” for exactly the height I need the bench at, and then is adjustable in 1/4″ increments with fine adjusters on the feet. Like the pedal-board it is solid, substantial, and functional.
The Seaboard and footboard send MIDI back to the MacPro in the road-case. That unit gives me 12RU of rackspace, plus the MacPro. The MIDI goes into a growing Max patch that routes the data to a stop system I’m building inside Max. Each of the toe pistons sends out MIDI CC data (0 or 127), and is easily mapped. The pedal shoes send CC1, CC2, or CC3 (0-127) and can easily be re-mapped to anything inside Max. I will be hosting all the synthesizers in Vienna Ensemble Pro because it is scalable, efficient, and light-weight. I know this supports 1400-1600 voices easily in an orchestral context, so it is a good place to start.
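As a sketch of what that routing logic does – Python here purely for illustration, since the real version lives in a Max patch; the CC numbers match my hardware, but the stop and target names are hypothetical placeholders:

```python
# Sketch of the footboard's control mapping -- the real logic lives in Max.
# CC numbers match my hardware; the target names below are hypothetical.

# Toe pistons send CC values of 0 or 127: treat them as on/off toggles.
def piston_pressed(cc_value):
    return cc_value >= 64

# Expression shoes send CC1/CC2/CC3 with the full 0-127 range;
# each can be remapped to whatever parameter a division needs.
shoe_targets = {1: "swell_volume", 2: "crescendo", 3: "solo_volume"}

def route_shoe(cc_number, cc_value):
    target = shoe_targets.get(cc_number)
    if target is None:
        return None
    # Normalize 0-127 to 0.0-1.0 for the synth hosts.
    return (target, cc_value / 127.0)

print(route_shoe(2, 127))  # ('crescendo', 1.0)
```

The whole point of doing this in software is that the mapping table is just data: re-assigning a shoe or piston is a one-line change rather than rewiring.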
I’ll go into more detail on the MIDI data flow in further posts as I refine the stop system, build couplers, etc. The crowning bit of Max code will be a proper crescendo pedal!
As the year progresses, I hope to add a second manual in the form of another Seaboard (most visually integrated) or a grid controller like the LinnStrument or the Soundplane (most expressive potential). I think one of those could make a wonderful “Solo” division. The stop system for the Seaboard is in progress, and I should have more on that in a week or so. The other big piece will be a proper stand to fully support the Seaboard and provide a stable foundation for vibrato and pitch slides. The old A-frame stand I’m using flexes disconcertingly in that regard.
I will also discover what I need to do to support the polyphony needs of the instrument. I am hoping that I can do it all in the MacPro (8 cores/64GB of RAM), but if not, I can easily spread the load onto additional systems. That will not be hard, technically. The larger challenge will be voicing the instrument and having the various stops blend and contrast appropriately across the 9 octave range of the instrument. I expect to make liberal use of Alchemy and the Galbanum Architecture waveform set. The combination allows me to directly control harmonic partials at the oscillator level, which I will cover in more detail later.
This year will be a very busy one for me at work, and so I have needed to consider my goals carefully so as to make them achievable in the mix of all else that must occur. This year distills into two main ideas:
- Make the Digital Synthesis Organ (DSO) fully complete and playable
- Perform a recital/concert of music on it
For the first, I have been busy at work developing the console functionality in Max, and the footboard logic is already largely complete. I suspect I will have all the couplers, and even a crescendo pedal, accomplished along with the stop system by the end of February.
To complete the DSO, I need a second manual, a bunch of synthesizer programming, and a proper stand. The second manual could be another Seaboard, but will likely be a grid controller like the LinnStrument or Soundplane. I want to learn a grid controller, and the extra dimension of control would be perfect for solos on top of whatever I am doing on the pedals and Seaboard. The synthesizer programming will largely be done in Alchemy, the Solaris, Bazille, and Omnisphere, with cameo appearances from Sculpture and the “Prism” synth that comes with Reaktor. After the basic “ranks” are built, I will find useful combinations and program them into my console for performance. The A-frame stand from an old keyboard rig is a start, but cannot be the end: it has no side-to-side stability (piano playing exerts only vertical force), so it shakes substantially during vibrato moves on the Seaboard. The Haken Continuum stand is providing inspiration at the moment, though a number of other considerations will need to be addressed to provide room for the 30-note organ footboard I built. Thankfully, the gentleman who built my custom organ bench is a master metalworker and knows how to translate “artist desire” into something functional and aesthetically beautiful.
Performing on the DSO is a bit more involved, since it will be a “first concert”. There are the necessary musical preparations, plus a proper sound system to play through, a venue, and an audience. All can be sorted. Musically, I am going to prepare a selection of Christmas hymns and songs for a sing-along, using the DSO with Ableton Live and the Push to assist. The base, “guaranteed-to-work” case is that I do this in my studio, where I have everything set up and a very solid monitoring system that is flat to 27Hz, usable to 20Hz, with well over 100dB SPL at my disposal (more than is necessary). Other possibilities include my church, a nursing home, a holiday set at a local mall, etc. For those, I would want to add some PA equipment to my very capable Danley TH-112 subwoofer, which is just a delight. Reproducing the full range synthesizers can produce, at useful volumes, requires a solid PA if I am not at a typical performing arts venue.
So, that is the broad plan. The blog will no doubt follow this journey as I work through the myriad details that stand between making an instrument, learning how to play it, and organizing “a performance”. It will be an opportunity to take what I am doing full circle and present it to the world.
I have known about Cycling’74’s Max language for some time, but avoided it. While I’ve done scripting in Ruby and Perl, I’ve always had a “no programming near my music” attitude. Maybe because my day job is so technical, I’ve wanted the music separate, I don’t know. It wasn’t a closely examined position to be sure.
At some point, when it was on special, I upgraded my Ableton license to include Max for Live, and I installed some of the pre-made modules, but never really did much with it.
As I’ve been playing with the Seaboard, I have realized that most music applications are geared to treat an input as “one MIDI channel” and to steer data that way. True to their pipe-organ heritage, both Hauptwerk and jOrgan work this way: a “manual” is a MIDI keyboard set to a particular channel. Clearly this was not going to work for me.
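The mismatch is easy to see in code. The Seaboard sends each touch on its own MIDI channel (in the MPE style) so that per-note bends and pressure stay independent, while a conventional host collapses everything to one channel. A minimal Python sketch of the difference, with illustrative channel numbers:

```python
# A conventional "manual" discards the channel; the Seaboard sends each
# touch on its own channel (MPE-style), so per-note pitch bend and
# pressure stay independent. Channel numbers below are illustrative.

def single_channel_view(events):
    """What a Hauptwerk-style host expects: channel is ignored/fixed."""
    return [(note, velocity) for (channel, note, velocity) in events]

def per_note_view(events):
    """Keep the channel so later bend/pressure data can find its note."""
    return {channel: (note, velocity) for (channel, note, velocity) in events}

# Three fingers, three channels:
touches = [(2, 60, 100), (3, 64, 90), (4, 67, 95)]
print(single_channel_view(touches))  # channel identity is lost
print(per_note_view(touches))        # each note keeps its channel
```

Once the channel identity is gone, a bend message arriving on channel 3 can no longer be matched to the finger that produced it – which is precisely the data I need to keep.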
I have also used Apple’s MainStage for several years for a live rig that I play at church. It hosts all my soft synths and allows for easy set creation and quite flexible signal routing. It is not the most efficient application in the world, however, and its one-CPU-core-per-channel-strip design is often not what is needed or wanted on a modern quad-core MacBook Pro. Particularly with “Poly-Thru” patches, where the Roli app is hosting 5-10 plugins, this just doesn’t scale well.
I remembered that I had Pd-extended on my laptop and took a quick look at it. It is written by the same gentleman who wrote Max originally, and works in much the same way. There are plenty of tools for MIDI work, but there still seemed to be an assumption of filtering output down to a single MIDI channel. I suspect that could be worked around, but it was clear that Pd is primarily focused on different applications, so I went to Max.
I knew from previous explorations that Max for Live would not work because it is designed to operate inside an Ableton channel. I want my organ to be able to access 50-75 “stops” – each of which may be a soft-synth, sampler, or external MIDI connection to a real hardware synth like the Bowen Solaris. A single channel is not going to work, so I downloaded the full version of Max.
Within minutes I had the Seaboard MIDI input routed to a virtual MIDI port and on to an Absynth instance hosted in Vienna Ensemble Pro (VEP). This worked brilliantly, and CPU usage was negligible – I wasn’t asking Max to do much, and it responded by pretty much ignoring the CPU. Fast, intuitive, and easy.
VEP is the heart of most media composers’ studios. It is the best way to hook multiple computers together to freely interchange MIDI and audio across a Gigabit Ethernet network. I have a small D-Link switch in my office just for this purpose; the VEP traffic runs across a separate VLAN and connects three computers. It just works, and it works at scale. On my Mac Pro, I regularly host 58-63GB of samples spread across 9 instances of VEP and 30+ Kontakt instances. I know it performs, and it is well known for handling multi-processing well on large templates. My main orchestral template in Cubase is ~640 tracks – almost all of them hosted through VEP instances on two slave computers.
Looking at the built-in MIDI processing functions, I quickly surmised that I could do exactly what I wanted to inside Max. I could build handling and logic for all the console functions of my organ project, and easily route all the channels from the Seaboard to any output. I would use VEP to host the synths. Max is easy and powerful. I had intended to ultimately learn enough to control DMX lights at some point, so that cemented it – Max would be my signal routing and console creation tool for the Digital Synthesis Organ.
It didn’t take long playing with it before a number of other possibilities for rig control, and even composing tools, started to take form in my head. It is really true that once you can program, the world seems like a much more elastic place. I told my father-in-law, who is a master woodworker, that I was starting to “make jigs for my shop” and could now make my own audio and video tools. He remarked that I now have the musical equivalent of a Bridgeport milling machine – the only tool that can make itself.
Two years ago, I wrote a short piece on preparing for the future, musically. In it, I hinted that controllers would be changing the face of music making, and that has become increasingly apparent. DJ booths are migrating from turntables and mixers to fully programmable electronic control surfaces. Innovators have brought us stick-on sensors for piano keys (TouchKeys), and the next wave of control surfaces is coming to market in volume. I still believe this is going to “change the world” for creative musicians. Most acoustic instruments have been continuously refined for hundreds of years, but with very, very small changes. Put simply, they work! They are well adapted to human physiology, acoustic sound production, and tonal shaping.
Playable synthesizers are barely a few decades old, and polyphonic synths with stable pitch (read: digitally controlled synths, whether analog or plugins) are even newer. So far, synthesizers have been available through traditional keyboard interfaces or MIDIfied versions of guitar or wind controllers. Most of these controllers are cheap plastic and offer poor tactile feedback and control for nuanced performance. No one has produced a commercial string-based controller for violin-family instruments that I am aware of. Two years ago, my first foray into the edge of this world was the Infinite Response VAX-77 – a fairly traditional piano keyboard controller with the addition of polyphonic aftertouch and high-resolution velocity sensing. It was not a very big leap compared to a Haken Continuum.
In the time that has passed, a number of new instruments have been produced. Roger Linn calls this movement Polyphonic Multi-Dimensional Controllers, or PMCs for short. Check out the list on his site and read up on the different approaches inventors are taking. So far, pitch layouts are either based on the traditional keyboard layout or on a grid that more closely approximates the neck of a guitar or violin. With all the parameters that make a sound “alive” under one’s fingers, it remains to be seen whether there is a need for moving a bow or blowing in a tube to control synthesizer sounds. Observationally, few Eigenharp players seem to use the breath interface – the 3-D grid surface is expressive enough. Fingers are more controllable than a diaphragm from a muscular perspective, so I suspect bow and breath will remain primarily tied to acoustic instruments.
Readers will know that I just received one of these controllers, the Roli Seaboard Grand Stage. It is a powerful and expressive instrument, and the transition from one dimension of control to two is transformative. The LinnStrument, Madrona Soundplane, Eigenharp, and Haken Continuum all offer three dimensions of control. Interestingly, three of them are fully grid-based, and the Continuum uses its own linear equal spacing of semitones on its surface. None are traditional keyboard instruments. Even the Seaboard is quite different from a piano or an organ and has to be approached differently.
I have watched the Eigenharp for years with interest, but have been unwilling to approach it on its own terms, fearing that it would mean “starting all over” from a playing-ability perspective. My thinking on this front is changing because the grid-oriented controllers do have real advantages, as Roger Linn and others point out. Experimenting with the Seaboard and an Ableton Push has warmed me to the idea that I could play a grid-based controller successfully. Perhaps I am being weaned from an over-dependence on piano keyboards, but it appears that grid-based controllers will be a significant part of the musical world moving forward, and I want to be able to engage freely as they develop and mature. I suspect that technique and pitch recognition could be reasonably transferable between grid controllers, to the degree that different manufacturers present the same pitch organization on their grids. So I am now at least at the place where I think it would be good for me to learn a grid-based instrument.
The LinnStrument and the SoundPlane are both much more affordable than the Seaboard, Eigenharp, and Continuum, and I believe this will revolutionize the space. At $1,500-$1,900 they are priced in the range of entry-level professional or advanced student instruments, within reach for most in the “first world” with a bit of economic sacrifice. As sensor technology and computing cycles fall in price, we will ultimately reach an equilibrium where the finishing materials and touch surfaces drive most of the cost. Premium instruments won’t have better sensors, but will feel nicer and come with better cosmetics. For example, the Roli Seaboard feels fantastic. Anyone who sits at it keeps running their fingers over it, even without playing a note – it feels interesting and inviting as a physical object. The Madrona instrument is made of beautiful hardwood, like a traditional instrument. I am sure that, like the Seaboard, it feels solid, substantial, and carved out of a block of material – not flimsy or cheap. I expect that as the physical interface standardizes, we will see the market stratify into different price points based on the luxury and ergonomics of the physical controller.
The other factor that will be explosive is the ready availability of powerful software and community development options. I am learning Cycling’74’s Max right now to serve as the signal router for my digital organ project. It is easy, powerful, and VERY light on CPU. I am already empowered knowing that if I can imagine it, I can build it with the tools in front of me. Max is MUCH easier than Ruby or Python, which are themselves not that hard. The LinnStrument and SoundPlane both provide their source code as open-source software. This means that ANYONE can modify the code, come up with new uses, or extend it. If the makers stop producing an instrument, users can keep theirs running for years, just like classic cars. The internet, owners’ forums, and web technology ensure that this will be a rapidly evolving space where even the instrument inventors will be surprised at what happens, and how rapidly, as musicians customize their tools. It will not take hundreds of years for this space to evolve and mature, and that is exciting.
As these instruments get beyond a few hundred “early adopters” and are deployed by the thousands, whole new expressions will come out of them. What is hard to play on a piano or a guitar may be easy on one of these, and new possibilities will emerge.
I believe synthesizer programming will also rise to new levels. We will be able to have patches that rely less on programmed modulation to generate movement, and focus more on the harmonic and timbral content of our work and on how it maps to a surface we personally control. I doubt it has ever been more possible in history for a musician to make “exactly” the sound they want to hear for a given musical context. All of the main synths I use – the Bowen Solaris, Alchemy, Bazille, and Omnisphere – are capable of multi-timbral input. Native Instruments is lagging, but Roli’s PolyThru software makes this transparent. The sound generation engines are ready, the control surfaces are emerging, and musicians are as eager as ever to explore the world of sound and organize it to their taste.
So, I ended 2014 by welcoming one of these controllers into my studio. Will 2015 also see the addition of a grid-based controller? We will see, but I did sign up for the LinnStrument availability mailing list – I like the clear LED feedback on pitch organization (great for dark stages) and the automatic pitch centering, which is very clever. It will be a good year for electronic musicians and performers, and it may well be the first year that there is broad availability of polyphonic multidimensional controllers. Will you be participating?
The Roli Seaboard joins a number of powerful new controllers that are seeking to re-imagine how performers interact with keyboard instruments. The pianoforte offered keyboard players the first opportunity to control the attack and volume of a note. New controllers like the Seaboard, Infinite Response VAX77, Haken Continuum, Madrona SoundPlane, and the new Roger Linn LinnStrument all offer varying degrees of continuous control over notes. As we have noted, this raises the expressive potential of these new instruments to equal that of any acoustic instrument.
All this expression is possible because of advances in sensor technology and the ready availability and low cost of powerful processors. The quad-core processors inside a current-generation “smart phone” are more powerful than the desktop processors of 10-15 years ago – and people were doing multitrack audio work then. The result is that the sensors in modern cameras and musical instruments can output rich (14- to 16-bit) samples thousands of times a second. When read continuously, this is a LOT of data relative to the simple “Note-On” and “Note-Off” messages of the MIDI spec that we’ve had since the early 1980s.
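To get a feel for how much wider that continuous data stream is, here is a back-of-envelope sketch. The scan rate and message mix are my own illustrative assumptions, not a published spec: I assume each held finger streams a 3-byte pitch-bend plus a 2-byte channel-pressure message at 500 updates per second.

```python
# Rough comparison of a traditional note event vs. a continuous stream.
# All rates below are assumed for illustration, not taken from any spec.

NOTE_EVENT_BYTES = 3 + 3      # one Note-On plus one Note-Off message
UPDATES_PER_SEC = 500         # assumed per-finger sensor update rate
BYTES_PER_UPDATE = 3 + 2      # pitch-bend (3 bytes) + channel pressure (2)
FINGERS = 10                  # ten fingers held down at once

continuous_rate = UPDATES_PER_SEC * BYTES_PER_UPDATE * FINGERS
print(continuous_rate)                        # 25000 bytes/second
print(continuous_rate // NOTE_EVENT_BYTES)    # equivalent of ~4166 note events/second
```

Even with conservative numbers, ten held notes on a continuous controller generate the bandwidth of thousands of old-style note events every second.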
My VAX77 is closer to the traditional model than the Seaboard. The VAX77 sends high-resolution velocity information using the MIDI extensions designed to support that data, as well as polyphonic aftertouch as a continuous stream. It sends on a single MIDI “global channel”, much the same as any synth or controller. Programs like Modartt’s Pianoteq can use the high-resolution velocity to provide a wonderful playing experience. Other software ignores the high-resolution extensions and just sees the traditional 7-bit velocity. The aftertouch data is all on one channel, as you would expect, since aftertouch is a note property. So, the VAX77 integrates easily into any studio or stage environment.
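The high-resolution velocity extension works by sending a CC #88 (High Resolution Velocity Prefix) message immediately before the Note-On; its 7 data bits extend the Note-On velocity to 14 bits. A minimal sketch of how a receiver might combine the two (the function name is mine, for illustration):

```python
def high_res_velocity(cc88_lsb, note_on_velocity):
    """Combine a CC #88 (High Resolution Velocity Prefix) value with the
    7-bit Note-On velocity that follows it into one 14-bit velocity.
    Software that ignores CC #88 simply uses note_on_velocity alone."""
    return (note_on_velocity << 7) | cc88_lsb

print(high_res_velocity(0, 64))     # plain 7-bit velocity 64 -> 8192
print(high_res_velocity(127, 127))  # maximum -> 16383 (full 14-bit range)
```

This is why the extension degrades gracefully: a synth that never implemented CC #88 just discards the prefix and plays the ordinary 7-bit velocity.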
The Seaboard also sends velocity and continuous polyphonic aftertouch, but additionally sends continuous pitch-bend information per note. The challenge is that MIDI pitch-bend messages are not note messages; they are channel messages, meaning that they affect every note playing on that MIDI channel. This will not work if you want to play a chord and put vibrato on the top melody note – the pitch bend would affect all the notes. The good folks at Roli worked around this by having the Seaboard send data on up to ten MIDI channels at once. Ten fingers, ten channels – each has full pitch and aftertouch sensitivity continuously after the note has begun.
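The ten-fingers, ten-channels idea boils down to a channel allocator: each new note grabs a free channel, so the channel-wide pitch-bend and pressure messages only ever touch that one note. A minimal sketch of the scheme (class and channel numbering are illustrative, not Roli's firmware):

```python
class ChannelRotator:
    """Give each new note its own MIDI channel so that channel-wide
    messages (pitch bend, channel pressure) act per note.
    A sketch of the one-finger-per-channel idea; details are assumed."""

    def __init__(self, channels=range(1, 11)):  # ten channels for ten fingers
        self.free = list(channels)              # channels awaiting a note
        self.assigned = {}                      # note number -> channel

    def note_on(self, note):
        ch = self.free.pop(0)                   # take the next free channel
        self.assigned[note] = ch
        return ch

    def note_off(self, note):
        ch = self.assigned.pop(note)
        self.free.append(ch)                    # channel returns to the pool
        return ch

rot = ChannelRotator()
print(rot.note_on(60))   # 1 -- first finger gets channel 1
print(rot.note_on(64))   # 2 -- second finger gets channel 2
rot.note_off(60)
print(rot.note_on(67))   # 3 -- released channels rejoin the back of the queue
```

Because channel 1's pitch bend now belongs to one finger alone, bending the top note of a chord leaves the other notes untouched.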
The Roli is a thin instrument – just over 1″ tall – and so does not have 5-pin DIN MIDI ports. Data comes from the surface via a USB port and interfaces with their Seaboard Grand application.
This, in turn, creates a MIDI interface in OS X’s Audio MIDI Setup utility. The Seaboard shows up as a standard MIDI interface to anything inside the system. But it sends data on 10 channels, not one, so the virtual instrument must be capable of receiving on all channels (an “omni” mode, as ROMplers would call it). Roli has an interesting PolyThru application that hosts multiple copies of non-multitimbral plugins and then routes each finger to a unique copy of the plugin – clever. So, there are tools available to make almost anything work; one just has to deal with the reality of a much wider data stream – 10 channels – to fully maximize the surface.
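The routing trick PolyThru performs is, at its core, a dispatch on the channel nibble of each MIDI status byte: every incoming message is handed to the synth instance that owns that channel. A minimal sketch of that idea (the handler naming is mine, purely illustrative):

```python
def route_by_channel(status_byte, handlers):
    """Dispatch a MIDI message to the handler that owns its channel --
    the same idea (in spirit) as fanning ten Seaboard channels out to
    ten plugin instances. Handler names here are illustrative."""
    channel = (status_byte & 0x0F) + 1   # low nibble carries channel 0-15
    return handlers[channel]

# One "plugin instance" per channel, as an omni-style host might hold them.
handlers = {ch: f"instance-{ch}" for ch in range(1, 11)}
print(route_by_channel(0x90, handlers))  # Note-On on channel 1  -> instance-1
print(route_by_channel(0xE4, handlers))  # Pitch-Bend on channel 5 -> instance-5
```

Each finger's pitch bend and pressure then land on a private copy of the sound, which is exactly what makes per-note expression work with plugins that were never written to be multitimbral.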
In the next posts, I’ll explore how I’m routing this for my application.
After the first week with the Seaboard, I am quite happy with my purchase.
- The Seaboard is very high quality. The Seaboard’s exterior is carefully milled aluminum, all the surfaces are well finished, and there are very elegant details like the engraving at the back of the unit. Oh, and it only weighs 15 lbs! It is very easy to transport!
- The Seaboard fully delivers on the promise of making a lot of expression available in an approachable package for keyboard players. Having the notes laid out in a familiar fashion, with excellent tactile feedback for navigation, is perfect for me. It splits the difference between traditional controllers and unique instruments like the Haken Continuum or Eigenharp, which inhabit their own space entirely. This middle ground feels perfect to me as a player.
- Roli has been very responsive to questions that I’ve had, and I have had quality interaction via Skype with their support team. They are working hard to make the Seaboard better and are clearly listening to their early users. They have been a pleasure to do business with from my first demo at Roger Linn’s house to post-sale support. A quality outfit.