If you have been following my blog, you know I have laid out a number of fairly obvious possibilities for the evolution of music-making. To take full advantage of them, I believe there are two cross-company initiatives that would transform the marketplace for everyone:
1. Define a standard plug-in data model and API for parameter passing. Today every manufacturer has its own data model and writes to specs like VST, AU, etc. This is great and works well, but all of these technologies involve local processing. Vienna has figured out how to run VST and AU plugins remotely, but that is just the beginning. What we need is a specification that allows processing to be outsourced to internet-based engines. Instrument models, incredibly rich reverb effects, and powerful synth engines that would overwhelm a desktop PC can be envisioned and created by existing instrument and effect designers. The problem is not the algorithms. The problem is the communications infrastructure and data model for passing the parameters. A group with representation from DAW manufacturers, plugin vendors, digital instrument makers (like Eigenharp), and sample vendors/modeling companies could produce a rich, extensible data model that could facilitate an online/offline render model and incorporate the kind of rich data that next-generation musical controllers can produce in a post-MIDI world. The spec needs to be open, and not tied to any one vendor. The Internet Engineering Task Force and dozens of other high-tech standards bodies show how this works to the benefit of all participants.
2. Create a standards body for virtual performance spaces. Like the plugin specification above, this would specify a data model, interfaces, etc. for a virtual concert space. As I laid out in my previous post on the future of music, there will be entirely digital performances in the future. What we need is a model that allows all the vendors to plug in seamlessly to a cloud-based concert hosting service. Everyone from mix-engine and FX providers to lighting technicians and instrument modelers needs to collaborate to make a virtual concert happen in a virtual space. Provision must be made to use rich controller and body-suit information to create 3D performers, and avatars that can actually play instruments on virtual stages. Even 30 minutes of reflection on the inputs and outputs for a virtual concert experience points directly to the need for a standards body to coordinate industry input and define interfaces that dozens of vendors can use to meet customer demand. We don't need proprietary digital islands – that would delay progress by 5-10 years. Existing enterprise technology adoption illustrates this – closed development is a dead end, and technology will route around roadblocks.
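To make the parameter-passing idea in point 1 concrete, here is a minimal sketch of what one event in such a data model might look like. Everything here – the field names, the units, the JSON wire format – is my own illustrative assumption, not part of any existing spec.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ControlEvent:
    """One sample of rich controller data, far beyond a MIDI note-on.

    Field names and units are hypothetical -- a real spec would be
    negotiated by DAW, plugin, and controller vendors.
    """
    time_s: float    # timestamp relative to transport start
    voice_id: int    # which finger/string/key produced the event
    pitch_hz: float  # continuous pitch, not a quantized note number
    pressure: float  # normalized 0.0-1.0 key/bow pressure
    velocity: float  # normalized strike/attack speed
    breath: float    # normalized breath pressure, 0.0 if unused

    def to_json(self) -> str:
        # A text wire format keeps the spec open and vendor-neutral.
        return json.dumps(asdict(self))

event = ControlEvent(time_s=1.25, voice_id=3, pitch_hz=442.0,
                     pressure=0.62, velocity=0.8, breath=0.0)
wire = event.to_json()
restored = ControlEvent(**json.loads(wire))
```

The point of the sketch is its dullness: an open spec succeeds when any vendor can implement the wire format in an afternoon and extend it without breaking anyone else.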
Both of these are multi-year efforts that will require a lot of work. As the CPU and RAM vendors push forward, the timeline for broad distribution of the requisite horsepower and consumer devices will roughly coincide. A 3-5 year timeframe for the development of this technology is reasonable, and most MI companies would stand to benefit tremendously. Even DSP-based companies like Avid or UAD could easily port their algorithms and control surfaces to this world.
Organization of these bodies will create a rising tide that lifts all boats within the industry, and vastly expand what artists can deliver.
Orchestral mockups are presently a LOT of work – as much work as actually composing the piece. On the one hand, this is reasonable – one person is trying to simulate 50-100 players. It stands to reason that this will be hard and fraught with complexity. As processor power increases and instrument models evolve, we should expect this area to be transformed. Much of the variation that has to be hand-entered today should be performed via a "rich-data" controller like an Eigenharp Alpha or some string/wind variant geared toward those kinds of players. These controllers output vastly more information than MIDI could ever handle, and they provide the kind of expression data that makes an analog instrument so rewarding. Think about the possibility of feeding a violin model parameters like bow pressure, speed, angle, distance from the bridge, string friction coefficient, string bending force, etc. Once this rich data is in a suitable DAW, it will be possible to use algorithmic means to vary it, run it through different models, and truly make whole sections of related but individual performers. This has the potential to revolutionize orchestral mockups and completely eliminate manual sample switching.
It would be ideal if rich and very expensive instrument models were accessible in the cloud – in two-part form: smaller models to write with, and crazy-rich models to render with. The history of all samples is that they get cheaper over time, no matter how real they seem initially. In the future, one or a handful of highly skilled players in each section will play rich-data controllers in real time. Sample switching is not necessary since the models record all string/bow movement, pressure, bow speed, pluck point, fretting point, breath parameters, etc. All data goes to the cloud as inputs to the model. Algorithms can create interpolated players using the rich data to fill out the sections and avoid repeating sounds identically; different instrument models can be mixed as well to further vary the ensemble. Incredibly expensive reverb can put every player into a common space. This reverb can be almost infinitely expensive computationally because of the power available in large internet data centers. Cloud-based CPU complexes can render the sound and output individual tracks for mixing in 24-bit surround sound, or provide direct output as a simple stereo wave from modeled microphones in the modeled room.
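As a toy illustration of the "interpolated players" idea, the sketch below clones one rich-data performance into a whole section. I am assuming the performance arrives as per-note parameter dicts, and the jitter ranges are invented for the example.

```python
import random

def make_section(performance, n_players, seed=0):
    """Clone one rich-data performance into a section of individual players.

    Each clone gets small, seeded random offsets in timing and bow
    pressure so that no two players repeat the sound identically.
    """
    rng = random.Random(seed)
    section = []
    for _ in range(n_players):
        player = []
        for note in performance:
            varied = dict(note)
            # +/- 15 ms of timing slop, like a real section
            varied["time_s"] = note["time_s"] + rng.uniform(-0.015, 0.015)
            # small pressure variation, clamped to the normalized range
            varied["bow_pressure"] = min(1.0, max(0.0,
                note["bow_pressure"] + rng.uniform(-0.05, 0.05)))
            player.append(varied)
        section.append(player)
    return section

solo = [{"time_s": 0.0, "bow_pressure": 0.5},
        {"time_s": 0.5, "bow_pressure": 0.7}]
violins = make_section(solo, n_players=8)
```

A real implementation would vary far more parameters and feed each clone through a slightly different instrument model, but the principle is the same: record once, render a section.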
Whoever figures this out will have most of commercial music beating a path to their door. If someone like the VSL folk figures out how to take their samples, apply modeling over the top, and come up with a fast data interface (think Eigenharp-level data feeds) in cooperation with a digital instrument maker, they will be an incredible force to be reckoned with. The SampleModeling stuff is already amazing. Put this in the cloud, where computation is limitless for the final render, and the local composer's responsibility is just to get rich data off a digital controller.
First, allow me to thank you for the incredibly rich capabilities you already deliver. Being able to run 30-50 tracks off my MacBook Pro is most excellent! This letter, however, concerns the future and how you might re-invent the DAW to take advantage of next-gen computing horsepower.
Today, all audio plugins have to operate in real time. This means that the average laptop drives the quality level for the whole industry, since that is what most people use. More computationally expensive algorithms exist, but they are not compatible with real-time playback on a laptop. We need the ability to "render" our final audio, just like the video folk. Video processing demands massively more than a desktop PC can handle, so all previews are of lower quality, but are fine for editing, color correction, etc. The final render, however, runs at full resolution, full frame rate, and full color bit depth. In the music world we don't have this option. The DAW company that figures this out will have a massive advantage. If the software is smart, it can tell when a full version vs. a draft version will do, and automatically render in the background what is needed. Video companies do this, and some even use GPU cards to accelerate algorithms – another viable strategy.
What we need is an online/offline model. I can use existing plugins to work – they are real-time and have great sound. But I want my plugins to have access to algorithms so powerful they would halt my computer for days. This kind of power is available in the cloud, but I can't send data to the cloud in real time; there is too much latency. What I need is for my plugins to be able to stream audio to the cloud in the background, process it using very expensive algorithms, and return my processed audio to the track as a "freeze" track. Even as good as a UAD or Pro Tools card is, more quality can be had – particularly for reverb and highly involved synthesis engines.
You could leave this up to the plugin vendors and simply make a way for them to "play a track" in the background through their plugin (a great use for all the cores in a modern multi-core machine). Or, you could make a torrent server that centralizes these requests, schedules them, and maximizes communication efficiency, keeping track of which tracks have been edited and need to be updated. Or, you could allow plugin vendors to make a "render" algorithm that runs on the final bounce/mix-down. Because this doesn't have to be real-time, more expensive processing can be used. Plugin vendors can get their real-time tools close enough to the render tools for engineers to have confidence that the result will be "the same, only better".
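The bookkeeping behind that second option – a scheduler that re-renders only edited tracks – might look something like this hypothetical sketch. The class and method names are mine; no DAW exposes such an API today.

```python
class BackgroundRenderQueue:
    """Track which DAW tracks still need a cloud render.

    The scheduling policy (simple dirty flags) is an illustration
    of the idea, not any DAW's actual implementation.
    """
    def __init__(self):
        self._dirty = set()   # tracks edited since their last render
        self._frozen = {}     # track id -> rendered "freeze" audio

    def mark_edited(self, track_id):
        # Any edit invalidates the previous frozen render.
        self._dirty.add(track_id)
        self._frozen.pop(track_id, None)

    def pending(self):
        # What the background uploader should send next.
        return sorted(self._dirty)

    def store_render(self, track_id, rendered_audio):
        # Called when the cloud returns the processed freeze track.
        self._dirty.discard(track_id)
        self._frozen[track_id] = rendered_audio

    def frozen(self, track_id):
        return self._frozen.get(track_id)

q = BackgroundRenderQueue()
q.mark_edited("vocals")
q.mark_edited("drums")
q.store_render("vocals", b"...rendered audio...")
```

After the "vocals" render comes back, only "drums" remains pending – which is the whole point: the expensive cloud work follows the edits, not the playhead.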
If you do this, your plugin vendors will love you. All software synth makers should be requesting this from you. This is how they can keep their best code in the cloud and not deal with copy protection, etc. It also opens up the idea of an online plugin marketplace. They can give lower-quality versions away and charge for the renders. Surely you don't want Propellerheads to own this by themselves (they already have an online plugin marketplace)? Your users will also love you. This will let me access "the high-end stuff" on a project-by-project basis. If I am just writing demos for my band, I may not care, but if I am polishing our album, I would definitely spend extra to get the last bit of great sound via some cloud-based plugins that are so expensive they don't even fit in a $5k Bricasti box or a $3k Access Virus chassis.
Dear algorithm magicians,
You work on the most CPU- and memory-intensive effect in the business. A desktop PC, even a fast one, is massively compromised for real-time reverb processing. The trick is that many of us don't need real-time processing all the time. Whoever builds the most expensive, best-sounding algorithm in the cloud stands to win huge business. What most of us need is a solid "draft quality" algorithm for working, and then access to an offline final-rendering plugin of the highest quality. As an orchestral composer, the best option today is something like a Bricasti box, a TC6000, a Lexicon, or Vienna's "MIR" (which consumes much of a single PC). I don't need that to write, or even mix. I need it for the final output. Someone needs to make a plugin that handles this. Rendering is OK for video, and it is OK for audio too. Let me make a better-than-fantastic mix by running my PC all night, or sending data to the cloud. Here's the feature list:
1. A reverb plugin I insert in my DAW needs to deliver a competent base reverb. Think Logic's Space Designer or something similar. Not exquisite ear candy, but credible and competent – give this away. It is the "shopping cart" for your big algorithms. You don't have to worry about piracy because your big algorithm never lives on their computer.
2. The plugin is torrent-enabled behind the scenes to handle the upload/download to the cloud. The cloud uses massively parallel processing to crunch things at least as well as a Bricasti – but why not 100 Bricastis? Cheap and on-demand. Your business scales with demand; you only pay for what you use. It's a cash business.
3. When I am ready to bounce, I enable the reverb “sends” to go to the cloud, and play my piece. The data goes through the plugins, up to the cloud and comes back as tracks with reverb.
4. Mix these in to taste. Freeze tracks can preserve the render.
5. If the DAW companies can be involved, then someone can make it so that the track contents constantly stream in the background up to the cloud. Returned tracks are “frozen”. Usually as a mix comes together, many tracks are not changed. If edits are done, background sending gets the changes rendered. Audio tracks for a 3-10 min song are small bandwidth in a broadband world.
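Point 5 boils down to change detection: only edited tracks should cost upload bandwidth. Here is a minimal sketch, using content hashing as the (assumed) change test; the function and variable names are my own.

```python
import hashlib

def changed_tracks(tracks, last_synced):
    """Return the track names whose audio changed since the last upload.

    tracks: name -> raw audio bytes; last_synced: name -> previous digest.
    Hashing locally means unchanged tracks cost zero upload bandwidth.
    """
    changed = []
    for name, audio in tracks.items():
        digest = hashlib.sha256(audio).hexdigest()
        if last_synced.get(name) != digest:
            changed.append(name)
            last_synced[name] = digest
    return changed

synced = {}
session = {"piano": b"take1", "strings": b"take1"}
first = changed_tracks(session, synced)   # everything uploads once
session["piano"] = b"take2"               # edit one track
second = changed_tracks(session, synced)  # only the edit re-uploads
```

On the second pass only "piano" goes back to the cloud, which is exactly the behavior that makes background rendering cheap for a 3-10 minute song on broadband.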
Done. Mega-expensive sound accomplished. Easy to use. $1 per instance per song. Why buy a $5k box when I just want to polish one to a dozen tracks? Make it so everyone can afford the best algorithm and win big. Be known as the best room modeler and make interfaces to all platforms. Brand the room algorithms, not hardware. Past algorithms will only be interesting for dating a sound. Buying dedicated hardware is great for live shows and constant daily use, but there is a whole market of folk who would happily pay a few bucks to access "killer reverb" for a special project. These folk are NEVER going to spend $2-5k for a high-end box. The folks who want a dedicated box are still going to buy one. I want a Bricasti for the studio, would use it every day, and would still pay to render out if something better was available there! You can't audio-process a live show in the cloud – network latency is too great – so you will still sell expensive reverb boxes with DSP in them. This is about market expansion and the long, long tail of the whole internet.
Ultimately having the most expensive algorithm will win as CPU power becomes ever more free. In a few years, music made on cell phones will be uploaded to the cloud for processing. This is the early footprint – build the brand now for customers that will come online 5 years from now when cell phones are more powerful than current loaded Mac Pros and have better broadband than a cable modem today.
How to prepare for the future (Performer/Composer edition)
1. Get expressive control of all your fingers at musical tempos (clean 16th notes at 120-132bpm – this can be done on keyboards, guitars, new controllers like the Pico, etc.). This includes the ability to execute scales, arpeggios and other patterns in musically useful ways. Once the playing mechanism is freely independent and trained, it can easily adapt to alternate controllers with a bit of practice.
2. Get expressive control of your wind stream (pick a wind or brass instrument and learn it)
3. Learn the basics of subtractive synthesis, and a DAW like Ableton Live LE ($99). Start with Figure for the iPhone at a cost of $1.
4. Learn the basics of music theory and how to read notation.
5. Write music constantly. Loop-based or through-composed, it is all good – the future belongs to entertainers, virtuoso performers, and composer/songwriters who build multi-track environments for others to play in, and to accompany the events of life. Writing good themes, catchy hooks, and great beats will never go out of style.
6. Separate musical quality from instrumental texture. Great themes will be performed on lots of different virtual instruments. Some music (like a lot of Bach) will work equally well with very different instrumentation. Concentrating on themes and their development rather than particular instrumental expressions will help create the broadest future uses.
7. Experiment with expressive performance of virtual instruments. How much expression can you drive? How evocative can you make the sounds you use?
8. Traditionalists: relax. The violin, symphonies, orchestras, pianos, and even electric guitars aren’t going anywhere. Television didn’t kill radio. Virtuoso performers will always be in demand. Great musicians will always be appreciated, as will great instruments. I still want a Fazioli concert grand as well as an Eigenharp Alpha! Nanomaterials and composites will revolutionize traditional instruments in wonderful ways – they will sound better, project better, respond quicker, and be customizable to the player. (They will even be cheaper as on-demand manufacturing becomes common). Of course, there will always be a high-end of traditional makers, working by hand, with traditional materials. This isn’t a better/worse world. It is a both/and world where old and new will co-exist.
I just finished reading Peter Diamandis and Steven Kotler’s provocative book, Abundance. The core thesis of the book is that several exponential technologies are changing the way we live for the better, and will result in profound change in the years ahead. The exponential technologies are:
1. Networks and Sensors
2. Artificial Intelligence
3. Digital Manufacturing and Infinite Computing
In the book, they lay out dozens of examples of how these technologies are being deployed against poverty, water and sanitation problems, energy problems, etc. They lay out a convincing case that the world as a whole can experience abundance, which is certainly an enticing thought.
It does not take someone with a PhD from MIT like Peter Diamandis to realize that if you can align yourself with exponential forces, then amazing things will happen. This is relatively basic mathematics. It is immediately clear how these things apply if you are working as a geneticist, a software developer, a robotics engineer, etc. But it takes a bit more creativity to look at these trends and discover how they apply to the other areas of life. What these technologies enable in their primary sphere is only the beginning. The 2nd and 3rd order follow-on effects will dwarf the initial use cases. We already know what happens when computing becomes near free, but what will happen when education, health care, clean water, and energy become near free?
The rest of this essay explores how these forces could affect musical expression on a planet-wide basis.
The Present State of Musical Affairs
Let us first be clear that there has never been a better time to be a musician on the planet. Instruments are already cheaper and more available than they have ever been. Digital technologies have made pianos available for a few hundred dollars, and put a whole recording studio in the hands of anyone with a laptop and a few hundred dollars. High school kids can self-fund an album recording and promote it themselves. Music distribution is practically free on the Internet, and there are even teachers who teach over Skype. At least, that is the truth in the First World – it isn't the whole story for the developing world.
Most of the world, (and even most musicians) do not use anything digital to make music. They use handmade drums, cheap guitars, wooden flutes – whatever is at hand. People WILL make music with whatever they have. But where money is plentiful, we still tend to use acoustic instruments: violins, pianos, guitars, and other “traditional instruments”. What we call “ethnic instruments” in the West are really highly developed traditional instruments, little different from our own. Even so-called “electric” guitars are not really digital devices – they are 100% analog. Where digital technology has been employed in keyboard, woodwind, and percussion instruments, it has largely been to emulate acoustic instruments, or provide a traditional control interface, so that traditionally trained musicians can more efficiently interact with music software. Of course, there has been a huge boom in electronic synthesis, but interestingly, relatively little innovation in instruments themselves.
In the main, this is easy to understand. The piano has been continuously developed for a few hundred years. Same with the violin family of instruments. Pipe organs are even older. Drums are older still – and the human voice the oldest of all. While human beings have an amazing array of shapes and sizes, the basic mechanics of the human body have resulted in several basic classes of instruments: things we blow into, things we bow, and things we hit or strum. Our existing instruments are VERY evolved versions of these ideas. The difference between a concert flute and a child's recorder is in execution, not idea. Both are tuned pipes, excited by breath, with pitch change accomplished by holes that alter the apparent length of the tuned pipe. Of course, there are huge differences in materials, craftsmanship, valves vs. no valves, etc., but the basic control mechanisms are quite similar. The violin family of instruments is hugely evolved from its beginnings, and modern makers are equaling any work done in the past. Pianos are considered a largely "solved" problem, though we will no doubt see carbon-fibre or composite soundboards that are better than wood at some point, with composite actions that need little regulation and are indistinguishable from their wooden counterparts in function. In the end, analog design is hard – and with hundreds of years spent perfecting certain expressions, we have some pretty entrenched ideas of what a musical instrument is. Making an instrument more expressive than a cello is not likely to be a weekend project by a lone inventor. Making a portable polyphonic instrument as capable of musical expression as a 6-string guitar is a significant undertaking. Perhaps equally significantly, almost all our musical education is tied to those instruments, which may suit us less over time.
In the West, many have outsourced music making to their iPhone, even as schools have cut back on art education. It is now possible to get world-class music anytime, anywhere, without ever touching an instrument. So we have a population that LOVES music, and consumes it almost exclusively digitally, but an educational system for music that is tied to analog instruments and expressed through digital recordings. This is broken. The concepts needed to understand folk music, rock and roll, and even traditional orchestral music can be learned in a year or two by any junior high or high school student. Intellectually, the essential components of music theory are no harder than high school Geometry. The equivalent of one college semester of music theory is enough to understand most popular music. But that is cold comfort compared to what is needed to learn to play an instrument. Learning an instrument is not primarily an "understanding"-based kind of learning like we do in school. It is an experiential, neuromuscular training activity, more like learning a martial art or gymnastics. Musical instruments are cheaper than they have ever been, but are still uncommon in the general population for one simple reason – they are hard to learn. Musical skill is even rarer, and is handed down in an oral tradition from a single teacher to a student much as it was 200 years ago.
The Internet has sped this up a lot for popular music, and "slow-down" software has been a boon for budding shredders everywhere. But we haven't seen the equivalent of a Khan Academy for music. It is very easy to type math answers into a computer. It is a lot harder to play your sax solo as homework into a computer. Part of the reason is the instruments themselves. While the output of the instruments can be digitized, the actual control inputs are analog and can't be captured electronically. Instruments cannot be configured to the level of the beginner and then expanded as skill develops. Teachers need to inspect movement to see what is going on, but even they can't tell what is actually happening inside a student's body. Games and other cutting-edge learning technologies have never really been developed – in part, I believe, due to the interface issues. Where interface issues have been solved in games like "Guitar Hero", the interfaces are too simplistic to be musically expressive. Given the millions of plastic "Guitar Hero" controllers sold, the person who can solve the digital instrument controller problem stands to make a fortune.
Most acoustic instruments make a LOT of sound. They are optimized for this purpose, since amplification is a very new technology in the history of music. When you need people in the back of the concert hall or tavern to hear you, then a fully resonant, alive instrument is exactly what you need. But if you live, like much of the world, in a small apartment in a city, your neighbors may not be as enamored with hours of trumpet practice every evening. Most acoustic instruments have an economic value and a physical presence that just couldn't work on a subway or a bus. Acoustic instruments are conspicuous in use. They don't have headphone jacks, and carrying microphones, audio interfaces, and laptops is way too cumbersome outside of professional or performance situations.
Put simply, the possibility of unleashing an entire planet to be actively musically creative is available in the next ten years. We can move from a world where most people passively consume music to one where anyone can enter into music making with a digital instrument that meets them at their level of skill. There are several ancillary facts that must be understood to place the opportunity in context:
1. Over half the planet has a cell phone. Even in Africa, there will be 125 million smartphones in use by 2015.
2. Every smartphone has enough power to run a basic multi-track recording and synthesis package. Download Propellerhead's Figure app from the App Store to see this. An average laptop already has more power than most musicians can use. This will only accelerate. We should assume that it will be essentially free to run a full "Omnisphere-grade" synthesizer and a full DAW package on the average cell phone within 5 years.
3. Internet access is already widespread and will become ubiquitous, though it will not always be accessed by laptops (think cell phones and tablets). In the First World, it will be broadband quality or better all the time.
4. Computing power and storage in the cloud is close to free, and supports "almost-free" business models.
5. 70% of the earth's population will live in cities by 2050. Space and noise constraints exist now and will only grow.
6. Modeled instruments or hybrid modeled/sampled instruments will likely replace sample-only instruments. They are simply more expressive. My modeled piano has infinite half-pedaling – and is more expressive than my acoustic grand piano in that regard. Samples can be very high quality but are a chore to manipulate and hard to use in real time. Models can react in real time. This is what performing musicians need – real-time response and control.
What is needed is a digital instrument – one that Moore’s law and mass manufacturing makes cheaper and cheaper with several important characteristics:
Open hardware design specs. Part of the reason violins work so well is that no one has a patent on them. Anyone can try to improve them. Digital technologies mature very, very quickly in an open environment. Parts are cheap and getting cheaper. Patented hardware will just slow adoption. The goal should be to make the digital equivalent of the electric guitar – an essentially "finished" instrument, yet one with no shortage of manufacturers or differentiation. Key hardware concepts:
– at least 2.5 octave playable range (shiftable on the fly) – that’s enough for killer solos, more in artist grade instruments
– polyphonic capability in a portable package – it is one reason why guitars are so useful
– breath control sensitive to the full range of human respiratory power (current breath controllers are very inadequate in this regard)
– pressure-based expression as sensitive as a violin bow
– velocity-based expression as good or better than a piano or a hand drum
– pitch-based expression as good or better than bending strings on a guitar
– ability to use all fingers simultaneously
– allow for expressive movement while playing – it is part of what makes electric guitar playing so watchable
– re-mappable control surface (hide the “bad” notes, learning games)
– enough data rate for MIDI 2.0 transmission
– wireless connectivity to a cell phone (Bluetooth is ideal, but likely too bandwidth-constrained)
– USB is acceptable in the meantime (iPads have adapters)
– portability: think Eigenlabs Pico form factor
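A back-of-envelope check on the Bluetooth point in the list above. Every figure here is an assumption for illustration, not a measured spec of any shipping controller:

```python
# Back-of-envelope data-rate estimate for a rich controller.
# All figures below are assumptions, not measured specs.
keys_active = 10        # all fingers down at once
params_per_key = 5      # pressure, 2-axis position, velocity, pitch
bytes_per_param = 4     # a 32-bit float per parameter sample
update_rate_hz = 1000   # 1 ms control resolution

bits_per_second = (keys_active * params_per_key *
                   bytes_per_param * 8 * update_rate_hz)
megabits = bits_per_second / 1_000_000

# Classic Bluetooth delivers usable throughput on the order of
# 2 Mbit/s, so a fully played instrument sits uncomfortably close
# to that ceiling -- which is why the list above treats Bluetooth
# as likely too constrained, and USB as the interim answer.
```

Under these assumptions the stream is about 1.6 Mbit/s before any framing overhead; halve the update rate or the parameter count and Bluetooth becomes comfortable, which is exactly the kind of trade-off a standards body would have to settle.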
The hardware goal is a base sensor and data-rate specification that allows musician-grade expression to be in every Jansport backpack in a major city. The hardware is really just an interface to software and cloud functionality, and so will be able to respond appropriately to someone with little skill or a lot. Of course, professional musicians and entertainers will use larger, more capable, more expensive, and nicer models. It is really hard to beat the Eigenlabs controllers in this regard for a "key-oriented" controller. There are many other alternate controllers, but the Eigenlabs are the best thought out in my opinion, and meet most of the criteria above. There is certainly room for an equally well-thought-out string and woodwind controller that maps to those types of players as a natural interface. The future demands controllers that are not based on MIDI. We need high-bandwidth controllers that respond to the full range of human control inputs (i.e. breath controllers actually sensitive over the WHOLE range of human breath, not just a small portion of it), and we need rich data to feed instrument models: string pressure, bow speed, angle, position relative to the bridge, etc. This is what will transform static samples into real modeled instruments.
Commodity manufacturing. Production volume lowers price. The price target for a basic consumer controller should be sub-$50. At this price it can be an accessory to a cell phone. Artist versions will always be more expensive, more sensitive, and more profitable. Some things won't change. Already an Eigenlabs Pico can be purchased for about $500 USD. Volume production should allow it to get under $250, and new materials will enable future devices to go well below that, particularly if the large MI companies get involved. New nano-materials will be custom-built for controller bodies at very low cost, with integrated sensors that do exactly what is needed. Then we can have a continuum from a musically useful "game controller" all the way to artist-grade instruments. An open hardware spec will enable a cottage industry of high-end craftspeople to remain involved. The large MI companies should welcome this, not fear it. Significant innovation comes from the small shops, and they pose no commercial threat since they have such low-volume production.
Open software. Just like computers and phones need an operating system, digital instruments need a host application. This application needs to accept the standard inputs from hardware. The software needs to provide:
– cellphone, tablet and laptop connectivity as a plugin to music software
– MIDI 2.0 output of rich expression data – high bandwidth, nuanced data
– Extremely low latency – should feel analog in its response
– ability to host arbitrary software instruments (AUs, VSTs) – I want to create and play my own instruments
– Map and save control surface parameters
– loop-based music creation software – like Reason/Ableton – 8 tracks to start
– connectivity to cloud-based services
– some effects like reverb or high quality instrument models are very computationally expensive for a phone. If the digital performance information is rich and nuanced, then better quality models can be accessed in the cloud, and the final project rendered there, rather than on the device.
– storage of projects, patches, etc
– publishing songs/social media interactions. This needs to be open so that social media vendors like SoundCloud, Facebook, etc. can write conduits to post material to their sites.
Cloud-based Features. By 2023, $1000 laptops will have the computational equivalent power of an entire human brain. Cell phones will have far eclipsed what even a high-end Mac Pro can do today. But even a Mac Pro stuffed to the gills is still stronger for being connected to the cloud. "Saving" a file locally makes little sense if it can be backed up, automatically distributed, and shared in the cloud. Collaboration is almost more important than individual creation. What I make needs to be shared, edited, mastered, produced, integrated, synced to video, and a thousand other things. If we have plug-in delay compensation today, we can have network delay compensation in the future. If I am performing on someone's song with my digital controller, they can stream the bits to me, and my playing nuances can be recorded in their software as well as mine for future editing. I can "monitor" a sound off my cellphone while the cloud renders a much higher-quality model through a digitization of a killer acoustic space and downloads it to my phone, for playback within a few seconds after I finish playing. Recording studios will be set up primarily for workflow, but some will have traditional rooms to record acoustic instruments. Many projects will never use a studio at all. Digital instruments interfaced through a cell phone will be able to access the best instrument models on the planet. Just as you can rent varying qualities of instruments today, you will be able to "choose" the software model you wish to use. Better models are bound to be more expensive. The cloud will make computational expense a non-issue for instrument design and effect quality. Hardware DSP will be completely unnecessary, except perhaps as used in the cloud itself or for large studios. Sound designers who make highly expressive instruments for people to play on their controllers will be very valuable.
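The network-delay-compensation idea can be sketched in a few lines, assuming events carry sender timestamps and that the offset has already been measured (by something like a ping exchange). Both assumptions, and all the names, are mine:

```python
def compensate(events, measured_offset_s):
    """Align a remote player's events to the local timeline.

    Each event carries the sender's timestamp; subtracting the measured
    network offset (clock skew plus transit time) lines the remote
    performance up with local tracks, just as plug-in delay compensation
    lines up processed audio today.
    """
    return [{**e, "time_s": e["time_s"] - measured_offset_s}
            for e in events]

remote = [{"note": 60, "time_s": 10.120},
          {"note": 64, "time_s": 10.620}]
aligned = compensate(remote, measured_offset_s=0.120)
```

A real system would re-measure the offset continuously and smooth it, but the recording-side correction really is this cheap once the events are data rather than audio.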
Performers will play custom virtual instruments – even commissioning them for a concert or a recording session. The cloud functionality needs to be fully open, distributed and heterogeneous, and all this will develop organically.
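The network delay compensation mentioned above works on the same principle as today’s plug-in delay compensation: measure the latency, then shift the late material back into alignment. Here is a minimal sketch of that idea; all names are hypothetical, and a real system would measure latency continuously and operate on streaming audio buffers rather than Python lists.

```python
# Sketch of network delay compensation, by analogy with plug-in
# delay compensation in today's DAWs. Hypothetical illustration only.

def latency_in_samples(round_trip_ms, sample_rate=48_000):
    """Convert a measured round-trip time to a one-way sample offset."""
    one_way_ms = round_trip_ms / 2
    return round(one_way_ms / 1000 * sample_rate)

def compensate(remote_take, latency_samples):
    """Shift a remote collaborator's take earlier by the measured
    one-way latency so it lines up with the local timeline."""
    if latency_samples <= 0:
        return list(remote_take)
    # Drop the leading "late" samples and pad the tail with silence
    # so the take keeps its original length.
    return remote_take[latency_samples:] + [0.0] * latency_samples

# A 20 ms round trip at 48 kHz works out to a 480-sample offset.
offset = latency_in_samples(20)           # 480
take = [0.0] * 480 + [1.0, 0.5, 0.25]     # nuance arrives 480 samples late
aligned = compensate(take, offset)        # now starts at sample 0
```

The hard part, of course, is not this arithmetic but measuring latency accurately and keeping clocks in sync across the network, which is exactly why a shared specification matters.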
Music Education 2.0. Instead of learning to read and play music through out-of-copyright folk songs, or even quality classical composers, we can create a whole online academy. Just as Khan Academy has revolutionized math instruction, the same opportunity exists for music education. Levels can be created, lessons given, and technique evaluated by analyzing the streams of data coming off the instrument. If someone is pressing too hard, or not hard enough, or is consistently inaccurate in some way, this will be easily diagnosed. As nanotechnology biological sensors become common, it will be possible to wear “sleeves” that monitor muscle movement and feed that data back for analysis and trending. Even analog instrumentalists will benefit from this kind of training. It will do for musicians what treadmills, VO2 max testing, and power meters have done for sports. Once the neuromuscular activities can be quantified, deep analysis will help students progress at the optimal rate. Exercises and skills can be tailored to each student. Teachers can be local or remote. Control surfaces can be mapped to a student’s level of development and progressively “unlocked”; wrong notes can be “mapped out” of beginners’ concerts, increasing confidence. Keys can light up to “show the way” through a new piece. Advanced performers can join virtual ensembles or jam with increasingly difficult rhythms and harmonies. Smart software should automatically adjust if student performance is inadequate. Perhaps best of all, music performance can be clearly separated from music theory in a way that has never before been practical. Since controllers will be able to be both simple AND expressive, they will open up a whole new world of composition and performance opportunities. Imagine a keyboard instrument that allowed a beginner to fire off a loop that plays what a more advanced student might play live.
This will allow everyone to participate in music making, and even allow students and virtuosos to share a stage in a way that is just not possible with analog instruments.
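The kind of diagnosis described above, spotting a student who consistently presses too hard or too softly, amounts to simple statistics over the controller’s data stream. A hypothetical sketch, with invented thresholds and function names purely for illustration:

```python
# Hypothetical sketch: diagnose touch from a stream of normalized
# (0.0–1.0) per-note pressure readings off a digital controller.
# Target and tolerance values are invented for illustration.

def diagnose_pressure(readings, target=0.5, tolerance=0.15):
    """Return a coaching hint from a list of pressure readings."""
    if not readings:
        return "no data"
    avg = sum(readings) / len(readings)
    if avg > target + tolerance:
        return "pressing too hard"
    if avg < target - tolerance:
        return "pressing too softly"
    return "pressure on target"

print(diagnose_pressure([0.8, 0.75, 0.9, 0.85]))  # pressing too hard
print(diagnose_pressure([0.5, 0.45, 0.55]))       # pressure on target
```

A real system would trend this per finger, per passage, and over weeks of practice, which is where the treadmill/power-meter analogy really pays off.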
Musicians will perform in virtual performance spaces – all of which will sound as good as or better than physical spaces. Some will perform as themselves. Others will have avatars as well-known as any international star today, allowing their creators to move with anonymity in the real world – a welcome relief for many creatives. The fame-hungry will, of course, pose as themselves. Holographic techniques will allow musicians to appear live in multiple places at once. Real, physical performance will command a big premium. Virtual attendance at events across the globe will be commonplace. Fans will be able to attend every concert on a global tour if they want. The kind of immersive technology that underpins 3D gaming will be available to make interactive concert experiences. Different concert-goers may have different experiences of the same set, or even get to play along with the band for an extra charge! If you know the parts, why not play them yourself? Creative producers, musicians, video, FX, and artists will collaborate to make incredible multi-sensory shows. When physical barriers are removed, much more collaboration will be possible between highly skilled musicians, and it will be possible to have jam sessions with people on the other side of the globe, barring sleep-cycle issues. Bands will be able to sell “controller maps” or “performance walk-throughs” to their fans as an extra revenue stream. Sounds from the recording will be for sale, and even multi-track master access for remixing or inserting one’s own solos will be available. There is a whole world of interactive content that the best musicians will create, allowing others to interact and co-create with them. The most popular musicians will be great performers, great collaborators, great at enabling masses of people to interact with music, or all of the above.
Musicians who can make highly immersive sonic environments that adapt to different skill levels and deliver a quality experience will be as popular as any video game. If controllers are cheap and plentiful, and music education is natural and easy, then music can again be an interactive experience, not just a concert or a recording. When everyone can learn to make some simple gestures and “play music”, the world will be a richer place. Put all those people on the Internet, and some amazing new music will result!
There is a reason that all cultures use music in their worship and that top musicians often honed their craft in either churches, bars, or both. At first this might seem contradictory: what could be more opposite than a church/temple/synagogue and a seedy bar? But humans go to church or to bars when they wish to alter the condition of their inner self. Whether the goal is to relax, forget, remember, celebrate, or reflect, seeking the Divine and seeking a chemical remedy scratch the same itch. Music is typically present at both, and both venues try to have high-quality musical performers, since the music links directly to the financial health of these organizations. Why is this?
In his excellent book, Thou Shall Prosper, Rabbi Daniel Lapin explores the difference between the spiritual and physical characteristics of our task performance. Applied to the pursuit of musical excellence, there is much for us to understand and implement, for the act of making music involves both the spirit and the body. But before exploring this interconnection too deeply, it is important to understand his use of terms.
The Rabbi begins by explaining that he does not use the word “spiritual” to convey any religious connotation – it merely denotes reality outside the physical, tangible world. That such a world exists is easy to demonstrate. Our feelings may have an outward visibility at times, but they exist for us apart from any external reality. Similarly, we can destroy a book, but we have not in any way invalidated any truth it contains. The ideas are still available for others – the truth is transcendent and can be rediscovered or rewritten by another. Hence “truth” is a spiritual reality. The physical is similarly easy to understand – it is that which is externally observable. We can measure, quantify, or observe it. For the musician, our instruments are physical, and the movements of our limbs and digits to produce music are all physical in nature. A “C scale” is a spiritual concept; we can physically express it, but the number and arrangement of its pitches are not material substance.
Rabbi Lapin expounds three rules governing the interplay between the spiritual and the physical:
- Physical things can be destroyed, whereas spiritual things cannot.
- Physical things can tolerate imperfection; spiritual things need to be precise.
- The spiritual element of an event must precede its physical actualization.
We will consider each of these in turn and also consider their applicability for the musician and add a few observations of our own.
First, physical things can be destroyed, whereas spiritual things cannot. Spiritual things operate on a higher plane. For example, if I am playing music on my guitar, destroying my guitar would remove its usefulness as an instrument. But destroying the guitar would not have any impact on the existence of the music itself. I could hum the tune, perform it on a different guitar, or even transpose it to the piano or a harmonica. Even if I am not present, another musician can play it, or it could be on the radio. The music itself is of a spiritual essence, not a physical essence. Whether I live or die, music I compose can continue to exist without me. The music of Bach, Mozart, John Coltrane and thousands of other musicians still exists today, unchanged by the passage of time. Classical musicians often perform, study, and are emotionally (spiritually) touched by music that is hundreds of years old. As an aside, this is why all totalitarian regimes fail – they mistake killing the bodies of dissenters for eliminating the ideas of freedom. The ideas live on, and the truth of human freedom and the power of the spirit ultimately prevail in the physical realm as well. It is also why it is so hard to keep anything a secret – the spiritual facts related to the event cannot be destroyed.
Practically speaking, this is why artistic judgment is so important. What ideas and concepts are so worthy that they should be permanently exposed? By giving birth to a spiritual product of music, art, dance, etc., what are we giving life to? Do the concepts improve the human condition, or elevate the understanding of the mind or spirit? Does our art hold hope or hopelessness? Truth or lies? Just as there are positive spiritual elements (joy, gratitude, and dozens of others), there are negative spiritual elements (hate, greed, anger, and so on). When we create our art, what are we setting eternally in motion? What vibrations do we pick up and amplify to a wider audience? To the extent that our music represents a spiritual truth, it will contribute forever, for good or evil. The issues of artistic intent and execution are not merely something for critics to write about after we are dead, but an important extension of this rule. We are responsible as artists not just to artistically document what the prevailing culture gives us, but to rise above it and breathe life into spiritual concepts that improve the world around us.
Second, the Rabbi states that physical things can tolerate imperfection; spiritual things need to be precise. This can be understood by looking around your home. Consider the most used pot in your kitchen. How many scratches does it have inside and out? Do the scratches seriously affect your ability to use the pot? Not at all. In fact, for cast iron pots and pans, years of use “season” the pot and help it deliver better results. Now consider how often a technical or performance glitch ruins a movie or a recorded piece of music you are enjoying. Not very often. The standard for recorded performance is very high. Conversely, how much do people pay to watch first-year band concerts? Not much, if anything. No one wants to hear missed notes, botched lines, or see poorly filmed movies. If we want to listen to something repeatedly on an iPod, any imperfections become highly annoying. Why? The content is spiritual. Our spirits (i.e., that non-physical part of us) are sensitive to even the slightest spiritual imperfections. If imperfections cloud a perfect spiritual concept, they compromise the whole experience to an extent that renders the art unappreciated.
This explains why it takes so long to perform confidently and competently in public, and why we place so much value on exceptional musical performances. The content of the musical performance is spiritual in nature, but must be expressed and apprehended physically. This means that before anything spiritual can occur, all the physical must be in alignment. This includes the performance space, the amplification or lack thereof, the tuning of the instruments, and so on. It extends to all the physical training the musician has performed to be able to move freely and naturally around the instrument. Technical facility takes years of effort. But even with the physical concerns adequately addressed, there is still the matter that the content is spiritual. As spiritual beings (i.e., there is a non-material component to our existence), we are capable of understanding and expressing spiritual concepts. In practice, our spiritual condition, sensitivity, and training influence the extent to which we will be able to express those concepts in the physical. Lining up the physical and spiritual together is what brings power and credibility to an artistic performance. When they are disjointed, the impact of the work is lost or muted. Knowing how to manipulate an instrument, the music, and our physical and spiritual selves to produce the desired effect is what mastery means for the musician.
This leads directly to his third postulate: The spiritual element of an event must precede its physical actualization. For the musician and artist, this is almost self-evident. Before any muscles move, the brain must know what sound or effect it is after. It then calls the muscles into motion. Whether this is by reading from sheet music, or free improvisation, we translate some spiritual understanding of music into physical reality. Even flailing randomly about one’s instrument is still first an expression of an inner condition or decision to flail about. The physical follows the immaterial and spiritual.
It turns out that this has massive implications for the artist. If spiritual things require higher precision and must precede physical things, then should not a great emphasis be placed on development of spiritual characteristics? Could we actually expect that the spiritual content of our art might predict its success or failure? It is perhaps unsurprising that whether in India or in Germany, the most enduring music of each culture comes from religious roots. Both cultures have music that is hundreds of years old still being actively performed, even if it is now divorced from its religious roots.
It makes sense that people engaged in the pursuit of spiritual development through religious activities could expect to become more spiritually sensitive and refined. Translating this understanding into music should result in a stronger, more accurate spiritual expression. It ultimately will not work to try to express great spiritual ideals of love, truth, healing, hope, etc. while fuming with anger, impatience, laziness, and so on. They are incompatible ideals. So the greater spiritual development that occurs through religious studies translates into more durable and resonant art. Finding a place of artistic power is actually linked to understanding the role of spiritual development upon one’s art and synthesizing both spiritual and physical excellence into a consistent communication.
Spiritual things can be graded both in intrinsic excellence and in execution. It is a mistake to think that because spiritual truths transcend physical reality, they are all equally valuable and useful. Just as we can observe the difference in physical execution between the finger-painting of a child and the mature work of the “Old Masters”, some spiritual truths are so powerfully expressed that they resonate for hundreds or thousands of years. Why are Handel’s Messiah and Bach’s B minor Mass performed hundreds to thousands of times each year while other music from the period lies virtually unknown? Some works contain more spiritual content, of higher quality and of greater development, and therefore offer a more precise illumination of spiritual truth. For these reasons, we as a human race are drawn to some spiritual expressions more than others. This is exactly why some works still have emotional and spiritual impact hundreds of years later.
Practically speaking, this is why there is so much music dealing with faith, hope and love. Listen to the radio. How many songs espouse a hope for a better future or for redemption of past mistakes? How many love songs are there in recorded history? Love is the greatest spiritual concept in the Christian Bible (I Cor 13:13), and so it is not surprising to find the most songs about it. This explains why the introspective, navel-gazing kind of whining that immature songwriters sometimes produce never achieves any great impact. The spiritual content is weak, even if the playing meets expectations. My own spiritual angst is not interesting compared to contemplating the greatest spiritual virtues and exploits of mankind. If we wish to improve the impact of our art, we need to improve the quality of the ideas it rests on, as well as our ability to convey those ideas.
Consider Handel’s Messiah, a great masterwork of Western civilization – it is faith-based music about redemption from human failure, hope for the future, and the love God has for humans. It is drenched in the three most powerful spiritual concepts, and then executed with precision, propriety, and complete artistic mastery. The music perfectly matches the mood, grandeur, and themes of the chosen content. Its performance requires dozens of skilled musicians to even get the Overture expressed, let alone the whole piece. The soloists are most often professional musicians with years of vocal experience in this style of music. The choirs rehearse for months. So it is natural that it powerfully resonates its spiritual content across cultural differences and time. High spiritual content meets extensive physical preparation, and a masterwork unfolds. It is little wonder that it has been continuously popular since its debut performance!
For the musician, the potential consequences of these ideas loom large. When the spiritual content increases in value, purity, and weight, we must correspondingly increase our spiritual capacity, sensitivity, and physical preparation in order to attempt to convey our experience. The more transcendent the revelation, the more transcendent our own spiritual capacity and physical technique must be to translate the revelation into musical art. Making music is, therefore, first an inside job – overall spiritual development is equally as important as musical training.
Ironically, anyone could point out that highly effective musicians are not always instrumental virtuosos. Their effectiveness comes instead from spiritual power, channeled through merely adequate musical channels. The depth of their spiritual understanding can be conveyed outside of the music itself through non-verbal (or audible) signals. All musicians who aspire to greatness must possess spiritual sensitivity in the non-religious sense. Spiritual truths resonate wherever they find agreement. If you wish to amplify the spirit present in a work, you will have to resonate with it, and express it from the core of your being – spirit to spirit. This demands great internal consistency between music and musician, so that the message is pure and delivered with maximum potency. If the music and the musician are not aligned, the spiritual impact of the music will be degraded to the degree this is true. This resonance effect can be so strong that it can elevate our perception of musical skillfulness.
But if we are truthful, this requirement for internal spiritual consistency and the transcendent nature of the content demands a transcendent delivery. The more exquisite the truth, the more care and attention we ought to pay to its delivery. In some sense, an offering of the performance arts is an exquisitely lavish gift. The effort of a lifetime, or some significant portion of it, is poured out in a few minutes. We can record the experience to capture some of the magic, but nothing matches the one performance. It is a unique and precious thing. For some, it may be the only experience they ever have with you or with that particular work. Performance art is often a “once-in-a-lifetime” event. For these reasons, our handling of the matter deserves our best in physical preparation. A few botched notes, or a poorly timed entrance, can snap someone out of spiritual reverie to focus on the merely mechanical aspects of the service. Long hours of practice and dedication to craft are the correct response to understanding this need for transcendent delivery.
Ultimately, this interplay between physical preparation and spiritual discernment becomes the primary work of the musical artist. Both must meet together in disciplined harmony. When the two are fused, the power of music to directly influence the soul combines with the innate spiritual capability of all humans to engage in spiritual activity, and transcendent spiritual experiences become normal. When this becomes the normal response to our art, we can be confident that we have reached a significant level of mastery.