Hold tight, this one’s gonna get nerdy.

Let’s take a little trip in the way-back machine to the dawn of the desktop computing era, that time period that we seem to be incapable of escaping: 1984.  It was in 1984 that Steve Jobs famously unveiled the first “modern” computer, the prototype forerunner of all we use today, the original Macintosh.  It wasn’t the first GUI, but it was the first commercially successful application of the idea that a computer has a mouse and a desktop and icons and a What You See Is What You Get user experience.

The original operating system for this original machine was primitive on every level.  It ran on a 400 kilobyte floppy disk and that included the ability to do text-to-speech.  A marvel of engineering and design, yes.  Influential like The Beatles? Yes.  But a strong foundation for future computer operating systems?  Hell no.

The original Macintosh System Software had ground-breaking interface design and clever engineering to do a lot with a little, but ten years after that original launch the world had changed a lot.  Now it was 1994, and in the decade following that first “hello,” Microsoft had turned Windows into a Mac competitor on IBM PC compatibles while Apple had squandered its first-mover advantage and become something of an afterthought.

One lesser-known, but massively important, thing happened during the decade of Windows’ rise to prominence: Apple fired Steve Jobs, shortly after the Mac launched.  Jobs then started a new company called NeXT (he also bought Pixar from Lucasfilm and created that whole company, which is, like, the hugest footnote to a career ever, but I digress) and NeXT needed an operating system for their cool new computer, the Cube.  Jobs didn’t want a clever bit of under-engineering (à la the Mac System Software) for his Cube.  He wanted what the big boys had been using since the late 1960’s: Unix.  So, he built an operating system on an open-source Unix variant called BSD Unix (the Berkeley Software Distribution).  The resulting operating system, NeXTStep, was not a commercial success and neither was the company he founded.

Though not commercially successful in the way Jobs intended, two big things can be laid at the doorstep of NeXT and Jobs in this time period.  First: the guy who single-handedly created the World Wide Web, Sir Tim Berners-Lee, did it on a NeXT computer.  So, by 1994, when we suddenly had the blossoming Web, when the internet started becoming a part of the culture, it had been born on NeXT.  If that had been the only contribution of NeXT to the world, it would have been enough, but another thing happened.  Apple, struggling to avoid bankruptcy, bought NeXT, which brought Steve Jobs back (soon to be interim CEO), and one of the first things he did was turn NeXTStep into….  tada!  Mac OS X.  Which then begat iOS and all the other flavors of Mac OS X.

If you own a Mac, if you own an iPhone, if you own an iPad, if you own an Apple Watch, or an Apple TV, or any Apple product made in the last 15 years except a clickwheel iPod, you have used the current iteration of the NeXT platform that Jobs launched after being fired by Apple back in 1985.  This also means that you have been using BSD Unix under the covers, whether you realize it or not.

Another 10 years later, 2004, and this was all obvious.  The WWW had taken over the world, Apple was back, Mac OS X had launched and was headed towards success, Jobs was plotting the iPhone and iPad.  This is all history.  However, back in 1994, the big news was actually coming from Microsoft and the impending launch of Windows 95.  So let’s talk about that for a minute.

Windows was second to market with a GUI, and not technically superior to the original Macintosh operating system, but unlike Apple, Microsoft was determined to do something about it.  The previous version of Windows (3.1) was a 16-bit shell that ran on top of DOS.  Windows 95 was going to be a full 32-bit operating system like OS/2 Warp (Google it, it was a thing at the time) and, what’s more, it was going to include a new Microsoft dial-up service called MSN that would compete with AOL and CompuServe (no, it didn’t include internet access yet, they missed that one).  In 1994 we all knew it was coming but we didn’t get it until August 1995.  I should know, I bought a copy on launch day.

Windows 95 did change the world.  The user interface conventions we take for granted on modern Windows computers all started with Win95.  It was to today’s Windows what the original Macintosh was to the modern Mac, from a user-interface convention perspective.  What it lacked, just like the Mac of the era, was stability.  And that’s where the real story begins, not with Windows 95, but with the REAL progenitor of the modern Windows computer, a totally different thing called Windows NT.

Now, you might be thinking, I have heard of Windows 95, and Windows 98, Windows XP, etc., but what is Windows NT?  I’ll tell ya.  Windows NT, initially released in mid-1993, was a version of Windows that was designed around a new operating system kernel, the “New Technology” kernel.  A kernel, BTW, is like the heart of an operating system.  It controls the reading and writing of data, the execution of programs, communication with devices, all that stuff.  It is not the part you see on screen, with the windows and icons and stuff.  All of that is just the graphics.  So, back to NT.  The first few releases were intended for servers, not desktops, where they wouldn’t be asked to run games or general productivity applications and would also be expected not to crash.  During the second half of the 90’s, Windows lived two lives.  There was the one that normal users had (95/98/ME) and there was NT (which most users never even heard of).

The first time most normal users got their hands on Windows NT it was going by a new name: Windows XP.  The years leading up to Windows XP had allowed Microsoft to develop a strategy (“compatibility mode”) so that apps written for non-NT versions of Windows could run on NT, thereby allowing them to migrate to a more robust core for their operating system, the exact same thing Apple was trying to do with OS X.  In the case of Windows the core was built around the NT kernel; in the case of the Mac it was built around the BSD Unix kernel; in both cases the goal was to get users off the crappy 80’s foundation and onto something reliable.  You can have religious wars about the NT kernel vs BSD and the user-interface choices made by Apple and Microsoft, but in general, these two platforms making these major shifts created the operating environments for most of the devices we all use today including laptops, smartphones, and tablets.

Now, I’ve intentionally left something out of this picture and it’s a doozy.  Way back in 1991, a student at the University of Helsinki, Linus Torvalds, was learning operating system design with a Unix clone called MINIX and he was annoyed by its limitations.  So, he made his own, he shared it on the fledgling internet, and the snowball was pushed from the top of the mountain.  His creation, eventually dubbed Linux (named for Linus himself), has steadily grown and improved and spread throughout the known computing universe.  By some estimates, over 90% of the servers on the internet run Linux.  In the world of servers and other computers that normal users don’t touch, Linux is the king.  You use it every day that you go online, and you probably don’t know it.

And for Android users, this is even more true.  Do you have an Android phone, tablet, or watch?  Guess what….  The NT kernel -> Windows.  BSD Kernel -> macOS/iOS.  Linux Kernel -> Android.

So, NT was designed so Microsoft could compete with Unix in the server business, but instead it became XP and (eventually) Windows 10.  BSD Unix was used by Steve Jobs to make NeXT, which became Mac OS X.  And Linux, which was a clone of Unix, took over the server business instead of OS X or NT, and now it’s at the core of almost every mobile device not sold by Apple.

At the end of the day, Unix-style operating systems are OWNING and even Microsoft has figured this out.  Microsoft came out as huge proponents of Linux several years ago, principally spearheaded by the head of their Azure division.  If you don’t know what Azure is, that just means you aren’t a professional software developer.  It’s not a consumer product, it’s a server thing for people to run their apps on the internet, hosted by Microsoft, and it’s extremely Linux-friendly.  Likely, the folks at Microsoft realized they would have no choice but to support Linux if they wanted to have a cloud-server product, since almost the entire server side of the internet is based on Linux.  And they were right.  The guy at Microsoft who ran the Azure division was a fella named Satya Nadella, and if that name rings a bell it’s because he is now the CEO of Microsoft, having replaced Steve Ballmer, who replaced Bill Gates.

OK, so, the guy who brought Linux into Microsoft is now running Microsoft, so what?  Where are you going with this, Sutter?  Well, remember how NT was a server thing and then became the new kernel for Windows a few years later with XP?  Well, there is increasing reason to believe that NT might be heading towards being replaced with, you guessed it, Linux.

A couple of years ago, Microsoft introduced a new Windows feature called the Windows Subsystem for Linux, or WSL.  WSL allowed a user to run a Linux environment within their Windows environment instead of dual-booting.  I tried it out and quite honestly I couldn’t see a use for it.  If I wanted to run Linux, I could run a full Linux environment.  If I wanted to make Windows more Unix-like there were a number of ways to do that.  WSL seemed like a solution in search of a problem.  But then they came out with WSL 2 (aka: Electric Boogaloo) and things got more interesting.  To radically over-simplify: version one created an environment where calls from native Linux applications got translated to the Windows core.  In version 2, Windows now ships and runs an actual Linux kernel of its own (in a lightweight virtual machine under the hood), no translation.  They have even announced support for Linux graphical applications (WSL was only a command-line thing before).
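To make “no translation” concrete, here’s a tiny sanity-check sketch of my own (not anything Microsoft ships) that you could run inside a WSL 2 distro.  Python sees a genuine Linux kernel, and the kernel release string typically carries Microsoft’s branding, which is the giveaway that Windows is hosting the real thing:

    # Minimal sanity check (assumes it is run from Python inside a WSL 2 distro).
    # WSL 2 exposes a real Linux kernel, and its release string typically
    # contains "microsoft" to mark Microsoft's build of that kernel.
    import platform

    info = platform.uname()
    print(info.system)    # expected: "Linux"
    print(info.release)   # e.g. something like "5.x.y-microsoft-standard-WSL2"

    if "microsoft" in info.release.lower():
        print("Yep: Windows is hosting an actual Linux kernel (WSL 2).")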

This is starting to sound familiar…  Mac OS X anyone?  Or perhaps Android?  Mac OS X is a graphical shell designed by Apple that happens to run on top of the BSD Unix core.  Android is a graphical shell designed by Google that happens to run on top of the Linux core.  It isn’t an insane leap of logic to envision a world in which Windows becomes a graphical shell designed by Microsoft that happens to run as a process on top of the Linux core (rather than the existing NT core).  The current evolution of the Windows Subsystem for Linux, the entire Azure cloud offering, and the fact that Satya is the CEO all point in this direction, potentially.

What this would mean is that we would have reached a point at which every major operating system is some variant of Unix.  Mac, Windows, Android, iOS, watchOS.  Microsoft already made the startling decision to give up on developing Internet Explorer and Edge and to instead make a new version of “Edge” that is, at its core, Google Chrome (which itself is based on WebKit, the open-source HTML rendering engine that started on Linux in the KDE desktop and was adopted by Apple for Safari, then by Google, and now by Microsoft).  They’ve learned that nobody cares about the engine under the hood, they care about the look and feel and the apps they can run.  A Linux-powered iteration of Windows might seem like a leap, but frankly, it’s not.  People made the leap from Windows 98 to Windows XP via emulation and compatibility layers.  The same transformation today could be done less painfully thanks to existing open-source compatibility layers like WINE (software that lets you run Windows apps on Linux without Windows).  The merger of Win and Lin seems almost inevitable.

And I’m not the only one saying it.  Open source pioneer Eric S. Raymond has recently posited the same idea (http://esr.ibiblio.org/?p=8764).  I, for one, hope this trend continues.  With the release of macOS Catalina, Apple has taken some unprecedented actions towards making OSX the most closed, most proprietary, least free computing platform ever built, a platform in which no software that is not blessed and sold by Apple will even be able to execute.  Their pending move to making their own processors will make them an even more radically closed platform, as their hardware too will be strictly proprietary.  As a proponent of open source, freedom to repair, and the like, I can’t condone the purchase of such disposable and proprietary technology.  The Windows NT kernel has never been my favorite, I’ve always been a Unix guy at heart, so the idea that Windows might finally transform into Yet Another Unix Variant is one of the better possibilities I’ve run into lately.  Bring it on, Microsoft.  If this is how the Unix desktop finally conquers the market, as bizarre as the road traveled may have been, I’m ready for it.

I had a bit of an epiphany last night.  I don’t know if it’s particularly profound, but I feel like my eyes were opened to a few truths that I have long known and simply forgotten to apply in my life.

First thing.  I have been a music person my entire life.  I have listened to music, thought in music, sung to myself in the car and in the shower, written my own songs, recorded music, performed music, learned instruments, collected music, obsessed over music.  I know people who maybe own two or three CDs and casually listen to Spotify, you know, normal people.  In contrast, I have literally thousands of albums in numerous formats: vinyl records, shellac 78s, CDs, cassettes, reel-to-reel tapes, digital files, you name it.  OK, I don’t have any 8-track carts, gotta draw the line somewhere, but I do actually own a functional hand-cranked Columbia Grafonola record player.

I’ve personally been involved with and worked on the recording of at least 40 albums or singles as either a performer, engineer, producer, or sometimes all of the above.  I have a recording studio in my basement.  I own dozens of musical instruments.  Guitars, basses, drum kits, keyboards, horns, accordions, slide whistles.  Hell, there is a documentary being made in which my musical endeavors and life’s work feature prominently.

I say all of this to highlight the fact that you would be hard-pressed to find a person whose life is more obviously centered around music, which makes it all the more strange to me that I’ve been so out of touch, emotionally and professionally, with music for the last few years.

I have played in several bands and participated in the documentary, but I haven’t released a new album of original music since a minor acoustic EP that I recorded in a day back in December 2014.  I used to wonder if something was wrong with me if I didn’t release an album a year, at least, and I’m now coming up on six years with nothing to show for it except for the memories of some gigs played, a handful of unfinished projects and a few one-off songs or videos.  I have written and recorded things but I just haven’t been able to get into any sort of rhythm (pun intended) with my musical life.  I think that’s because I haven’t HAD a musical life.  Instead, I have been knee-deep in the Miasma and it’s killed my sense of joy, wonder, and creativity.  At the same time, as a listener, I have allowed music to become background wallpaper to my daily life instead of truly engaging with it, appreciating it, eating, sleeping, and breathing it as I used to do.

The Miasma is a term I recently acquired from the book Fall; or, Dodge in Hell by author Neal Stephenson.  It is the catch-all term for the cultural wasteland of insanity, trolling, confirmation bias, misinformation, distortion, propaganda, bad blood, viral marketing, and lowest common denominator garbage that the modern internet has descended into.  Everything about the public discourse, the endless doomscrolling, the sheer end of the world nihilism of late stage capitalism, authoritarianism, stupidity, violence, and (bonus!) a global pandemic, it’s all so disheartening, so maddening, that turning on a television, reading a newspaper, looking at a social media feed, or visiting nearly any part of the internet for any reason is guaranteed to make whatever mood I am in worse.  Good moods become bad moods, bad moods become dire.

Instead of using music or meditation or poetry or art or any of the other tools at my disposal to counter the effects of the Miasma, I have fallen into an engagement trap based on the fact that, at one point, I used to love the internet.  I did.  I believed in it.  I thought it was a net-positive for humanity.  In the world before the web, communities were more physically isolated, knowledge was harder to access, there was much more terra incognita.  The promise of the web and the connected digital society as laid out by luminaries like Ted Nelson, Vannevar Bush, Nicholas Negroponte, Alan Kay, and even Steve Jobs was so appealing.  It was almost like a second Enlightenment Age dawning.  All the world’s knowledge available, all the communications barriers broken.  How could this be anything other than an Objectively Good Thing?

Well, as it turns out, every silver lining has a cloud.  As it turns out, people were not historically hostile and tribal merely because of limited communications technology or limited access to information.  People are hostile and tribal because they have been made that way through billions of years of natural selection.  They require almost no incentive whatsoever to pick sides and develop animosity towards each other.  Kurt Vonnegut nailed it with his granfalloon concept.  Thanks to this programming, hyper-connecting all the people was always going to mean that the people who thrive on rancor, discord, and negativity would have louder voices and more power to shape our culture than they did before.  Capitalism, which naturally goes where the market leads, would naturally find ways to monetize and stoke this hostility and division in order to make money.  Religions and political parties would do the same, feeding the flames to advance power and agendas.  These are not new forces in human society, they existed as far back as written history records and likely much further back.  It turns out that the previous limits imposed by geography, technology, and access to information were also holding some of our tribalism and collective insanity in check by channeling it into narrow and somewhat isolated outlets.  That is no longer possible.  Thanks to the democratizing power of the internet, we now have all of the foibles and ridiculousness of our species running amok, unfettered, unchecked by any force, Enlightenment 2: Electric Boogaloo has given way to Idiocracy 2: Boogaloo Now Means Race War.

But wait a minute, I hear you saying, wasn’t this post about music?  Yeah, yeah, I’m getting there, it’s my blog, I wanna take a while to get to my point, that’s my prerogative.  Keep your shirt on.

OK, so, the Miasma was probably inevitable, in retrospect, but I didn’t anticipate it.  I believed, perhaps too strongly, in the positive and empowering aspects of the always on, hyper-connected, society.  I thought it would lead me to more creativity (more ability to share what you create is good, right?), more human connection (all my old friends are here, that’s gotta be good, right?), and all the old hassles of primitive technologies would be rendered obsolete by the wireless, simple, one device to rule them all vision of the smartphone as digital camera, digital music player, GPS, movie player, social life, VR headset, internet information appliance, dessert topping, floor wax, etc.   I was an early adopter.  I was a proponent.  I was a fan. 

I was wrong.

The all-in-one device is a marvel of convenience, but it makes focused attention on any one single thing of value extremely challenging.  Always being connected is great for knowing where to find a gas station while driving in an unknown area or for settling a bet about a piece of trivia with a friend, but it creates a constant psychological drag on the real-world experience of everyday life because you often feel compelled to use it just because it’s there and you’re bored for 5 whole consecutive seconds.  A globally connected platform for delivering creative work to audiences is theoretically empowering for artists, but since everybody throws everything out there, nothing feels special or unique or lasting; almost everything feels ephemeral, transitory, meaningless, like a night at an open mic where the entire audience is on stage at once, talking at the same time.

In the Miasma, all of these things that hypothetically could have been enriching, empowering, and inspiring have mostly turned to shit.  Devalued, corrupted, monetized, destroyed, and we as a society have been lessened to the extent where Donald Fucking Trump actually became President of the United States.  Think about that.  As far back as the 80’s that would have been the punchline to a joke about America failing as a country and IT.  ACTUALLY.  HAPPENED.

I can chart my decline in creative interest and output on a graph (yes, I’ve actually done this on paper) and it directly correlates to the rise of the post-Facebook/post-iPhone Miasma version of the internet.  My flagging interest in saying anything whatsoever to the world at large, my increasing disinterest in my OWN MUSICAL WORK, my general sense of despondency about anything, or anyone, anywhere, truly mattering at all, my ever deeper struggles with the blank page or blank tape, it all correlates perfectly to the amount of time I have spent online since the New Enlightenment turned into the Miasma. 

The question is, what is a boy to do?

The internet I fell in love with is gone, for good.  The world I grew up in is radically changed.  No use looking backwards, it is what it is.  I can limit my online time, work on my mindfulness, and swear a lot, but I can’t undo what’s been done.  This is where my job is.  This is where my friends are.  This is how the music and tech industries function.  If I want to work in technology and/or be a creative, I can’t pretend the cultural landscape is what it was 12 years ago.

I think the answer, ironically(?), is hinted at in trends I am beginning to encounter in the habits of the generation raised with hyper-connectivity and social networking since childhood.  They are not enamored of apps and smart things, they don’t think they’re especially cool or interesting, and they don’t inherently think the digital stuff is better or worse than what came before it.  It’s all just tech.  This is why a lot of people these days are, apparently, rediscovering mixtapes made with actual cassettes.  I did not foresee cassettes coming back, but they are.  Why?  Making mix tapes with your own voice and choices of songs was fun when I was a kid and it’s still fun now.  Who cares that you can listen to the same songs on your phone on Spotify?  That doesn’t feel unique like a tape does.  Another example: my niece became obsessed with typewriters at age 12 despite having a smartphone and tablet.  People who didn’t experience the migration from analog to digital to networked are not inherently biased against the old tools and can even appreciate their quirks and limits, but mostly they appreciate the physicality, the reality, of analog.

The Miasma is an endless stream of mostly negative messages masquerading as news, relationships, and information which is tailored to hook you, personally, and to shape your world and your view of it.  Unconnected technology only puts out what you put into it, there is no agenda, no secret influencers.  Maybe the way to get creative again is, in part, to only use tools and technologies that don’t try to influence my behaviors. 

And while I do think that’s a part of it, the real insight I had is that the flip side of the Miasma is how it makes you, me, everybody who participates, into both influenced and influencer.  We are all trying to culturally signify our alignments, beliefs, and affiliations.  We are all posting selfies and liking posts and crafting a semi-public persona as a type of performance art.  This is not an environment that fosters or encourages actual creativity.  In fact, it’s an active impediment because it creates the illusion of creativity.

Taking a photograph and applying some funny filter to it or cobbling together a meme is an act of creation, sure, but it’s more craft than art.  It’s more like making a hand-print turkey painting than it is like writing a confessional poem.  These types of minor creative output are mostly imitative or derivative, and the primary value is amusing other people.  These are all performance, but not all art is performance.

I recently read something written by Jeff Buckley in the liner notes to the posthumously released collection of material he was working on at the time of his death, “Sketches for My Sweetheart the Drunk”.  He wrote the following about his songwriting:

There is also music I’ll make that will never-ever-ever be for sale. This is my music alone, this is my true home; from which all things are born and from which all my life will spring untainted and unworried, fully of my own body.

And this is something I have known for a very long time but I have let myself forget, the simple basic fact that you need to create first and foremost for your ears alone, for your heart alone, for your soul alone, if you want to have a home to share with others.  You can’t make that kind of art with the thoughts, feelings, opinions, or judgments of other people in mind.  You can’t be wondering if they will like you or what you have to say.  It’s not about them.  It’s the opposite of performance.  It’s self-exploration.  The more my life has become about the performances and manipulations of the Miasma, the more I’ve come to critically judge my own work and the less free I have felt to just play, explore, experiment, and enjoy the process of making music that nobody will ever hear.  I’ve been laboring under the false feeling that if I make music that I don’t think is “releasable” then I shouldn’t have bothered to make it.  When I was in high school sitting cross-legged on my bed with a four-track recorder recording ambient soundscapes about Tony Bennett or swarms of bees I wasn’t worrying about anybody hearing me or caring what I was doing…  I was having fun.

Fun.  Yes, fucking FUN.  Where is fun in 2020?  Where is joy in 2020?  Where is there joy to be found in the endless doomscroll of the Miasma or the viral marketing hellscape or the endless disgusting behavior of the bigots and fundamentalists or the constant manipulation of influencers and trends and memes and the barrage of messages and notifications and micro and macro time sinks of modern life?  I’ll tell you where it is.  Nowhere.  Missing in action.

And there, ladies and gentlemen, there is the key in all of this navel gazing.  Without fun, without joy, even the joy of painful catharsis (and yes, there is joy to be found in working through painful emotions, just think of the joy of relief when you remove a really bad splinter), what are you sharing?  What have you got other than an empty “look at me”? 

I’ve let the Miasma train me.  I’ve let it get me focused on publishing, producing, consuming and being consumed, constantly trying to drink a bottomless pool dry, and neglecting the square one of unplugging, playing, doing things just because they are interesting, making music for nobody else to hear, remembering that the bad news will still be there whether you look at it or not but that your soul won’t be if you don’t look after it.  When was the last time I just put on a record and listened to it without also being online?  When was the last time I picked up a guitar and just made something up with no plan?  When was the last time I turned away from all screens, tablet, television, phone or e-reader, and just lived in the world of the actual senses?

I am not sure.  I know that my entire life was spent in real space up to a point, and then it started digitizing, and it eventually wound up twisted around this shared online fiction we now call a culture, but the answer is not about “going back”, it’s not about “disconnecting”, it’s about remembering that the Miasma cannot provide meaning, it cannot provide true joy, but music can, real life can, and if I want to find that again, I need only remember how to play, how to write for myself and myself alone, and then to make a conscious decision to stop participating in the endless performance.

In my life I have generated and hoarded a lot of media.

Audio recordings, photographs, written documents, presentations, software, video, film…  It’s a little overwhelming.

I find it overwhelming in part because I’m a bit of a pack rat.  I never want to throw away anything that I might want later.  The longer I live the more cluttered my hard drives and shelves get.  There are literally hundreds of gigabytes of files and hundreds of physical items.

For the better part of the last decade I have struggled, unsuccessfully, to find a system for cataloging, organizing, and (most importantly) ARCHIVING all of this media so it stops cluttering up my life but doesn’t disappear from it.  I have tried many systems but they all break down relatively quickly.  Either they become too organizationally complex or the media itself becomes unreliable or I simply lose track of what has already been archived versus what has yet to be gone through.  This actually stresses me out.

Yeah.  I’m not normal.

“The Cloud” won’t work for me.  Too much stuff to deal with and paying for ongoing storage is not something I want to do.  What I want is a system that is:

  • Simple
  • Permanent
  • Affordable
  • Easy to retrieve media from

It would help if it also assists me by letting me find duplicates, tag content with metadata, and all that stuff, so when I go looking for that scanned baby picture from 1995 I can actually find it.
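Just to illustrate the duplicate-finding piece, here’s a rough little sketch of my own (not part of any product I’m describing, and the folder path is a made-up placeholder): hash every file under a folder and report the byte-identical copies.

    # Rough sketch of the "find duplicates" idea: hash every file under a
    # folder and report byte-identical copies.  The path below is a placeholder.
    import hashlib
    import os
    from collections import defaultdict

    def sha256_of(path, chunk_size=1 << 20):
        """Hash a file in chunks so huge media files don't blow up memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def find_duplicates(root):
        """Map each content hash to the list of files that share it."""
        by_hash = defaultdict(list)
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    by_hash[sha256_of(path)].append(path)
                except OSError:
                    pass  # unreadable file, skip it
        return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

    for digest, paths in find_duplicates("/path/to/media").items():
        print(digest[:12], *paths, sep="\n  ")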

I think I may have finally found it, the system I can rely on until I die, and here’s what it is.

There is a new type of recordable disc called an M-Disc (http://www.mdisc.com/) that has a DoD-tested shelf-life of approximately 1000 years and these discs are available in DVD and Blu-Ray formats ranging from 4.7GB to 100GB of storage.  They require a special drive to record them, but once burned can be read by any normal DVD or Blu-Ray drive.  They literally etch the data into carbon.  So, I’ve gone ahead and ordered myself an M-Disc Blu-Ray burner that can do BDXL (up to 128GB per disc).  Unlike flash drives, hard drives, CD-R, DVD-R, tape backups, or any other form of media I’ve ever used, these discs should be readable for the rest of my life, and the life of my child and any succeeding grandchildren I may one day have.  I can burn the data and never think about it again.  Ridiculous, right?  Maybe.  I don’t know.  I really value a lot of music and photography taken by people I’ve known and loved who are no longer alive.  I’m glad it still exists.  I will mentally rest easy when I know that all the media that really matters to me is permanently preserved.

Except…  except I still need to be able to find it and indexing and sorting hundreds of thousands of pieces of media is hard.

I’m not the only person to ever have this problem.  There is a class of applications out there called disc catalogers.  They index the contents of removable drives so that you can search the contents, find what you want, pop the disc in, and get the file.  I’ve used a few.  They all start to choke when they get to catalogs of any serious size.  I had given up hope but then I did some searching and found this article.  Apparently there is a Holy Grail on this front and it’s called NeoFinder.

Next week I hope to reach a point where I’ve finally got a permanent system and I can start offloading the massive quantities of media choking all my drives and cluttering up my life.  I’m going to archive it, catalog it, and delete it if I don’t really need it handy.

I’m basically drooling right now, I’m so excited.  Have I finally found The Grail?  Is the combination of 1000-year 100GB optical storage and The Ultimate Cataloging Application finally going to solve this problem for me?

I feel like it will.  I’ll report my results when I have them.

I woke up Sunday morning with the strong conviction that it was a day for recording music.  I poured some coffee and adjourned to The Nuclear Gopher Too1, as the sign reads on the door to my basement.

Important pre-requisites for a recording session at NG include comfortable footwear (preferably slippers), a coffee mug (there is a K-cup machine in the corner so you don’t need to BYOC), and most importantly, most vitally for true productivity, coming into the session with no clue whatsoever what you are planning to do.  This is a long-standing Nuclear Gopher tradition and explains most of the albums The Lavone recorded.

So, I’m kinda Buddhisty (it’s unfair to actual Buddhists who attend sanghas and follow a school or lineage to call myself a proper Buddhist) and I practice meditation.  The Buddhist term “monkey mind” is something I have great familiarity with and I have learned via practice that it can be a great help to spend a little time wrestling with the monkey when you want to create something.  When your brain won’t shut up, odds are you have something you might want to say if only you listen, and that could be the basis for a song, maybe even something as brilliant as “Shaq’s Been Traded to the Phoenix Suns”.  If you’re very very lucky.

The process then, is thus:

  1. Sit with coffee and slippers on
  2. Find paper and writing utensil
  3. Start writing crap until non-crap appears
  4. If the non-crap is non-musical, keep going, you’re hunting wabbits
  5. If the non-crap is musical, write as much of it as flows naturally and then go find a musical instrument that seems appropriate and try to play the nascent song
  6. If the song seems to pick up steam, keep at it; if it peters out, go back to Step 3 and write something else or Step 5 and try a different instrument
  7. If you have chords and words and you can play the song in some way, it’s time to record!

This part of the process was easy yesterday.  Like, 10 minutes.  Lovely.  There wasn’t a song, and then, suddenly, there was a little bitty baby song.  Nice.  I had to try a couple different guitars and a keyboard before I managed to figure out what I needed to do to write the music but it wasn’t bad.  Excellent. Time to record the little bugger.

Starting a recording when you are working totally alone and have to be songwriter, performer, engineer, producer, roadie, coffee maker, AND stop yourself from checking Facebook or playing Tetris is partially science, partially art.  The tiniest bit of triviality can derail all your mojo, like, “Oh, I don’t know if I can drum this, plus setting up drum mics is a PITA, plus there are all these loops in this software I could use, and hey I have this digital drum kit, and damn I’m hungry, maybe I need some toast…”  Three hours later you have forgotten the song you sort of wrote.  Therefore it is my strong opinion that you treat developing a baby song like building a fire with damp tinder on a cold day.  You need to nurture the process in the early stages, keep at it, don’t let it die out, because it will and you will wind up with damp sticks instead of a blazing fire with toasted marshmallows.  Perhaps the metaphor has gotten away from me, but still, a song may start with a riff, a lyric, an idea, a metaphor, a feeling, a piece of cool gear that makes a noise that hurts your hair, but it’s not a SONG yet.  It’s the potential for a song.  The idea behind developing material by recording it is to build the song to find out what it is.

I was nearly sidetracked in the early stages yesterday, but happily I decided to just PSTFDOT (Put Something The Fuck Down On Tape).  That something was the rhythm guitar backbone of the song, recorded through a DI box along with a metronome.  In the process of doing that, I figured out the song structure.  I had written two verses and a middle part, so verse/bridge/verse was the obvious song structure.  But I thought maybe I might write more verses or something so I decided to go verse(lyrics)/verse(musical)/bridge(vocal)/verse(musical)/verse(lyrics), which would either make room for another verse or would make a cool kind of palindromic structural symmetry.

Equipped with a song structure, a draft of some lyrics, and a mostly accurate performance guide guitar and metronome track, I plopped some cans on (us recordists call headphones “cans”, but in my case it was literally two cans of mock duck strapped to my head, as is customary to do in my country) and I sat down behind that intimidating beast…  The Drum Set.  After replacing the mock duck with actual headphones, I set about composing the drum part, which consisted of hitting things, swearing, wishing I was a better drummer, clicking repeat, and ultimately reaching a sort of zen space in which I could practice non-attachment in relation to perfecting my drums on a song I would chalk up to a demo and probably re-record and most likely would just replace my drums with Battery 4 MIDI stuff anyhow and god dammit.

Once I had successfully drummed the part twice in a row without screwing it up too badly, I went into engineer mode.  This consisted of setting up the drum mics.  Now, everybody says miking drums and getting a result you don’t hate in a small studio is super complicated.  Especially without sound treatment in an unfinished old farmhouse basement with bumpy limestone walls.  But here’s the thing: digital plugins can hide a multitude of sins and if you’ve experimented enough to know your gear and you keep it generally simple, it can be done.  Over the last couple years, I’ve settled on a basic 3-5 mic approach that works for me.  Details in the footnote2.  After setting these up, along with the laptop/mixing board back behind the drums where I can hit record/play, I laid down the drums.  It was definitely less painful than it has been in the past.  I got it in maybe 6 takes.

Now, going back to my baby fire analogy, getting from “I think I should make music today” to “lyrics written and guitar and drum tracks recorded” is like moving from “shit it’s cold” to “how could you forget the marshmallows again?”  It’s great.  Momentum starts to take hold.  There’s, like, an actual song there.  It’s not done, and there are still 23 Pictures of Adorable Wallaby Babies on Teh Internetz but you’ve got something.  You’re not just feeding pine needles to matches and cursing your mother for bringing you into the world.  This is when you remember why you have this stupid hobby.  Because it’s FUN.

At this point a new phase begins.  The phase of OPTIONS, oh so many options.  This is the part where you can be like “Zither!  I need zi… wait, no, how about I plug my guitar into the waffle iron..  or, no, wait, SYNTHS!  I downloaded this awesome soft-synth with 110 virtual buttons and knobs that combines the Rokorg Moogaphonaprophet SEM-80 with the MiniBooger Whapdoodle Modular 17-Voice and it has a preset only dogs can hear!  Let’s try that!”  The thing is… if it makes noise you can record it.  And maybe you should, but taking a minute to find your coffee cup, take a deep breath (and a swig of the coffee that is now cold because you forgot about it earlier), and seriously deciding what you might be aiming for is usually helpful at this stage.  I opted for vocals.  I knew that part was going to have to happen, I wasn’t sure what else, so I figured that maybe filling out a known piece of the puzzle might bring clarity to the rest when it happened.

I plugged in a large condenser mic and, while still standing behind the drum set, laid down a vocal track, then a double of it, then a harmony on a couple parts, then a double of that and, voila, vocals.  What then?  Piano?  Keys?  Bass?  I wasn’t sure.

I resolved the dilemma by experimentation.  First I tried some synth pads, nope.  Then some synth bass.  Uh uh.  I thought about taking out my bass guitar but wasn’t in the mood and it was several feet away from where I was standing, so…  I tried some sampled strings.  Nope.  Horns.  Nope.  Grand piano…  Grand piano?  It was working but I wasn’t sure how I wanted it to go and I didn’t want to compose a piano part quite yet.  Backburnered the piano.  Then I remembered I recently acquired an ancient Crumar Roadrunner digital piano from the 80’s.  I decided to try that.  The piano sound of it was wrong for most of the song but I liked it on the bridge.  Even more importantly, the bass sound was excellent.  I worked up a bass part and started recording it.

This particular instrument lived in a shed for years before I bought it off Craigslist.  It is filthy, and has many keys that don’t work.  It was also out of tune.  I managed to adjust the pitch to get it in tune, and the keys I needed seemed to work so I started tracking.  I had the whole part nailed except for one flub and decided to delete that track and take another go at it and at that moment the E-flat on the bass portion of the keyboard stopped working.  I needed E-flat.  Damn.  I could no longer play the part I wrote.

I was bummed until it hit me that I might have a fallback.  This is where taking stock of your gear can save your ass.  I had recently made a list of all instruments I have, as well as the “virtual instruments”, namely, emulated keyboards and softsynths I have in software on my MacBook or PC.  I entered it into a Google Drive spreadsheet.  I also cataloged all the modeled guitars and amps and all the effects plugins and what they do.  I still have to go through guitar effects pedals and emulations.  Anyhow… I knew there was some Crumar stuff in the list so I looked and, sure enough, the Roady (bass and e-piano!) was sampled in the Retro Machines plugin.   I pulled out a USB keyboard and brought up the Retro Machines thing and sure enough, there it was.  Practically indistinguishable from the real thing.  It sounded exactly the same as my real Roady, but less noisy and with keys that all worked.   I got the bass and electric piano parts I wanted in two more takes.

At this point it was 12:15 and I realized the Vikings and Packers were facing off upstairs on the television machine.  I had enough song recorded to trust that I would be able to return and complete, so I went upstairs to eat and watch the game.  The Vikings lost, so maybe this was a mistake, but you live you learn.

Food and a break gave me the energy to come back at the song and revisit the grand piano.  I opted to use it, but sparingly.  Then I felt like those musical verses on either side of the bridge needed some sort of lead part…  analog synth?  Guitar?  I wasn’t sure.  I tried the synth first and couldn’t find anything I was happy with, so I plugged my Les Paul into an over-driven tube amp head turned down to 5W of output, running into a cabinet close-miked with my SM-57.  I also ran the signal via DI to a second track on my DAW in case I might want to re-amp later.  If none of that made sense to you, you’re probably not an audio engineer.

I did four takes of lead and I wound up panning my two favorites to opposite speakers so I wouldn’t have to choose between them.  The resulting dueling guitar solo thing made me happy, even if I hadn’t planned it that way.  Finally, I figured out what I wanted to do with analog synths.  I wanted something nasty and sawtoothed during the bridge and bookend bridge guitar solos that would feel a little like the Mellotron from Watcher of the Skies.  Like a pad, but one that was a bit discordant and ugly.  I patch surfed until I found something that fit the bill in Arturia Analog Lab and then that was done.

It was at this point that I realized I had forgotten something.  I had built the song up and up without ever replacing my humble little initial guide guitar track.  I originally recorded an electric guitar through a DI and I really wanted an acoustic, so I pulled out my Martin and tuned it up.  Only problem was, the dogs upstairs were barking like crazy.   I was afraid to mic an acoustic and wind up also recording Barky Bark and the Furry Bunch.  So, I reached for a stick-on piezo pickup and hoped that would work in the mix.  It turns out that it worked very well, because it was nice and bright and percussive and the rest of the mix had the bottom end taken care of.

And that, as they say, was that, as far as general tracking was concerned.  I slapped some placeholder dynamics plugins with reasonable presets on the various tracks and did a quick and dirty preliminary mix down to throw out on SoundCloud and listen to over the next few weeks in different settings.  I will critique it, make note of mistakes that need fixing, and check the sound in various listening environments like my car, my different pairs of phones, my two sets of monitors, etc.  I may opt to re-record some things or edit some things or re-amp or re-equalize, but I think the recording as a whole is a keeper, as it turned out.  Sometimes I decide I’d rather just re-record the whole thing, but not this time.  This doesn’t guarantee it will be on an album or that I won’t change my mind, but that was the process from baby fire to marshmallows in my tummy as I got into my tent to sleep for the night.

I hope sharing this experience was interesting.  Here is the song, complete with random animated GIF music video:

goo.gl/QwCyud

And here is the plain SoundCloud player:

Thanks for reading!

1 The Nuclear Gopher Too is, of course, the spiritual successor to the original Nuclear Gopher studio which is now an exercise room in my dad’s basement.   I’m sure all you Lavone fans from way back already know that.  Hah.

2 I nearly always use the same kick and snare mic (E/V N/D868 very near the soundhole of the kick and Shure SM-57 on the snare, usually on the top, sometimes the bottom, for those of you playing at home) and then I mess around with overheads and room mics.  I own two ribbons (a Cascade Fat Head II and an MXL R40 that I modded to not sound shitty), and several large and small diaphragm condensers, including a matched pair of AKG condensers that I often put in an XY configuration for that hip stereophonic sound all the kids are raving about these days at the malt shop.  My recording technique is: get the kick, snare, and overheads sounding at least 85% right straight off the mixing board.  I set the board up with a laptop on a little table back behind the drums with me.  I try to minimize bleed but don’t usually panic about bleed issues for kick and snare because my channel strip plugin will gate that shit right out.  Overheads need to be EQ’d pretty close to the tone I want and level-set correctly, but that’s about it.  If I am working on a song that heavily features toms, I may add a couple close dynamic mics to that part of the kit.  I pretty much always add a channel strip plugin to each track KICK/SNARE/OH1/OH2/TOM1/TOM2 and then create a folder in Reaper for the lot of them and add a bus compressor to that.  The result tends to sit pretty well in just about any mix and if your kick and snare are solid in terms of levels and bleed, you can use the signal to trigger MIDI to replace those sounds with something better later using Slate Drums or Battery or something, so my snare isn’t great but I don’t lose sleep over it.  Yesterday I opted for a three-mic setup, kick, snare, and the R40 overhead, angled towards the hi-hat/crash and fairly low.

I will shortly be heading out to a day-long game development workshop called A Day of Unity which will give me an opportunity to port Flutter HD to Windows Phone.  Flutter, if you’re unfamiliar, is the game that my buddy Travis and I recently developed, and you can check it out for most platforms, including playing for free on Facebook, at http://www.sheepshapestudio.com.

The event is hosted by Microsoft and there was a time in which that in and of itself would be sufficient to make me question the value of the event.  When I moved from Windows to Mac in 1997/98 it was kind of a big deal.  I had built and owned a succession of Windows PCs going back to Windows 3.1 in 1993, and I had a Windows NT laptop at work where I developed applications for, you guessed it, Windows.  Growing up, of course, I rarely used Windows or even DOS for that matter.  I had a Commodore VIC-20, then used Radio Shack TRS-80, Commodore 64, Apple II, and occasional Macintosh machines in school.  I had a friend, Stacy Jackson, who had an IBM PC running DOS and she introduced me to Sierra’s King’s Quest series on it, but as a general rule I didn’t encounter Microsoft enough to have any opinions on them whatsoever.  When I started working with DOS/Win 3.1 at CDI, it seemed a little primitive compared to the Macintosh, but it was clearly miles ahead of a TRS-80 and to my mind, a computer was a computer.

I remember the first time I really got mad at Microsoft.  It was a little thing, in retrospect, but it nearly cost me a job.  I was early into my software development career.  I had been coding for about two years and I was working primarily in Powerbuilder and Visual Basic.  The web/Java market hadn’t really taken off yet.  I was working as a contractor at Mortenson Construction writing a time-tracking app for work crews and I had just moved from Visual Basic version 3 to version 4.  I had read about VB4, and some cool new features it was supposed to have for database manipulation.  I decided to build my app using the cool new features.  After weeks of development I ran into a major bug.  There was some code that just would not work no matter what I did even though it seemed like it was correct and there were no errors.  It was as if the method I was calling to do what I needed to do was pretending to work but silently failing.  Which made no sense.  I mean, why have a function available if it doesn’t do anything?  That would be insane.  So I read the documentation and re-read the documentation and tried over and over again to get my code to do what it needed to do and one day, with my manager getting more and more annoyed with my “wheel spinning” every day, I went back to the documentation for about the 45th time and this time I read it top to bottom and found a footnote, tiny print, that I had overlooked.  You know what it said?  It said that this feature hadn’t been completed in time for the release so, while the method was still there, IT DIDN’T DO ANYTHING.  The insane option, the “we shipped this product knowing it was impossible to use the major feature we promoted in all our marketing material” option, was the truth.  Microsoft had promoted, as the reason to move to version 4, a great new feature that they had then failed to actually finish building.  Instead of waiting until they finished the job, they put a footnote in the documentation telling unfortunate developers like me not to use the new functionality because it didn’t work.  This nearly cost me my job.  I showed the whole thing to my manager and explained I would have to re-write the majority of the application.  All in all, their marketing-first, functionality-second approach cost me a few months of wasted effort, gave me a serious black eye with my manager, and made me doubt every claim they ever made from then on.

Over-reaction?  Maybe.  But it was the first time I ever had an experience like that.  When you are a software developer there are so many thousands and thousands of methods and functions and libraries and language features that you could never store them all in your head.  One of the primary skills of a software developer is knowing how to navigate dense collections of API (application programming interface) documentation to find the information you need.  Building software is like assembling the mechanism of a complex watch but instead of gears and springs you work with ideas and words and logic.  The behavior of a statement in code like makeMeASandwich() needs to be trustworthy.  Let’s just say the documentation says “The function makeMeASandwich() takes the parameter SandwichType and returns an instance of the specified type of sandwich or the value ‘null’ if no ingredients are available for that sandwich”.  When you put makeMeASandwich(SandwichType.PB_AND_J) in your code and receive no sandwich, you assume that you are out of peanut butter and/or jelly and your code may then call the checkCupboard() and goShopping() methods.  What you do not assume is that the method is lazy and will always return null no matter how many ingredients are available because the company that wrote the documentation never bothered to write the method.  This is a betrayal of the highest order and was serious enough that I actually decided then and there I didn’t want to base my career on a company that would do that.  Again, maybe an over-reaction, but hey, I was 23 years old at the time and my brand loyalty was not that strong.  Yeah, I bought Windows 95 the first day it came out but it wasn’t like I thought it was cool.
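To make the sandwich analogy concrete, here’s roughly what that documented contract looks like in code.  This is a made-up sketch in Python, obviously not the actual VB4 API, and every name in it is hypothetical:

    # Hypothetical sketch of the documented contract from the analogy above.
    # None of this is a real API; it just shows why the docs have to be true.
    from enum import Enum

    class SandwichType(Enum):
        PB_AND_J = frozenset({"peanut butter", "jelly", "bread"})
        BLT = frozenset({"bacon", "lettuce", "tomato", "bread"})

    PANTRY = {"peanut butter", "jelly", "bread"}

    def make_me_a_sandwich(kind: SandwichType):
        """Documented behavior: return the sandwich, or None if ingredients are missing."""
        if kind.value <= PANTRY:          # all required ingredients on hand?
            return f"one {kind.name} sandwich"
        return None                       # caller reasonably goes shopping

    # What shipped, per that buried footnote: the method exists, the marketing
    # touts it, but it silently does nothing -- None, ingredients or not.
    def make_me_a_sandwich_as_shipped(kind: SandwichType):
        return None

Call the first version and get None back, and “we must be out of jelly” is a sane conclusion.  Call the second and no amount of shopping will ever get you lunch, and nothing but a footnote tells you why.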

I toyed around with other operating systems.  A friend gave me a book with a free operating system called Linux on a CD in the back and I messed around with that.  You wanna talk primitive, a Linux distro in 1997, now THAT was primitive.  I installed OS/2 Warp at one point, and it was fine, but it was clearly dead in the water.  Apple wasn’t really an option.  Jobs hadn’t returned yet, their stock was trading at $7 a share and people were taking bets on how long it would be before they just collapsed.  I looked around for options and all I found was that there was something wrong with every option I could see.  After that initial experience with VB4, I started to notice that Microsoft demonstrated over and over that they couldn’t be trusted.  I got into Java and web development because I hoped that one day platform-neutral and standards-based coding practices and technologies would take over (I was right, woot!).  Then I had my first Apple experience, and it was so completely the opposite of my Microsoft experience that I was hooked.

It was simple enough.  The company I was working for (doing Java for the first time!) had bought a Mac at my insistence for the purpose of testing our web apps on the Mac platform.  Nobody knew what to do with the thing.  This was still pre-Jobs.  It was a PowerMac 7200 and they dumped it in my cubicle.  I didn’t know how to set it up or anything.  Hadn’t used a Mac since writing lab in high school and that was a 1980’s-era compact.  I managed to plug everything in thanks to the extremely simple documentation in the box.  I started it up and it smiled at me, which was friendly.  When I got to the desktop I realized I needed to get it on our network but didn’t know how.  Getting a computer on the Internet or a corporate network at this time in history was rather complicated and usually involved a lot of configuration.  It wasn’t like “pick the right WIFI network from the drop down”.  There was no WIFI yet.  Our network guys had no clue and figured it would take them “a few days” to figure it out.  So, I clicked on Help and typed “TCP/IP Network” and an amazing thing happened.  The Help turned out to be interactive.  A red circle was drawn around the place I was supposed to click and when I clicked there, another circle was drawn around the next spot.  In 30 seconds I had it on our corporate network and the Internet.  Mind.  Blown.  I started gravitating to using it when I didn’t really need to.  When Jobs came back and then the iMac appeared, I bought one.  I never really looked back…  Until now.

Now the world is very different.  Apple has conquered, commercially.  Platform-neutral development platforms are everywhere.  The Win32 API exists but most people are writing apps for iPhones and Android phones and HTML5 and Microsoft is in the dog house and everybody hates Windows 8 and tons of people are still clinging to Windows XP.  The worm has turned.  Microsoft is now the underdog.  And I’ve always been a fan of the underdog and I’m discovering that I kinda want them to succeed.  I kinda want Windows 9 or whatever to be really great.  I kinda want Apple to get down off its $500/share price and its hipster bullshit advertising and its thinner and thinner and thinner devices that all seem to come with major limitations and restrictions on your freedom to use them and start to Think Different again.  I got what I wanted in 1997/98, but now I’m finding that I have slightly more fun with my MacBook when I boot it into Windows 8 instead of OSX Mavericks.  I have had three iPhones and two iPads and I’m kind of excited to have an excuse to get a Windows Phone just because it’s different.  Sometimes it’s fun to mix it up a bit.  It’s not as if Microsoft hasn’t been a major part of my life all these years.  Of course it has.  I have a home-built Windows PC in my recording studio and that thing has been upgraded and maintained for a decade or so.  I work on Windows every single day professionally.  The only Windows flavors I don’t think I’ve ever run are ME and Vista.  But one of the great things about Windows 8 is that I don’t use it every day professionally.  Companies are not rolling it out.  It’s a minority platform and it’s quirky and slightly buggy, and has some bizarre design decisions, kinda like Linux.  It is maybe not as good, but it’s a little more fun than the solid, staid, predictable Mac.

And that’s where I sit today.  About to get in my car and spend the day doing the unthinkable…  porting my personal work to Microsoft’s mobile platform using a Mac that is booted up in Windows.  The event is called A Day of Unity because of the Unity game development engine but maybe it’s also a day of unity for former Microsoft users who turned to Apple to re-unite with the stupid marketing-driven company with the weird, stupid technology.  Let ’em back into my life a little more, give them a chance to put some pressure on Apple.  Maybe something good will come out of it.