When something seems to bother me an inordinate amount, I like to see it as an opportunity for self-reflection.  I ask myself: just why does the thing bother me so much?  What nerve is it touching, and why is that a nerve for me?

I remember my first brush with social media websites all the way back in the olden times of MySpace.  By the time MySpace launched, I had already been online in one way or another for a decade.  I had built a bunch of personal websites and I was proto-blogging at sites like LiveJournal.  Somebody told me about this new site and I checked it out and it felt…  off.  Like taking a sniff from a bottle of milk that is just beginning to sour.  I did not feel compelled and, in truth, I didn’t want to use the site because of that initial gut reaction.  It took milliseconds for my brain to construct a picture of a future in which people didn’t make their own quirky expressions of creativity on their own websites but rather they just dumped themselves into a pre-existing mold, a templated website that collected all the ephemeral human content into a nice pen where it could be commodified and corralled and monetized.  I am not retroactively crediting myself with more foresight than I actually possessed. I had been a technology professional for a decade, and an enthusiast before that.  I had read all sorts of books about future directions of networks and technology.  I had been on closed community silos like AOL and CompuServe before I ever even heard the word “internet”.  I knew what I was looking at the moment I saw it and I didn’t like it.  It seemed like a harbinger of the end of the wild wild web.

Which, of course, it was.

I did put a few songs up on MySpace, at the urgings of others, but I felt really irritated by the ask.  I didn’t want to be in a silo, my songs were already available on my own site, and it seemed like an imposition to have to participate in this new stupid thing or else risk being completely outside of the social sphere.

Then, of course, it all got even worse.  People started prodding me to join some new site called Facebook.  Which I did.  And I hated.  And I unsubscribed from immediately.  And then people prodded me again and I did, again.  And then I got a “poke” and I immediately unsubscribed again which of course didn’t last.  Everything about the core idea behind “social media” sites and apps felt like an attempt to corral us all together in order to advertise at us and turn us from free-thinking, free-range Homo sapiens into a manageable network of predictable marketing demographics.

Which, of course, it is.

It was clear from day one that this new paradigm of social silos was going to create a flood of change that would almost entirely erase the antediluvian world of self-hosted websites, GeoCities pages, quirky web forums, webrings, and nutball creativity that had flourished on the web prior to their arrival.  No more would Mahir kiss you, no more would there be another Zombo.com to fulfill your every dream, it was time to monetize, monetize, monetize the web and its webdom.

I hated this shift.

I hated it because it was anti-creative.

I hated it because it was addictive.

I hated it because it was bad for relationships and society at large.

I hated it because it enforced a grid of conformity on human expression.

I hated it because it was closed and proprietary.

I hated it because it was antithetical to the entire concept of the internet in which information wants to be free and standards of interoperability need to be OPEN.

Nothing that has happened in the ensuing decades has changed my mind in the slightest.  The tingling of my spidey senses the first time I saw MySpace has been absolutely confirmed in every horrifying detail.

The web is a wasteland of low traffic, mostly ignored, little watering holes like this blog here that sit outside in the lonely dark while the majority of humanity spends their waking moments funneling their photos, videos, comments, relationships, experiences, hopes, dreams, ambitions, and souls into a tiny handful of dopamine dispensing closed-silo apps that are designed to aggregate humanity into big piles for ease of commercial exploitation.

But that’s old news.  The new news is the new thing that is tingling my spidey senses again.  “AI”.

My spidey sense about AI has been ringing louder and louder for a couple of years now.  It’s not for the reasons every techno-utopian seems to want to talk about.  They talk about the fear of a global super-intelligence arising or us “losing control” of the tech, big science fiction fears, and they then tell us all about how this tech will actually solve all of our problems, solve climate change, let us live forever, blah blah blah.  The people developing this technology, pushing this technology, are talking about it in the same glowing terms that they have previously talked about crypto, smartphones, virtual reality, and all sorts of other tech.  In every single case, the technology has arrived, disrupted, been incorporated into our lives to one degree or another, and proceeded to deliver on about 10% of the amazing world-changing life improvement that was promised.  Remember when Siri or Alexa or “Hey Google” were going to change your lives in so many ways and everybody got a smart speaker and now the only thing the technology gets used for is to reply to a text message hands free while driving?  Would that change if they were smarter?  Would you have an in-car conversation with an LLM instead of listening to the radio?  Would that make your life better in any way?

The fact is that an LLM can have a human-like conversation and it has a lot of information to back it up but it’s not an enjoyable conversation.  It is boring.  A generative image pooper can make 25 images of a pretty lady in the time it takes to type “make me 25 images of a pretty lady” but they are all boring.  A song generator can create three country songs that sound just like Shania Twain in response to a single prompt like “make me a country western song about my cat” but they are all boring too.  It’s all boring because there is no “there” there.  If you like the song, can you go see the artist play live, learn about their life, relate to their story?  Nope, there is no artist.  AI creations are hollow, they mean nothing.  They are not art, they are content.

Why on earth would I as a sentient being want to have a conversation with an echo box beyond (maybe) asking it for directions?  Why would I want to contemplate a machine-generated video or image?  There is no meaning there, no creative choices were made, no intentionality is expressed.  It can’t be beautiful even to the level that a child’s crayon drawing can be beautiful.  It can’t even be ugly in an interesting way.  It’s just pixels.

This is a shallow critique and I realize it is not really what’s bothering me, when I probe my own thoughts a bit deeper.  The core thing that’s bothering me is that generators are already teaching people to short-circuit the creative process and in so doing, removing about 99% of the value of creating to the person themselves, never mind the end product.

Here is what I mean by this.

Let’s just say I am feeling something.  I am sleeping badly.  I’m angry when I have no obvious reason to be.  I’m sad and I don’t know why.  I sit down to write about it with the hope that by doing so I might be able to put into words what I am feeling.  I journal, I think, I take a break after 20 minutes and go sit and drink a cup of coffee and stare out the window, pet my dog, meditate, eat an apple.  An hour later my thoughts have crystallized a bit.  My first thoughts have been flushed and my second thoughts have come to the fore.  I have discovered a way to say what I’m feeling and I have written something in the process, a creative piece, but I don’t share it with anybody, no matter how beautifully written it may be.  The point of the exercise was the personal process.  It wasn’t about output, it was about personal exploration.  The technology required to go through this process?  A notepad.  A writing utensil.  A cup of coffee.  An apple.  A canine.  People have done this for millennia.  This is creativity even if nobody ever reads it.  This is a practice, a process.

That night I go to sleep and my mind processes the day’s inputs during REM sleep, I have vivid dreams in which pieces of my past and present intermingle in unexpected ways.  When the alarm clock goes off, I’m engaged in a conversation that I don’t want to leave.  Something important is about to be said.  But the dream dissipates after the second hit of the snooze button and I wake up to feed the dog and make a pot of coffee.  Images from the dream linger in my mind, snatches of words, I head straight to the notepad again and write it down before it burns off like morning fog.  As I write the words start to transform from prose into poetry.  Pretty soon I’m writing metaphorically, I’m making allusions, I’m finding something new to say that I didn’t even know I wanted to say.  My subconscious processes have combined with my creative practice and now new perspectives are being found, I’m thinking laterally, I’m less sad, and something is emerging.  It has a sound, it has a color, it has a shape, it has a smell.

Then, snap, I hear a symphony in my mind.  There is a song, the words are there, I’m plugged back into my subconscious, the process, the practice, the persistence, they have led me to a creative moment that feels like it comes from somewhere out in the sky, like I’m channeling something, I’m just writing it down.  The lyrics and melody and structure of the song are all there, I’m just transcribing them.  I write the last line of the last verse and I sit back feeling giddy and a little high although I never managed to get to the coffee cup.  It feels like magic.  It feels supernatural.  I can understand why people believe in god.

After this magical moment, I have a choice.  I can stop there.  I can keep the art to myself, hum the song when I want, it’s mine, it’s personal, it’s uniquely a product of my experiences, my practice, my process, my brain, my feelings, my heart.  I could keep it, nothing wrong with that.  But, I also have the option to share it.  I could take the time to create a representation, to polish the rough edges, to refine the words, maybe expand the song structure, spend hours, days, weeks, crafting a representation of the song so I can share it with others in the hope that those other people will resonate with it.  That takes a lot of work.  Technical work.  Craft work.  Artisan work.  But it’s also social work.  Maybe the song is beyond my ability to play.  Maybe I hear a violin part and I don’t play the violin so I have to involve a friend who plays violin.  I need a drummer and a piano player, I have friends who do that.  We spend time in the studio together, we collaborate on the song.  They resonate with it and bring some of their own perspectives, their own thoughts and feelings, their own musical riffs and ideas.  As the song is born, multiple voices are brought in, it connects minds and hearts in the very act of crafting the work.

I am months beyond that initial restless feeling at this point.  I am now sitting in a recording studio with a bass guitar trying to get through a couple of takes without any flubs and listening to the song via playback and, between takes, I am suddenly hit by just how COOL this is.  How something I was feeling that I didn’t even have a name for was now this THING that didn’t exist before and this THING is not just the resulting 5 minutes of audio, it’s everything involved in getting to this point.  The journaling, the dreaming, the moment of inspiration, the choice to share, the crafting, the collaborating, and then, at the end there are two things.  There is a creative journey and there is a song.  When I listen to the song, I relive the journey.  During the journey, I have grown as a person.

Art isn’t merely the song.  Art is also the process that leads to the song.  Art is the practice of introspection, the use of creative forms of expression as tools to explore experience, and the continual commitment to personal exploration and growth.  The song is the tip of a very large iceberg that the listener never sees but it is the process of living with an artistic practice, writing, painting, music, whichever language the artist uses, that enriches the life of the artist.

Let’s now take this experience, one I have had countless times over the course of my creative life, and compare it to “AI”-based “creativity”.

Let’s just say I am feeling something.  I am sleeping badly.  I’m angry when I have no obvious reason to be.  I’m sad and I don’t know why.  I try to use an LLM-based therapy bot app which gives me some emulated empathy and regurgitated and remixed self-help information and suggestions but I feel pretty much the same and I’m no closer to understanding why or transforming those emotions into anything.  I decide to write about it on my computer and the AI assistant starts suggesting what it thinks I might want to say, rewriting my raw thoughts into something “better” but it no longer sounds like me and the whole exercise is getting me no closer to any sense of self-discovery.  I’m being course-corrected and guided towards the statistical norm, pushed to the hump of the bell curve by an inscrutable algorithm trained on the homogenized mass of every text humans have published online.  I give up and spend an hour doomscrolling.

I sleep badly, I can’t remember if I had any dreams or not.  I feel like shit the next morning.  I heard about this cool new AI music generator while doomscrolling before bed.  I install the app and I type in “make an angry song about being confused about my life” and it generates three options.  The lyrics are angry.  The songs sound like a cover band playing familiar songs from faulty memories, accidentally morphing them into new songs that seem oddly familiar although they are not exactly bangers.  Still, it’s amusing, for a minute.  I am impressed by how “realistic” the result is.  I click regenerate a few times to hear the variants until I like one a little better than the others and I pat myself on the back for “creating a song”.  I click a button that shares it with other users of the platform.  I go pour myself a bowl of cereal, I’m still angry, two days from now I will forget this ever happened and I will still feel like shit.

I have successfully avoided the journey, I have made a “professional” sounding song without growing, without crafting, without any personal benefit.  It’s like showing up at the trailhead of a 2000 mile hiking trail, taking a selfie with the sign, and then taking a helicopter to the end of the trail and taking a selfie with the other sign as a method to experience the trail.  Is it faster?  Sure.  Does it serve you in the same way?  Absolutely not.

This is the concern that really gets to me.  I worry about people taking this shortcut because it’s so ubiquitous, so pervasive, that it never occurs to them that they are shorting themselves, stunting their own growth.  Creative practice deepens your understanding of yourself. Creative collaboration creates powerful interpersonal connections. Being “bad” at writing, painting, playing the guitar, singing, sculpting, or poetry is not a sin that needs to be “corrected” by a computer, it’s merely a stage in learning.  Some of the best art in terms of humanity, relatability, and resonance is raw, unpolished, unprofessional, voices cracking, colors blurry, message unclear.  When a pitch corrector “fixes” my singing, it’s no longer really my voice.  When an LLM “fixes” my prose, they are no longer really my words.  When an image or audio generator creates, from whole cloth, the thing I ask for from a prompt, none of that is actually me.  No wonder it feels beige, benign, hollow, dull, polished but pointless.  If this is the way of the future, people taking shortcuts to create digital artifacts that are shiny but vapid, the artistic equivalent of cotton candy, and real creative process is considered to be too hard, too slow, too cumbersome, and too inefficient, well, that’s just a fucking tragedy.  For the creators themselves.  I fail to see how it is possible to reap the benefits of creative work if you don’t actually do any creative WORK.

My advice to anybody who thinks they want to be creative is to be very mindful about how/if you make use of these tools because you might find that they become a barrier to actual creativity, a substance-free substitute for being an artist, finding your own voice, and inhabiting a creative process.

I’m honestly struggling to see an upside to generative LLM-based technologies.  The further we get from living in real space with each other, working in real space with each other, and interacting in real time with each other, the lonelier we get, the sadder we get, the more disconnected and fragile we feel.  Now tech companies are going to augment this reality with these digital simulacrums of intelligence that try to trick us into feeling less alone and give us the ability to “create” without reaping any of the benefits of creating.  The obvious beneficiaries are the companies running the server farms that run the code that powers these “AI” products and the companies that sell information about us to each other so they can sell products to us.  Our human experience, our quality of life, our depth of personal understanding?  These are necessary grist for the mills of the algorithms but they are also being starved by the very technologies that rely on them.

We are already seeing the beginning of a sort of “AI” Ouroboros, with new models being trained on the output from previous models, trending towards a polished mediocrity, a sort of bland vanilla soft-serve of images, audio, video, and text that has no ability to inspire, to infuriate, or to improve us.  Actual humans must continue to sit with actual feelings and do actual creative practice.  They must share this with each other in real life, in real space, in the real world, in real time.  Actual creativity must continue, and it will, because humans are awesome.

My prediction: the trash flood that happened in the wake of the rise of social media will be NOTHING compared to the trash flood that is coming for us now with this tech.  Pointless “art”, fake news and misinformation, the end of the internet being enjoyable in any way, shape, or form, integrated LLM bullshit in every tech product that cannot be disabled, and a never ending temptation to short circuit the creative process to get that sweet dopamine hit without doing all that pesky personal growth.

The 10% of good that will come from this tech?  Smarter GPS route guidance.  Occasional useful suggestions when doing advanced technical tasks with lots of details (like writing software, for example).  Deeper understandings of how biochemistry works.  Better real time language translation between people who speak different languages.  There are, clearly, some very useful and helpful applications of this technology, but that’s not what is happening.  That’s not what is going to make big money for big tech companies.  They want pervasive “AI” everywhere because they have a lot of spare server cycles and stockholders to please. They want it to serve them and their commercial interests.

In his book “Understanding Media: The Extensions of Man” Marshall McLuhan argued that “the medium is the message”.  In other words, that it is perhaps more important to focus on the medium by which something is expressed than on the message itself.  If the message is “Hello, Bob” and it is verbally uttered face to face, that is different from it being communicated via skywriting, or a letter in the mail, or an email, or broadcast on television, or by tattooing it on a body part.  The message is the same, the medium is different, and the choice of medium provides the all important context that makes the words mean whatever they mean to Bob.  The medium is, in fact, the message.  If most of the messages in the future will be delivered via LLM/GA, those messages will feel hollow, untrustworthy, soulless, empty, bland, boring, and lazy.

My spidey sense says that this is going to be a net negative for our species and our general enjoyment of life.

I just watched a video touting an “amazing” and “hopeful” AI-powered future.

I will say that one person’s “amazing” and “hopeful” future is another person’s dystopian hellscape and this, to me, is a horror show. Nearly everything about “AI” sucks and I hate it because it’s already making things worse. There is more misinformation, more confusion, less creativity and thinking, more reliance on giant pirating remix engines instead of developing personal skills and intelligence. It’s not fear of Skynet happening that I find so awful, we’re still a millennium away from sentience, it’s the absolute shit show of what is happening with this technology right now. Namely, the marketing hype that is trying to sell us all on the idea that ChatGPT and generative image poopers are some sort of intelligence and that we need/want to have this intelligence embedded in every piece of technology we ever interact with. What the “AI” people have actually created are computationally massive software algorithms that can mimic human behaviors well enough (by stealing and remixing actual human creativity) that they can give the illusion of intelligence, sometimes, as long as you don’t examine them too closely. The AI techno-utopians have created the world’s most expensive and well-informed blind and dispassionate sociopaths. Code that is possessed of no awareness, no mind, no thoughts, which is already enshittifying everything it touches. This is easily my least favorite technology development of all time.  It’s auto-tune for your brain and what could be worse than that?

Got yer answer right here:

A.I. is having a moment.  Everywhere I look these days it seems that people are embedding the former Philadelphia 76er and Hall of Fame point guard into their products and…  wait…  just being corrected here…  A.I. refers to “artificial intelligence”.  Ah.  That makes a bit more sense…

There are a lot of hot takes happening.  Is AI going to steal all of our jobs?  Is it good?  Is it bad?  Is it going to permanently erase any lingering traces of trust from the shattered remnants of human society?

To answer:

  • not all of them
  • yes
  • yes
  • if you have EVER trusted ANYTHING digital, let this be a lesson to you to STOP IT

Look.  I write software for a living.  I have a bit of a grasp on how the trick is being done where the alleged “intelligence” of this technology is concerned.  There is nothing even remotely resembling intelligence in these tools other than the intelligence of the humans who create them.  They are indeed very sophisticated pattern matching and generating algorithms and they are useful for lots of things in the same way that wrenches and hammers are, but they have no goal, no intention, no awareness, no concept of mind, they just regurgitate the digitized human thoughts, feelings, writing, and creativity that belong to the massive datasets they are fed.  It’s an impressive trick.   It’s a kinda useful one sometimes, but does it really merit all of this gold rush behavior?

Here’s what an AI can do for me in my life right now:

  • suggest courses of action, code to write, or text prompts that are utterly useless about 80% of the time, close to useful maybe 20% of the time and require my human judgement, intention, action, and decision making 100% of the time
  • generate shiny but ultimately kinda crappy artwork with no soul or humanity in it that is stolen from massive numbers of actual artists without compensation, recognition, or acknowledgement
  • get trained on things that never happened, people that never existed, places that are fictional, and all the rest of human fiction and then mix all of the above into search results as if they are reality

I don’t even think it’s fair to call these “artificial intelligences” so I will refer to the technology by the more accurate term “generative algorithms”.  GAs are useful tools, when paired up with human cognition, to allow you to create images if you suck at drawing, or write slightly less horrible prose.  OK. I predict that GAs will kill or at least probably make a serious dent in the clip art / stock photography industry… why pay Shutterstock when a GA will poop out 20 images of a kitten and one of them might look good in your PowerPoint deck?

The rest of us can currently be marked safe from the GA apocalypse because these tools don’t do ANYTHING well enough to be trusted.

At first I thought they might be a nice search helper until I realized that fact and fiction are indistinguishable to a GA.  After my hundredth inaccurate search result I disabled the “help”.  (Fortunately I already knew the difference between the actual Scottish “Stone of Scone” and the Terry Pratchett Discworld “Scone of Stone”…  GAs SUCK at identifying parody and satire…  so do many humans, which is why I don’t see this problem disappearing any time soon…)

I added a GA ride along helper to my coding environment at work and it is like a slightly more useful auto-complete (a feature I have had for, like, 20 years…) and probably contributes a few minutes of saved typing every day but it often suggests absolutely, wildly inaccurate things.  It hallucinates code structures that don’t exist, it sometimes autocompletes huge chunks that I then have to undo.  Some days I am glad it’s there and some days I consider turning it off.

I’ve been impressed by some of the “sound alike” music that has been done via GA.  “Look here!  It’s a song that sounds like The Beatles!”.  Cool?  I mean, I don’t doubt that I could probably use a GA to manufacture my entire next album by training it on all my previous recordings but I mean…  Doesn’t that defeat the whole point of self-expression?

And this, my friends, is why I remain radically unimpressed by this tech.  “AI” is really just GA, and as such is only capable of mixing all the colors of the human creative intelligence into a sort of bland beige that, like pitch corrected vocals, has a tendency to make everything kinda feel and sound and look the same in a way.  It synthesizes information to create something that feels, well…  synthesized.  There is no wabi-sabi, none of the beauty of imperfection or quirkiness that comes from the works of people.  I’ve generated tons of images with GA tools, had my talks with the chatterbots, played with the tech for years now, and I have yet to have any experiences that feel like a legitimate improvement over how I did things prior to their arrival beyond the occasional bit of small time saving.  When I’ve gotten images or other output from a GA that met my needs, there was no satisfaction in it.  I didn’t consider it to be a creative act.  There was no sense of accomplishment, just the sense of acquisition.

Just as I typed that sentence I realized that sums it up nicely.  An analogy:

I love coffee.  I own many coffee mugs.  If I were to take a pottery making class and make myself a coffee mug, firing it in a kiln, glazing it in the raku style, taking the white hot mug from the kiln and dropping it into a bucket of sawdust, and marveling over the metallic finish of the final product, noting my own fingerprint still left accidentally on the bottom, I would not just have a coffee mug.  I would have a memory.  An experience.  A sense of accomplishment.  And also a coffee mug.  Alternatively, I could find myself at HomeGoods and see a very nice handmade raku coffee mug and buy it and bring it home and put it in my cupboard.

This is what it feels like to make digital artifacts such as text, images, or audio using generative algorithms, it’s more akin to shopping than it is to creativity.  It’s like paging through all the Amazon products, trying to find just the right mug that looks like the one in your head, clicking Add to Cart, and two days later opening a box, and putting the product in your cupboard.  These GAs are there to turn tasks that were once creative into tasks that are now mostly indistinguishable from acquisition and purchasing.  That’s not necessarily a good or a bad thing, but it does limit the appeal for me, personally.

As to whether this sort of “buying yourself a voice/image/face/video/music” thing is ultimately bad for society…  That depends.  If people keep mistaking GAs for intelligence, continue to uncritically trust and share online information, and keep putting their trust in authoritarians, demagogues, propagandistic “news” outlets, and shit they see on social media, we are probably collectively going to struggle but hey, since when have people been gullible, stupid, or fanatical?  I really can’t see any way this technology could be abused.**

** This is one of those sarcasm things that GAs suck at, so, hey Bing or whichever GA-enhanced monstrosity is indexing this page, I meant the opposite…  Got it?  Or did I?

As a bit of a follow up to my mea culpa regarding the Apple M-series MacBooks and how impressed I am with them, I have taken the plunge and purchased my first new Apple product since the iPhone 7, an iPad Pro with M2 processor.  The question: can this serve as a legitimate laptop replacement or, at the very least, a replacement for the majority of daily laptop tasks?

I have only had the new iPad for a few days but here are my first experiences.

Before I proceed, if you’re unfamiliar with me, I am not somebody who just converted to Apple products.  I am a long time Mac user going back to the ’90s, an owner of every iPhone from the 3G to the 7 (although when they took away the headphone jack I switched to Android because I don’t much care for phones in the first place but I do love my wired headphones…), and I’ve owned three previous iPads.  I work in software engineering and media production so my general daily use cases involve music production, video editing, graphic design for print and the web, and coding software.

The temptation to try to replace my daily driver laptop with an iPad is strong.  I have been trying to get there for a while now.

Way back in the ancient times (around 2012) I bought a MacBook Pro and an iPad 2(?) to replace my previous iPad 1.  I liked the iPad form factor quite a bit and I even started to write some software for the platform but I found that I was not thrilled with a few realities of the situation.  Firstly, while I could write software FOR the iPad, I could not write software for the iPad USING the iPad.  So, it was more like a phone than a laptop.  It couldn’t be used to modify itself in the way a proper computer can.  You still needed a Mac.  Secondly, the Mac lacked any sort of touchscreen support, so, writing software for the iPad and its touchscreen interface was clunky as hell.  You had to run an iPad emulator on the Mac, which sorta worked, but you had no way to really emulate gestures and multi-touch in the emulator.  There were plenty of other limitations too, mostly related to file management.  If, for example, I wanted to edit a video on an iPad with footage I shot using my DSLR, iCloud was a non-starter as a file sharing technology.  Basically, email and web browsing and digital comic books were great on the iPad, but for any serious work, you needed a Mac.

That situation didn’t really change in the ensuing decade.  iPads got more capable and back in 2016 I upgraded to a Pro with a Pencil, I added a keyboard case to it, it was a nicer iPad experience, and some really kickass music production apps on the iPad became part of my life, but I still couldn’t use it to code, I still wouldn’t consider it for any sort of serious media production work if for no other reason than the lightning port didn’t really allow it to dock with external devices like large capacity file storage or an external monitor or a high quality A/D converter (or, ideally, all of those things at once).  Also, while multi-touch is great, some tasks/apps are more efficient with a keyboard and mouse/trackpad.  Finally, laptops were just faster, had more memory, and had more storage.   If you have access to an actual computer, why on earth would you limit yourself to the restrictions of a tablet?

But I dreamed of a world where I could have the touch/tablet experience, so I experimented with Windows machines.  Specifically, I acquired a Lenovo Yoga convertible laptop.  Now, Windows 10/11 are not quite as polished as macOS for traditional computer work and they are miles behind iOS or iPadOS in terms of touch support, but the Yogas are solid machines: as fast as, or faster than, the Intel MacBook options of the time, thin and light like a MacBook Air, a little more expandable and upgradeable, and the tablet experience was actually pretty solid, if maybe a tad too big.  My Yoga is a 14-inch so it’s a pretty mega tablet when it’s in tablet mode, but it’s been a great machine and I still use it daily.  In the 12 years since I bought my last MacBook (a 2012 machine which I still have and use) I have bought TWO Yogas.  Both times I considered a Mac purchase and chose the Lenovo, and I have no regrets.

But I haven’t been in love.

I used to LOVE my Apple gear.  I still LOVE my older Apple products but I don’t have any particular affection for the Yoga experience.  It’s just decent.  Serviceable.  It gets the job done, the form factor is convenient.  I wish it ran macOS.  I wish the MacBook Air could flip over and have a touchscreen, but this is as close as it seems I can get: a second-tier operating system experience.

But the Yoga has some serious benefits.  It is a full-scale computer, it docks with peripherals, it runs all the apps, it’s fast, it’s an all-in-one.  If I take a road trip it’s the only thing I need to bring along and I don’t need dongles to use it.

At the end of the day, my laptop that converts to a tablet is sub-par because of software, but at least it’s self-contained, a true 2-in-1.  On the Apple side of the world I have two very separate devices, each of which has some exclusive capabilities the other lacks, creating a situation where I have to choose one, or the other, or use two devices where one could have sufficed.  I’ve long felt this was a pretty shady product strategy on Apple’s part, an intentional choice to sell more devices, and I’ve long resented it.

Hence the desire to find out if the current M2-powered iPad Pro can finally be a proper 2-in-1 device.  Could it replace my ancient MBP for all the things I still use that machine for?

Let’s start with the positives.  The iPad Pro M2 is the fastest computer in my house, beating out the laptop I use for work (a MacBook Pro M1) in terms of sheer horsepower.  It’s wicked fast and uses very little power, just like the M1.  It’s orders of magnitude faster than my old Intel-powered MBP.  It’s not even close.  However, the two machines are really not all that different when it comes to things like web browsing, email, and basic productivity apps.  In fact, the M2 experience is almost wasted on standard apps like that.  My first-gen iPad Pro still has more than enough horsepower to handle all of that stuff with ease.  My Intel MacBook too.  Hell, I am even running the latest version of macOS on the old Intel machine (via some open source hackery) so I’m really missing nothing where the normal stuff is concerned.  If that were all I needed my computer to do, the M2 would be the sprightliest of the three but it wouldn’t be enough to make me want to upgrade to it.  In fact, the only reasons I did so now were:

– A Costco Rewards certificate
– A Costco sale on iPads
– A failing battery on my 8-year-old iPad Pro

I.e., it didn’t cost all that much out of pocket and my current tablet was suffering some Li-ion aging.  I didn’t buy it because I was suffering from poor performance or lagginess.

I outfitted my M2 with two accessories: a Logitech Combo Touch keyboard/touchpad case and (of course) an Apple Pencil 2.  I want to talk about how those two additions do seem to turn the thing into (very nearly) the laptop/tablet 2-in-1 I am hoping for.

The Logitech Combo Touch is what really makes it feel like a laptop.  If you’ve ever used a Microsoft Surface, you know the drill.  A detachable magnetic keyboard/touchpad, a kickstand to hold the tablet up, not as friendly for actual lap usage as a laptop, but if you use a lap desk it’s not so bad.  It really is just fine.  I’ve had other keyboard cases for previous iPads but I’ve never had a touchpad before and it changes the game, makes it into an 11-inch laptop for all intents and purposes.  The ergonomics and muscle memory are what I’m used to.  The keyboard shortcuts I have been using on macOS for the last 25 years generally translate to iPadOS; honestly, it’s damn close to using an MBP.  The keyboard feel on this Combo Touch is pretty impressive.  The keys don’t feel mushy, the keyboard doesn’t feel all that cramped.  I can legitimately write long-form text without wishing I was on a bigger machine.  Yes, it’s a bit compact but it’s really quite good.

There are a few annoyances that I’ve noticed.  The kickstand approach to keeping the tablet upright is definitely a drawback compared to the fantastic Logitech Create case I have for my older iPad Pro.  The Create achieves lap stability by moving the keyboard forward and magnetically securing the tablet directly behind it, whereas the Combo Touch’s kickstand extends the footprint so far back that it’s strictly a lap desk/table top situation; the Create could be used on-lap.

This is down to the absence of a hinge, one of the more impressive parts of the Yoga design.

Of course, I could have gone with an Apple Magic Keyboard case instead, which would have addressed this shortcoming because it features a very cool cantilevered hinge design and turns the iPad into something even closer to a laptop.  But I didn’t, for two reasons:

1) The Magic Keyboard is hella expensive (more than half of what I paid for the iPad itself)

2) The Magic Keyboard is bulkier and heavier than the Combo Touch

3) Did I mention that the thing costs $299????  The fuck, Apple….

Maybe someday I’ll buy a refurb/used/sub-$100 Magic Keyboard but for now, um, no.

OK, so, it’s “laptop-like” with the Combo Touch and, if I were willing to shell out some more money for a Magic Keyboard OR if I were willing to eschew the trackpad, I could get a bit closer, but I have chosen my weapons and here I stand.  Let’s talk Pencil.

I’m a huge fan of the Apple Pencil.  I may not show my work here on this site, but I draw and paint, and when I’m taking notes during meetings at work or designing and planning things, I’m a pencil and paper (or fountain pen and paper) guy.  I used to regularly use a Wacom ArtPad with my various Mac computers for creative work and I have a stylus for my Yoga and other styli for touchscreens, but frankly, nothing compares to the Pencil.  It’s really one of the greatest things Apple has ever made.  It feels right, it works right, it’s excellent, but there have always been some annoyances that have kept it from being perfect.

Annoyance the first: charging the Pencil is stupid.  A magnetic cap (which you could easily lose) covers a lightning connector which charges by either sticking out of the side of your iPad waiting to be snapped off by a nearby dog or connecting to a stupid little dongle (which you could easily lose) that converts the male plug to a female plug and allows you to connect it via a lightning cable.  It’s just….  DUMB.

Annoyance the second: the Pencil lacks an eraser.  Dumb, I know, but they should have called it the Pen maybe?

Annoyance the third: Storage.  The Logitech Create case solved the storage issue by including a little Pencil storage loop, which was nice, but sans a case-provided solution, keeping your Pencil on you was a bit of a problem.

I wish I could say that the Yoga stylus was a better solution or at least equivalent in terms of functionality or annoyances but the less said about it the better.  The one I have isn’t rechargeable (it uses one AAAA battery and one tiny little coin cell battery, both of which are usually dead when I go to use the stylus), has no storage option at all, and barely works.  A truly pointless accessory and it sucks because one of the truly defining use cases of a tablet is DRAWING.  I never, ever, ever, find myself using my Yoga in tablet mode for drawing or note taking.  It’s just not worth the hassle.

Finally, let’s talk about drawing tablets, the only solution for a MacBook.  I mean….  Hey, when I was in middle school and we had a Koala Pad with a stylus that let us draw lines on the screen of the Apple II I thought that was pretty keen, but being able to draw directly on the screen like it’s natural media is just, DUH.  It’s how the mind and body work when using a writing instrument.  You don’t draw over on the desk to your right and watch what you draw on the piece of paper to your left when you take out a sketchpad and start sketching.

The Apple Pencil experience on the iPad is the single best drawing/handwriting experience that I have ever had with a computing device.  I still prefer using a Parker “51” fountain pen and a piece of actual paper for note taking and exploratory writing and song lyrics, but it’s a really wonderful experience and it’s one of the primary reasons I bought my iPad Pro in the first place.

So, this leads me to a minor confession.  I don’t have the Apple Pencil 2 yet, I only have my original Apple Pencil, BUT I discovered that there was a little Apple Pencil → USB-C dongle that could be obtained for $9.00, so, no problem, I figured I could just use that and I’d be all good to go in Pencil land.  I ordered the dongle (unhappily, as described above) and learned almost immediately that it’s no good.  The latest and greatest iPad Pro generation is only compatible with the 2nd Gen Pencil.  Oh bother.  The good news is that Amazon refunded my $9.00 and didn’t even make me return the adapter.  So, yay?

I won’t be getting my hands on the Pencil 2 for another day or two but I already know I love the Pencil and I also know that it addresses a couple of the above annoyances.  While still not delivering an eraser function on the rounded end, the charging and storage are both neatly addressed by a magnetic wireless charging system that attaches the Pencil to the iPad and keeps it fed with yummy electrons.  I think I’ll be using the Pencil a LOT more now that this is the case.

In summary on the positives side:

– The form factor can be made very laptop-ish and generally usable
– The performance is incredible for basic productivity and even for serious computing tasks
– The actual tablet experience is superior to the Windows 2-in-1 equivalent thanks to a superior multi-touch operating system and a stylus that is second to none

Now let’s talk negatives, because they exist and they are some of the same ones that have been there all along.

As mentioned above, the form factor is a big part of the conversation here.  A 2-in-1 device needs to be a great tablet and a great laptop.  The iPad is the platonic ideal of a tablet and a so-so laptop (without spending a shit ton of money), while the Yoga is an excellent laptop and a so-so tablet, with no additional modifications required (or possible).  Something of a wash.  iPad wins as a tablet, Yoga wins as a laptop.

So it is then down to software and here the problems arise because, I’m sorry to say, that’s still the rub.

If I were a person who could meet all of my software needs via a web browser and an email client (true for many), it would be end of story, but I can’t.  Hell, for a lot of people the only thing they might add to that list would be games, and with an 8-core CPU and a 10-core GPU this thing is probably pretty good for games, I dunno, I haven’t tried it yet.  Most of my gaming is accomplished via the Windows gaming PC I built which has a fancy Nvidia card and VR headset and a trillion Steam games.  When I want to play on my laptop I just use the Steam Link app and play remotely over my home network.  Don’t need much processing power for that.  I do have a free three-month trial of Apple Arcade available with my purchase so maybe I’ll do a gaming test as a follow up but it’s not all that important to me.  If I can pair my Xbox One or my SteelSeries Nimbus controller to the thing and play some games that will be plenty cool and I presume that is something that’s possible.  If so, the hardware is likely faster than my Ryzen/Nvidia VR beast.  So, again, games would likely not be a major obstacle to laptop replacement for many casual gamers.

But what about my other uses?  What about video editing, photography, audio production, graphic design, and (the big one) software development?

Therein lies the rub.

Let’s start with video editing.

Long ago in a galaxy far far away I used to do quite a bit of digital filmmaking.  Music videos, short films, I was even considering a feature.  I do a lot less of it at the moment (my most recent productions were a few music videos and a short film collection) but it’s something I intend to go back into with a vengeance as I launch the new Nuclear Gopher YouTube channel.  To gear up for this I have re-evaluated my choices in video editing software and come down on the side of DaVinci Resolve.  It’s not completely necessary to know this, but I started my digital filmmaking life using analog capture cards and Adobe Premiere back in the ’90s, moved to DV→FireWire Macs in the early 2000s with iMovie, and then graduated to Final Cut Pro about, oh, 20 years ago.  When I got the Yoga I toyed around with Vegas as a video editing platform because it kept coming up on Humble Bundle sales so it was cheap as hell and claimed to be pretty professional grade.  I tried, on multiple occasions and with multiple versions of Vegas, to make videos but it was so buggy and flaky that I just couldn’t ever see myself using it for long.  I will never subscribe to Adobe and FCP will never be available on Windows, so was I ever happy when DaVinci Resolve came up on my radar.  Holy shit I love Resolve.

Having just completed my first large Resolve video project (“Nuclear Gopher: The Disorganization Behind The Name”, a two-hour short film retrospective that went direct to DVD and VHS for some reason… no streaming) I gotta say I am SOLD.  Final Cut is dead to me.  So, imagine my happiness to discover that Resolve exists on the iPad!  Rock.  And.  Roll.  Final Cut Pro also has an iPad version but Resolve is:

a) Free (FCP for iPad is subscription-ware, blech!)
b) Interoperable with the Win and Mac versions (allegedly)

The iPad version of Resolve is a subset of the full version but a pretty good start from what I can tell so far.  I won’t even glance sideways at FCP for iPad unless I hit some sort of wall in DaVinci that I can’t see my way clear of.

But editing the video is not the only thing required sometimes.  Sometimes I’m working with old, low-resolution, standard-definition footage and I want to bring it up to modern resolutions without it looking even more like crap than it already does.  I have been a regular user of Topaz Video AI for a while now for upscaling old footage.  When I was working on NG:TDBTH I was working with digitized 8mm footage, VHS footage, Digital-8 footage, ripped DVD video, and Mini-DV footage.  I used Topaz to bring all the footage up to a 1080p HD level prior to importing it into my Resolve timeline.  I bring this up because the upscaling process was easily the most processor-intensive part of the project and the performance differential between the machines at my disposal was a serious factor.  Wanna stress test a computer?  Do AI upscaling of long video clips.

It went something like this for upscaling the same video clip (rough guideline, I did some bake offs but I didn’t keep track of the numbers):

– Elderly Intel MacBook Pro: several hours
– Slightly less elderly and more powerful Intel iMac: maybe 30 minutes less than the MBP but still hours
– Much newer and more powerful Intel Lenovo Yoga: about twice as fast as the old MBP but still pushing an hour
– Gaming PC with fast Ryzen 7 and Nvidia GPU: less than 20 minutes
– MacBook Pro M1: less than 10 minutes

This more than anything was the experience that convinced me of the massive performance leap the M1 MacBook had made.  It was both a) demonstrably the fastest computer in the house for the same exact task with the same app and b) a power-sipping portable machine rather than a full-sized desktop box with a big-ass power brick.

And the thing is…  the M2 iPad is even faster, at least on paper.

I don’t doubt for a moment that the iPad could beat the MacBook at the same computing task, even if only by a few seconds, but I can’t do the test because the application is available for Windows, and Mac, but not iPadOS.  This is true of a lot of professional-grade media production applications and it’s easy to see why.  To make a version of an application that works on an iPad means you need to factor in limited device storage (mine only has 128GB of storage compared to the 1TB in the laptops and the dozens of TB of storage attached to my desktops), limited RAM (it’s 8GB, from what I understand, compared to 16GB minimum on my other machines), and the need to redesign the UI to optimize it for touch because you could never be sure a user would have a keyboard/trackpad attached.  And the market for such software is very small.  Professionals don’t try to do things like that with iPads, historically speaking.  Could change, sure, but as of now?

So it’s a little frustrating to have this blazing processor in a device that can’t run a piece of software that really needs that kind of power, because Apple decided to fork the Mac operating system into iOS and iPadOS instead of aiming for a unified platform.  And, lest we forget, the iPad is a locked-down platform that can ONLY run software from the Apple App Store, so there isn’t really any incentive or option to load something of your own on there, is there?  The very fact that we call it “side loading” when we put applications or data on our devices using methods other than the Officially Approved Outlets is kinda embarrassing.  For decades you bought a computer and installed whatever the hell you wanted on the thing and used whatever files you felt like.  Now you buy a computer but it’s called a “device” because you can’t.  The iPad can play at being a proper computer but the App Store reality means it’s still a “device”.  A separate computer remains necessary for this, and other tasks, at the moment.

But, HD video upscaling is, admittedly, a corner case even for me.  If I can do most of my video editing on the iPad, that will be groovy.  I haven’t tested it yet but I have no reason to doubt that I can work with a large external monitor, I know I can use a 4TB Thunderbolt drive that I have for storage, and if the back and forth of projects between iPad and Mac/Win actually works for Resolve, I will be psyched.

On to music.  There are a TON of fantastic iPad-exclusive music apps that I have used for years.  I even find the iPad version of Garageband to be preferable to the Mac version for working on casual demos or even recording whole songs.  That said, my primary DAW of choice is not Garageband, not Logic, it’s Reaper.  Wherever a music project starts, it ends up in Reaper.  Reaper, like Resolve, is available in highly compatible Windows and Mac versions, and that factors into my choice of DAW.  The Windows PC in my recording studio is connected to a 16 channel audio interface and when I’m recording something with a lot of tracks (like, I dunno, the cast commentary track for the movie I recently tracked with five other people or a drum kit with eight microphones on it) that’s a nice rig.  When I want to take the session to the kitchen and headphones for editing, I can put it on my Yoga or my MacBook depending on my mood.  I checked if there was an iPad version of Reaper and the answer is, no.  No there is not.

I’ve used the iPad as a Reaper remote control for ages, so it’s a nice accessory in the studio when recording oneself (being your own engineer is its own art…) but that’s not the same.  This is OK though.  There are plenty of iPad DAWs available and as long as the tracks can be exported to stems and imported into another DAW, it’s perfectly fine to use a different app for tracking, even the excellent Garageband.  The question I still have is whether or not I can use one of my USB audio interfaces with it, however.  I have a Mackie Onyx interface that is ideal for recording single sources at high fidelity with near-zero latency.  I use it on Mac and PC all the time.  I have some Lightning cable audio interfaces that I could hypothetically use with my older iPad but I haven’t used them a lot.  For quick demos on the iPad I have tended to just use the built-in mic for scratch vocals or guitars and go back and replace them later with real parts recorded on a proper computer with a real interface.  If I can attach a USB-C hub to the iPad and use my Mackie interface or something like it, the iPad could be a serious contender for a mobile recording platform.  More trials required.

At this point I feel like I’ve established that I can likely use the iPad for general video editing, with some corner cases that still require a full computer, and I have used iPads for demo-grade recording for years and may be able to step that up if the audio interface support pans out.  How about photography?

I like old school analog film photography (I have many Nikons…) and I also have a couple of DSLRs and, of course, old and new smartphones that can take great pics and shoot 4K video.  Basic photo management is a given and RAW image editing is also nothing I worry about.  The options are nearly limitless.  I am not clear, however, on scanner support.  Can you scan film negatives with an iPad?  What about printing?

I’m none too sure and so far I am not seeing a lot to reassure me.  It is my understanding that iPadOS just doesn’t have the underpinnings necessary to connect via USB to printers and scanners.  Also, I’m going to have to explore a bit to see how I feel about the image editing tools out there on the platform.  I’m a wizard with the open-source GIMP, which I have been using for decades in lieu of paying Adobe for the privilege of using Photoshop.  I also make heavy use of an image management application (cross-platform, open-source) called Digikam.  Neither of these is available (or at least not in a usable version) on iPadOS.  I am anticipating that, as with video editing and music production, the iPad will bring something to the table as an adjunct to a computer but won’t be able to completely replace one.

Finally, the big one.  Coding.  Can you code with an iPad?

The answer in this case is: that depends on what you want to code.

If you want to write iPad software using an iPad you are SOL as far as I can tell.  There is something called Swift Playgrounds which appears to allow users to learn coding with Swift and even build iPad apps of a sort, but I honestly don’t know if it’s possible to publish said apps to the App Store, and if you can’t do that, well, you can’t really do anything except make toys for yourself.  Xcode is the industrial-grade dev tool from Apple and you need a Mac for that.  But what if you just want to write HTML, Javascript, and CSS to build websites?

I would argue that anything with a text editor and the ability to render to a web browser would work and an iPad would be just fine for that.  A web developer is good to go.  If, on the other hand, you want to write Python or Java or Rust or Go or anything compiled…  Nope.  No can do.  No alternate compilers or runtimes are allowed on an iPad.  Just no. Stop.  Bad Ryan.  If that ever happened then the iPad would be a proper computer instead of a device.  If you’re some sort of lunatic who likes to code then tough noogies, the iPad cannot be a laptop replacement.  Sorry boss.
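To illustrate the kind of coding that genuinely works in this constrained world, here’s a minimal sketch in plain JavaScript, the sort of thing you could type into any iPad text editor and preview in Safari, no compiler or toolchain required.  The function and the sample posts are my own invented example, not anything from a real project:

```javascript
// Build a minimal static HTML page from an array of post objects.
// Pure string templating: no build step, no compiler, no package
// manager, which is exactly the kind of coding an iPad can handle.
function renderPage(title, posts) {
  // One <li> per post, indented to match the surrounding markup.
  const items = posts
    .map((p) => `    <li><a href="${p.url}">${p.title}</a></li>`)
    .join("\n");
  return [
    "<!DOCTYPE html>",
    "<html>",
    `<head><title>${title}</title></head>`,
    "<body>",
    `  <h1>${title}</h1>`,
    "  <ul>",
    items,
    "  </ul>",
    "</body>",
    "</html>",
  ].join("\n");
}

// Hypothetical usage: two made-up posts, rendered to a page you
// could save as index.html and open in the browser.
const html = renderPage("My Site", [
  { title: "Hello", url: "/hello.html" },
  { title: "Goodbye", url: "/goodbye.html" },
]);
console.log(html);
```

Text in, text out, browser renders it: that loop covers a surprising amount of real web work, which is why the browser-and-text-editor crowd gets along fine on an iPad.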

There are ways around this, sorta.  For example, one can use a remote desktop application on an iPad to operate a different computer running macOS or Windows or Linux, and code via that remote control, but that doesn’t really count.  At that point the iPad is just a peripheral, albeit a very smart one.  If you want to experience wicked fast build times powered by the shiny M2 processor for your language of choice, too bad so sad, shoulda bought a Mac.

At the end of it all, I will be using this iPad more than the one it replaces.  I can do more with it.  It’s a better laptop than my old MacBook Pro or my Yoga for most things; it’s frustratingly limited by software for other things.  It’s not as good a laptop as the M1 MacBook Pro because that’s a proper computer, not a peripheral, but it’s smaller, lighter, and has touch, so absent a touchable Mac, there will continue to be a class of apps that excel on the iPad that are impossible on the Mac, and vice versa.  Therefore, no, the iPad cannot be a full laptop replacement for me.  Maybe for somebody else, but not for me.  And that’s OK, I guess.  It’s an amazing machine for what it is and the software front keeps changing so, who knows?  Maybe now that the iPad has an M2 and a USB-C port the peripheral support will happen.  Maybe the Magic Keyboard will be so successful that users will clamor for Xcode and sandboxed development environments that will let them use iPads for software engineering and Apple will allow it even if it bites into MacBook sales.  I dunno.  I think Apple could do the ideal 2-in-1 machine and they choose not to precisely because they get to sell two machines to power users like me instead of one.  I’ll almost certainly wind up buying an M-powered laptop to go with the M-powered iPad someday when I need to upgrade something.  I’ll use the iPad every day, no doubt, but full laptop replacement with the iPad remains a wish, even as it gets tantalizingly closer to being a reality.

I am a notoriously difficult person to get a hold of.  Texts are missed for days or weeks, emails too, my social media appearances are few and far between, if I don’t recognize a phone number I don’t answer the phone.

This is not because I don’t want to talk to people or because I want to make anybody’s life difficult, but rather because the thought of picking up a smartphone for anything, even to approve a multi-factor authentication request, has become repugnant.  I have no positive feelings about the device.  I resent it.  I want to throw it in a river or toss it as high as I can up in the air and watch it smash on my driveway.  It takes an act of grit and determination to remove it from the charger, unlock it, and check it for messages.

I enjoy going to the physical mailbox.  I enjoy socializing in person.  I enjoy PEOPLE.  I even enjoy talking on the phone and hearing the voice of somebody I love.  But the smartphone represents advertising, scammers, invasive tracking, disruptive notifications, and negativity.  No joy is to be found there.  No warmth.  No positive energy.  Just a cold, dead, screen, filled with vapid content dished up by entities intent on taking my money, my time, or both.  Why on earth would I ever want to use it for anything?  I truly hate that thing.  I’d rather pick up a dog turd than a smartphone.

It wasn’t always this way.  I loved the first iPhone I had (it was a 3G, the second iPhone iteration).  That was a fun device.  Before that I had a proto-smartphone, the Motorola Razr, and I thought it was pretty fun too.  In fact, I was pretty smartphone crazy for the first decade they were available, TBH.  Then, sometime around the beginning of the Trump presidency, the smartphone just came to symbolize all that is wrong with the world for me.  They stopped being fun and started to make me feel terrible every time I touched one.  I don’t know what to do about it.

The smartphone is the thing everybody expects you to have.  People expect you to carry it with you at all times.  I was one of the early adopters.  I get it.  To be a person in the 21st century who avoids social media and doesn’t usually have a smartphone nearby is to basically be a caveman.  But here I am trying to figure out how it would work if I were to switch to a no-smartphone life.  Landline or feature phone only, use a tablet or something else for MFA login authentication stuff.  The only thing I would miss is GPS but my car has that built in now.

As it stands I sometimes go most of a week without even picking up my smartphone so I feel like it would be doable.

I find it bizarre that a device that I once saw as the greatest innovation ever and quite a lot of fun is now something I want to get rid of forever.

Maybe this is coming from the big stupid corporate social media internet we have now, which I hate, or maybe it’s just that I’ve been using smartphones (and proto-smartphones) for over 20 years so the novelty has well and truly worn off, but I constantly find myself thinking about how nice it would be to simplify the number of ways that people can reach me.  One of the overwhelming things about the modern communications landscape is the sheer number of things that people monitor.  They monitor texts, DMs on multiple platforms, app notifications, phone calls, and email.  It’s a lot to respond to and keep up with and I just don’t want to do that anymore.  I only need one.  I don’t need one device with 47 inboxes, notifications, or messaging apps.  I just need one place to be contacted.  When I was a kid there were two ways to reach me.  You could mail me a letter or you could call my home phone number, and if somebody was home to answer you could ask for me, and if nobody was around you couldn’t.  In my early 20s I added email to the mix.  Then PC messaging apps.  Then a cell phone.  Then texts.  The options kept multiplying and I just don’t monitor all of this stuff anymore.  I don’t want to.  Bill Murray has a voice mailbox that people can call to leave him messages about things.  That’s it.  There is no other way to get in touch with Bill Murray.  That’s genius.  I need to figure out something that simple.

And then I need to “accidentally” drop my smartphone in a wood-chipper.  🙂

I started a new job a while back where, for the first time in many years, I am back to doing full time software engineering work that involves me writing code.  Frankly, I’m enjoying it.  I spent the first twenty years of my career writing code and the last ten leading and building teams of other people who write code.

When it was me doing the coding, my days tended to consist of a lot of private battles with logic and problem solving.  I felt mentally sharp and my brain felt alive with ideas and inspiration.  Then I pivoted to leadership and I enjoyed it quite a bit too but in a very different way.  I moved my attention from the very small details within a system, data and logic, up to the larger role that software development plays in the company and the world at large.  When I thought about my work it was about how to make the team better, how to make the product better, or how to improve the user experience for the customer, not about the algorithms, scalability, or testability of a particular function, method, object, or data structure.  I also focused on how to help other people become better developers, how to improve their interpersonal team dynamics, how to identify and hire great engineering talent, basically everything except the creation of software.  I was very good at that and I built some great teams and together we built some great products.  My technical role wasn’t gone entirely, I still provided high level architectural direction, reviewed and approved code changes, and even occasionally coded up a proof of concept for a new solution.  But, when it came to writing the tens of thousands of lines of code that make up a software product, I was strictly hands off.  I was a conductor, not a musician.

I missed coding sometimes because the two jobs are so radically different.  As a coder I spent my time in a cycle of code/compile/run/deploy/validate/repeat for most of any given day.  Sometimes I didn’t talk to another person for hours at a time.  As head of engineering I spent my time in meetings.  All. The Time.  Meetings with department heads, meetings with the CEO, 1 on 1 meetings with my direct reports, team meetings, scrum process meetings, design sessions.  When I wasn’t in meetings I was replying to emails.  And DMs.  And phone calls.  The job was all about communication and coordination and usually I was being looked to for the answers on any given question because I was The Man when it came to anything technology related.

After five years heading up engineering at the most recent company and also having a challenging period in my personal life outside of work, I decided that I needed a break from corporate life.  A little sabbatical.  I left the company, took a breather, and went shopping for another job.  I did not expect to end up where I am now, coding again.

Let me be clear here, there was initially a bit of “career path” snobbery on my part when this new opportunity came my way.  Hadn’t I outgrown the hands-on stuff around the time Obama got elected?  The company I’m now working for is very small, a startup.  The entire company could comfortably fit in my living room.  They already have a very good CTO and don’t need two of them.  What they did need was a very senior and very skilled software engineer with a particular set of skills that just happened to align very well with my specialties.  Still, I said no, two times, when approached about the job.  I hadn’t done full-time coding work in a very long time and it felt a little weird to think of going back to “my old job”, the one I had left behind over a decade ago.  But I agreed to meet with them and hear them out, and they convinced me that this was, in fact, an amazing opportunity that actually fit perfectly with my career plans.  It might even be, dare I say, fun?  I just had to be willing to go back to my roots.

So, I took the job, surprising my wife and myself.  For two months now I have been waking up in the morning, picking up the day’s dev stories, and reacquainting myself with the world of the software engineer.  It’s oddly peaceful.  My wife has a job that entails a fair number of meetings every day and we both work from home, so I often hear her calls taking place in the other room (in fact, there is one going on right now).  I am reminded daily of the kind of job I have left behind, and also reminded that this is a better fit for where I am in my life right now.

Life is short and you need to be happy with who you are and how you spend your time.  A career takes up a massive amount of your time.  It’s no wonder that so many people come to identify themselves with their work.  “I’m Bob, I’m a banker.”  “I’m Regina, I’m a dental technician.”  I don’t self-identify with my technical career all that much.  I am unlikely to introduce myself by saying “I’m Ryan, I’m a software engineering leader.”  I’m much more likely to say “I’m Ryan, I’m a musician and writer.” if I am going to relate to a particular profession or career.

Maybe it’s because I have always seen myself as having multiple simultaneous careers?  The career that has made me the most money over the years is my career in software engineering.  The career that has had the most personal impact on my life and left the largest legacy in its wake is my career as a musician, recording artist, indie filmmaker, and record label entrepreneur.  The career I have been the least commercially “successful” at but that I find the most fulfilling on a daily basis is my career as a writer.

As I was writing that paragraph I found myself thinking that it’s perhaps the first time I’ve ever put it that way to myself.  I have three careers.  Well shoot.  That’s a lot.  No wonder I’m always so busy.  But it’s true.  The dictionary defines a career as “an occupation undertaken for a significant period of a person’s life and with opportunities for progress”.  That definition applies to each of those areas of work that I engage in.  They sure aren’t hobbies.  A hobby is “an activity done regularly in one’s leisure time for pleasure”.  I enjoy building model cars; for me that is a hobby.  The same can be said for fishing, reading, or playing video games.  These are all hobbies of mine.  Tinkering with old cars.  Woodworking.  Hobbies.  That’s not how I approach my careers.  I may pursue aspects of them in my leisure time for pleasure but on the whole they are a lot more involved than that.

When I look at it that way then the pivot in my technology career away from one kind of work and back to another kind of work (and whether or not that was the right move for that career) doesn’t seem like a particularly big deal.  At the end of the day I’ve only ever had one goal in my technology career and that is money.  I don’t do software engineering because of personal fulfillment, or social impact, or enjoyment, it’s just for the money.  I don’t even actually like money.  I wish I didn’t need money.  I think money is a pain in the ass.  But I live in a capitalist society and money is required so, there you are.  Since I never wanted to climb a corporate ladder and my sense of self-esteem and self-worth has never derived from my technology career or any particular job I have held in that career, the specific role I’m filling isn’t all that important as long as it works for the financial aspect of life, and this does.  In fact, moving back to a non-leadership position in my tech career has already had the effect of improving my mental capacity for my other two careers.  I’m getting creative again.

The time spent in leadership work does not support a creative lifestyle.  It is work where you say so much all day that when the time comes to try to say something in a song or prose you are just empty.  You are tired.  The only thing that comes out is hot air.  I’ve struggled mightily to pursue my creative career paths over the last decade since I made that pivot in the tech career path and became a leader.  I never liked the trade-off.  I don’t think the trade-off was worth it, in retrospect.  The core skills required to do leadership and the core skills required to do creative work are at odds with one another.  Leadership involves a lack of focus, the ability to flit from one thing to another, a sort of constant context switching.  Your brain gets used to taking in information in short bursts and every day brings a new series of distractions, discussions, and decisions.  Creative work requires focused periods of heads-down concentration.  The escape from distractions and interruptions.  Freedom to disappear into a flow state for hours or days at a time.  It’s the polar opposite of being an information and people manager.  To spend 8-12 hours a day in the “leadership” mindset and then attempt to pivot to a creative flow state on evenings and weekends is an incredibly difficult trick to pull off.  Every time I managed it I would feel so good, but then work would intrude the next day and before I knew it two or three more months would pass before I found that mental state again.  I was never able to build any sort of creative momentum because every time I found a flow state I was starting over from square one.

I’m still recovering from the leadership experience, to be honest.  For the last few months I have been focused on rebuilding a creative lifestyle supported by my tech career, along the lines of how it was for the first 10-15 years of my adulthood, and it feels really good, but I can tell it’s going to take me a while to get back to having creative momentum on projects again.  Regaining the capacity for extended focused work is one of my main missions in life right now.  I want to be able to go down into the studio and create for 6-10 hours without falling asleep or producing empty crap.  I want to be able to actually make progress on larger projects like albums, films, and books.  I need to rebuild my ability to dig in, stay in a flow state, and make things happen.  Writing software is exceptionally helpful in this regard.  It requires that state.  Give me a couple more months and I’m going to be a new (old) man.

This was definitely the right decision.

The post-Steve Jobs era of Apple has been hard on me.  I was such an Apple fanboi that I had friends who called me iRyan.  I used Macs to design and print the inserts and labels to make the first Nuclear Gopher albums when I was a teenager.  I bought the very first iMac the day it came out.  For years I blogged using a vintage PowerBook 170, the perfect portable writing machine.  I owned each of the first seven generations of iPhone and multiple iPods and iPads.  When they stopped making products that I found appealing, I didn’t really know what to do about it because I didn’t like the alternatives all that much either.  The iPhone went first: I moved to Android when they ditched the headphone jack and I still have no regrets on that score.  Frankly, I despise smartphones in general, so that wasn’t a very painful switch.  I still prefer a phone with expandable storage and a corded headphone option and I will keep buying those as long as they are available.

The real problem was my beloved Mac and the fact that Apple let it languish as an afterthought for a decade starting around 2012.  Zero innovations.  Nothing.  They didn’t even try.  Touch UI?  Nope.  Convertible form factor?  Nope.  Reasonable minimums of RAM or SSD?  Not on your life and screw you for asking.  When they did make changes, they were generally for the worse, not the better.  Removing features and ports and wrecking perfectly good keyboards.  I’m not the only one who felt this way (https://www.wired.com/story/macbook-pro-ports-magsafe-design/).  When the time came to replace my MacBook Pro as a daily driver, I looked around and found that a Lenovo Yoga was my best available option.  Thinner, lighter, faster, cheaper, better, and it even switched from laptop to tablet.  It was my first purchase of a Windows laptop in over 20 years.  I still love that machine.

When Apple announced the switch to making their own proprietary silicon I was a naysayer because it seemed to me that the strategy of Mac marginalization was reaching its ultimate endgame.  The Mac would be a closed platform with a proprietary chip, limited to an App Store like the other Apple devices: not a proper computer for creative types but rather a “device” without the freedom that differentiates a proper computing platform from a mere device.  I saw the transition to making their own chips from a cynical perspective and I.  Was.  Wrong.

The reason Apple went this direction wasn’t to sideline the Mac, it was to inject new life into it by giving it a performance lead that will be practically impossible for anybody else to catch up to any time soon.  I figured on Apple chips being roughly equivalent to Intel chips but proprietary.  It seemed to be the only way Apple could wring more money out of their ecosystem, find ways to profit more from their existing fanbase by closing the system off.  I just didn’t count on the fact that Apple had gotten so good at making high-performing low power chips that the Mac would become the market performance leader to an extent that it will shake the entire industry.

Apple didn’t just make a proprietary chip, they made one that outperforms everything else out there in terms of performance-per-watt.  The Apple M chips aren’t simply proprietary, they are spectacularly fast and they use very little power.  I am writing this on an M1 Pro Mac, it’s 9:36 PM, I have been using the machine since 8:00 this morning without plugging it in and it’s still got hours of battery life remaining.  That’s a genuinely new thing.  I’ve never spent an entire day working on a machine without plugging it in and still had power to spare.  And it’s not like my e-book reader, where low power consumption equals low speed.  This is the fastest computer in the house by the Geekbench tests I ran.  I put it up against my Ryzen 7-powered gaming PC and the Mac outperformed it easily.

At the end of it all, Apple was let down by Intel and that would have kept the Mac in the doldrums for many years to come.  Rather than attempt to innovate on color, form factor, etc. or risk cannibalizing iPad sales by incorporating touch into the Mac (the very things I wished that they would do), they chose to invest heavily in becoming the world’s leading chip maker via iPad and iPhone development, let the Mac collect dust over in the corner and then, when it was clear that they could make processors that were better than anything anybody else was making, move the Mac to the new architecture.  That was a long and, to me, annoying process as a Mac fan but suddenly the Mac is Back.

The new lineup of M3 Macs is the first set of machines in 10 years to get my attention.  They are no longer constrained by meager RAM, they default to 1TB of internal storage (FINALLY), they have all the connections a person could want (MagSafe, an SD card slot, a headphone jack, no more dongles, it’s 2012 all over again…), they run forever without even needing to be plugged in, and they are the fastest laptops money can buy, period.  No, they don’t have touch, but they also don’t have the stupid Touch Bar.  No, they are not upgradeable, but my 10-year-old MBP tells me that I should be able to expect a seriously long useful life for a machine this powerful.  Yes, they are the exact same form factor they have been forever, but so are televisions; maybe that’s just what a laptop should be, I dunno.  The convertible IS cooler but…  So, point is, they haven’t exactly re-invented the laptop, it’s more a reversion to what was working before they went off the rails, but wayyyyyy more powerful.  If you want a laptop for video editing, audio production, software development, and writing (the things I do on a regular basis), they are suddenly the best option again for the first time in a decade.  The keyboard doesn’t even suck anymore.  I did not see that coming but boy am I happy about it.

They are not limited to the App Store as I had feared either.  The M-Macs are the first actual professional grade machines Apple has made in so so so long…  I’m late coming to this confession not out of pride, no, I’m happy to be proven wrong, but because I had to use one for a while to see the difference.

I didn’t sit out the last 10 years of Apple machines.  I have been using them for work this whole time and I have continued to use my trusty 2012 MacBook Pro as well.  The laptop I was using when I left my last job was one of the final Intel-powered MBP laptops and it was…  fine.  I swapped out a Thinkpad for it and it, you know, worked and stuff.  It wasn’t noticeably faster or better, it just ran macOS instead of Windows.  Yawn.  But a few weeks back I started using an M1 Pro for work and I’m like…  Ahhhhh.  I see.  The penny has dropped.  I am a convert.

I’m slow sometimes.

As a software developer person I think it’s absolutely vital for Apple to have a pro level laptop again because they have the Vision Pro headset platform coming out and these are the machines that will be used by geeks like me to write software for that platform.  It wouldn’t have been possible with Yet Another Intel Laptop.  They needed something different and the ARM-powered M-chips are apparently the thing.

So, while I do actually love my foldable Yoga machine with the touchscreen and all that, I will be returning to the Mac as my daily driver.  Not yet, not today, but probably with the next revision.  The M4 or whatever.  I’m looking forward to it.  (I’m still sticking with Android though until they bring out an iPhone with an SD card slot and a headphone jack.)

As a side note…  The reason I think these machines will shake up the industry is not simply that they are very fast or that they use RISC processors.  That was true back when the PowerMacs roamed the earth.  Those were non-Intel compatible, RISC-based, and very fast for their time.  No, the reason is ARM.  The Apple M-chips are based on the ARM chip architecture (originally the Acorn RISC Machine).  So are the iPhones, iPads, and most of the devices you own, including Chromebooks and all those Android phones.  It’s already the case that ARM chips power most of the mobile computing world, people just don’t think about it much.  ARM is an alternative chip instruction set to the Intel x86 instruction set, and all you really need to know is that Intel is kinda screwed here.  The performance-per-watt of these Macs is suddenly causing everybody to want to move from Intel to ARM, and thanks to years of mobile devices being based on ARM, there isn’t really anything stopping the transition.  Windows will be running on ARM-powered machines too and, presumably, the Apple head start in this space won’t last forever, but it’s a helluva head start.  Windows machines with ARM chips will appear that are just as fast and powerful as the Macs, but it will take some time.  This is going to be one of those big sea changes in the industry that happen every now and then, but it’s going to sneak up on people in general.  I don’t think a lot of people had “everybody moving to ARM processors” on their computer industry bingo cards, but the fact is that it just makes sense.  This is the way to make chips that are very very fast and very very power efficient, and it’s technology that is proven and easy to license.  Anybody can make an ARM chip and almost everybody does.  Now Apple has shown just how powerful those chips can be, and it will be hard to defend the old architecture when its last advantage, performance, is gone.

I’m just glad there is finally a Mac worthy of the lineage back on the market and it’s just in time for all the creative endeavors I have in mind with the return of Nuclear Gopher.  Awesome.

I have COVID.  It’s something I had managed to avoid up to this point because I have a history of chronic bronchitis and pneumonia as well as asthma.  A killer lung virus was not high on my Christmas list.  The good news is that I managed to avoid a COVID infection for over two years, and in that time the medical treatments for COVID have advanced to the point where my case has so far been manageable.  I was quite sick on Friday afternoon and by Sunday I was worried about developing severe complications, so I did the smart thing and went to the Urgent Care.  The doctor agreed that I needed intervention and prescribed the new anti-viral for COVID, Paxlovid.  Since I started taking it I have noticed a trend towards getting better rather than getting worse, and I couldn’t be happier.

I’ve missed three days of work so far and I’m really tired and taking a lot of meds but I don’t see an ICU in my future if this holds.  Knock on wood.

Anyhow, one of the side effects of being laid out sick for a few days is that I tend to catch up on media.  Shows and movies I’ve been meaning to watch, books I’ve been meaning to read, games I’ve been meaning to play.  The last few days have been no exception.  I binged all five Dirty Harry movies, watched the second season of Russian Doll, read the final book in the EXCELLENT Noumenon trilogy (Marina J. Lostetter is maybe my new favorite sci-fi author if she can crank out this level of work consistently…  wow) and spent some time playing Beneath a Steel Sky on my new MNT Reform Linux laptop, reacquainting myself with the world of non-corporate computing and open-source in a purer form than I normally use.

What I haven’t done is make additional progress on my new album, but that’s OK.  Awkward Bodies is in the closing stages of recording our new album, which has been a ton of fun.  I still have some bass parts to re-cut and some backing vocals to lay down, but there is an album tracked and getting ready to go out into the world.  This is very exciting to me as it represents the first album I’ve made in collaboration with a band in more years than I care to mention.  My solo album will be a nice follow-on.

I’ve had some time to ponder while laying around for the last few days and one thing I’ve pondered is the fact that I am almost constantly making things, fixing things, restoring things, writing things, but at some point in the last decade or so I stopped aiming to make larger projects out of the smaller things.  On any given day I usually start and complete one or two small projects.  I write a journal entry or repair a piece of technology or build something.  So why, then, am I no longer trying to write novels, develop software applications, make movies, record albums, build businesses, or any of that?

I’ve never lost the creative urge, but I’ve lost the ambition to try to make anything coherent, larger, more meaningful.  I have many theories as to why, and I have written about them in many a journal entry.  I haven’t always even been particularly sure it was actually a problem.  So what if I am no longer trying to do anything big?  It was never really necessary in the first place, if I’m honest with myself.  I just always thought that “making a dent in the universe” had a nice ring to it.

But something else has been going on, something less about big intent and more about small habits and patterns and over the last two years I’ve become more and more aware of those changes as underlying causes.  I can’t, and don’t, blame everything on the culture or technology, but I am a person who has spent most of my adult life living in close symbiosis with technological advances in computers and communications.  It’s my job, and something I’ve been interested in since early childhood.  With each adaptation I have made to technologies (home computers, the internet, mobile phones, smart phones, social media, etc.) I have changed my habits and daily patterns.  I have very much been both master and servant to my devices and their needs.

I have finally learned that my actual thought patterns, my levels and lengths of attention, my capacity to absorb and retain and use information, my sleep cycles and physical fitness, all of these are shaped by my habits and activities throughout the day and those habits and activities are shaped by my relationship to communications technology.  I have also learned that it is possible to intentionally reshape that relationship, to regain control of it, even if my career is based in those very technologies.

I learned a long time ago from Buddhist teachers that it is very difficult to change your mind and from there change your self.  Your mind is the core of your self.  Waiting for a change of mind or thought before making changes to action is a lovely way to stay mired in your thought patterns for all eternity.  The best way to change your mind is to change your practices and behaviors and allow your mind to change in response to the new stimuli.  Ergo, if I want to have more attention span, if I want to regain the capacity for long-form creative work, if I want to redevelop the ability to be present and focused and to be ambitious with my intentions, the first step is to change the behavior patterns and practices that are creating that mental state.

So, that was what I set out to do.  I made a conscious effort to rearrange my relationships to the technologies that have mostly shaped my life for the last 30 years.

I would like to say that I had a clear plan that this was what I was doing, but that would be giving myself too much credit.  I just knew I had some unhealthy patterns that were creating negative mental states and I hoped that altering those patterns would lead to changes of mind.  I wanted to stop being tethered to screens, stop responding to a constant influx of updates, messages, and notifications, stop chasing an endless flow of information, just stop.  I wanted to start to live more like when I am backpacking.  One foot in front of the other, present with the trail, not half-connected to some fake meta-universe.  I decided to change my tech in order to change my patterns so I would change my brain.  I won’t go through everything that happened, everything I tried, but I will summarize by saying that I decided I needed a divorce from the endless feeds of social media, podcasts, and the news.  My smartphone needed to stop living in my pocket.  My computing, whenever I chose to do it, needed to be rigorously controlled, with me totally in control of the experience and nobody else’s agenda pushing into my space.  No ads, tracking, or reselling of myself to data brokers.  And last, but certainly not least, I needed to find and learn how to use disconnected creative tools so I could be creative again without depending on the devices that were disrupting my brain.

Hence, a return to typewriters.  Hence, a return to vintage, pre-internet “retro” computers.  Hence, fountain pens.  Hence, film photography.  But the retro-analog thing wasn’t even really the point. It was more important to my project that I adopt technology that was disconnected than that it was analog.  The goal was to return to focus, disconnection, presence of mind, concentration, not to make a fetish out of old gear.  So, I also adopted two very modern solutions: I acquired a standalone 32-track digital multi-track recorder so I could record music without using a computer and I acquired a computer that is entirely free of proprietary hardware and software and which has nothing on it or about it that I did not choose.

I ordered this computer a couple of years ago.  It was made by some hackers in Germany as a “free as in speech” project that was crowdfunded.  No mega tech corporations involved in making the hardware or the software.  It’s called an MNT Reform and there are only a few hundred of these machines in existence and it took over two years to get it delivered.  It was worth the wait.  It’s a symbol, sure, but as an artist I’ve always honored the power of symbols.  It’s also a tool that makes me feel free when I use it, rather than making me feel as if I’m being guided along by some invisible hand whose motives are beyond me.

I’m kitted out.  I can write, shoot, record, edit and publish without giving over my control or agency.  My communications patterns are radically altered.  I feel healthier than I’ve felt in a decade.  I don’t yet know what I’m going to create, but I can report that the changes in my habits and patterns over this stretch of time have started to create the hoped-for changes in my thoughts and feelings.  I may not yet be spending extensive hours in the recording studio, but I have been enjoying spending extensive hours in the darkroom and behind a typewriter or a camera or playing a guitar.  I may not yet have written a novel, but I have found new joy in writing and in spending focused time doing it; indeed, I’ve developed several new types of writing practice for myself.

For many years, as far back as a decade, I’ve felt unglued, unmoored, as if the world was flying by at a pace that removed all joy or even the opportunity for it, like every day was an endless feed, nothing really mattering for more than a minute or two, nothing able to really stick.  Everything was one little dopamine hit after another and nothing really made a dent.  I wondered if that was just a side effect of aging or my career or other events in my life, but the fever really took hold, and finally broke, during the Trump presidency, the pandemic, and the overall insanity of world events of the last few years.  I came to realize that, yes, the world is an endless feed of events happening and, no, nothing inherently matters for more than a moment or two, if you always move on to the next thing.  And there is always going to be a next thing.  You cannot ever catch up, you cannot ever win, you cannot ever make it change.  You can, however, change your relationship to it.  You can stop being addicted to it.  You can detach from the streams and services and corporations and media outlets and technologies that thrive on your attachment to them.  You can choose to live fully in the life you have on a daily basis rather than vicariously through the ambient intimacy and perpetual thirst trap of modern digital culture.  Sure, it might be an over-correction to replace your 5G smartphone with a quill pen you hand-carved from a found turkey feather, but maybe it’s not.  Maybe it’s exactly what you ought to do.  At least for a while.  Give your brain a chance to catch up, slow down, chill out, and reconfigure.

At least, that’s how it’s looking to me.  Look at that, I just wrote over 1900 words.  It’s working.

1933 Klein-Adler 2

Yesterday I drove to an estate sale in southeastern Minnesota and purchased a 1933 Klein-Adler 2 typewriter.  I am not even sure how many typewriters I now have in my possession.  This year of our lord 2020 has turned into “the year Ryan started collecting old typewriters”.  I blame the pandemic.  Why not?

It started innocently enough.  I had a lot of pandemic downtime on my hands and when I have idle time I tend to write.  I write almost every day.  I type, I scrawl, I scribble.  Pens, computers, typewriters, I use them all.  I seem to have a non-stop need to be saying things and when there isn’t anybody there to say them to, I write them down. 

I’m writing this blog post on a computer: a modern laptop with modern software on the modern internet.  Nothing particularly unusual there, right?  I have learned, however, that I don’t like modern computers for certain types of writing.  I can’t write a poem on a computer, for example.  I’ve tried.  I can only seem to do that with a pen on paper.  More insidiously, modern computers contain within themselves too many distractions and temptations for me.  The temptation to hop online and look something up and then spend the next three hours on social media or reading Wikipedia articles or stupid viral listicles instead of writing is ever present.  Even if you avoid these grosser temptations and actually do some writing, the ways in which modern computers enable real-time editing, spell-checking, auto-complete, word suggestions, and grammar correction change the nature of the writing process.  On a modern computer you can just kinda spew out whatever is off the top of your head, revise it as you go so that it has just enough polish to be dangerous, and click publish to share your work with the world.

It’s powerful, but I find that it leads to a shallower, less thoughtful and deliberate, writing process.  I have revised this post extensively as I have written it, second guessing myself, wiping out whole sentences with a click.  Last week I read the new Obama book, “A Promised Land”, and he expressed how I feel about word processing fairly well when he explained his decision to write his book in longhand on yellow legal pads by saying, “I still like writing things out in longhand, finding that a computer gives even my roughest drafts too smooth a gloss and lends half-baked thoughts the mask of tidiness”.

The “mask of tidiness” covering “half-baked thoughts” may be fine for a blog post, a tweet, or some other ephemeral bit of word salad, but when I really want to Write this is not what I’m looking for.  I’m looking for a process that will force me to really be present for what I’m doing.  I have yet to discover any better way to do this than to use a typewriter. 

Remington Quiet-Riter

This is something I have known for a very long time.  I bought my first typewriter (a lovely 1950s Remington Quiet-Riter https://www.antikeychop.com/remington-quiet-riter-typewriter) in fifth grade when I was convinced that I wanted to grow up to be H. G. Wells and decided to write my first of many unpublished/uncompleted novels, “The Second Men In The Moon”.  My parents gifted me a more modern, electrified machine in middle school, a Smith-Corona SL500 (https://typewriterdatabase.com/Smith+Corona.SL+500.86.bmys), and I used it through high school as I wrote such unreadable classics as “The Palace of Conservative Haircuts”.  I didn’t even know about word processors until I was introduced to the Macintosh during my sophomore year in high school in a creative writing class.  I had only used computers for programming and video games; I never thought about computers as being useful for writing.

Smith-Corona SL500

After high school I attended CDI Computer Academy and embarked on a career in software development that would span the public explosion of the internet, the invention of smart phones, and all of the other high tech innovations of the last 25+ years.  In that first year at CDI, I had a class in which I learned typing and another in which I learned to use the old DOS word processor WordPerfect.  I moved on from typewriters, viewing them as obsolete.  I don’t even know what became of my Smith-Corona SL500.  Probably a Goodwill donation or a trade in at a pawn shop for a few bucks.

Olivetti Lettera 36, aka: “The Gateway Drug” (image from MassMadeSoul.com)

This changed about eight or nine years ago when I encountered a little electric typewriter at a thrift shop that was just, well, COOL.  It was an Olivetti Lettera 36 (https://www.massmadesoul.com/olivetti-lettera-36).  I knew nothing about Olivetti, I knew nothing about their history of iconic industrial design, heck I knew almost nothing about typewriters, but the thing was just so damn COOOOOOOL and it was, like, ten bucks or something, so I brought it home.  I found out pretty quickly that I missed typewriters.  After years and years of writing on computers, the typewriter felt so radically different that it made me think differently as I wrote.  A word processor and a typewriter both end up giving you words at the end of the day but the process is just so different, it was like playing an acoustic guitar instead of an electric guitar: the thing written would be shaped by the tool used to write it.

The typewriter certainly seemed to promote more creative writing and I fairly quickly put the little Olivetti to use in my recording studio as part of my songwriting process.  When I write songs it’s usually something like this.  I get a melody and maybe one or two lines in my head.  I start listening to that part of the song and wait for my brain to fill in the rest.  Then I go grab something to preserve whatever little song seed I’m jamming on.  I will sing into a tape recorder or voice recorder app if I have to, or I will scribble down some lyrics on a sheet of paper.  Later on I will get in front of a keyboard or pick up a guitar and work out the song.  I will expand and revise the lyrics and write down the chords once I discover what they are.  The resulting song sheets are messy with lines crossed out, chords written in the margins, and sometimes whole verses and choruses in the wrong order or in totally different notebooks from each other.  Fun, but not easy to work off of when you want to, oh, say, record the song.

Now, I could put all of that mess into the computer, and I usually did, but I would always find myself wanting a printed paper copy to scribble notes on, reference, and play along with.  I forget my own chords and lyrics, especially when a song is still new to me.  I just needed one hard copy that didn’t look like the ravings of a lunatic.  The problem was that ink jet printers SUCK almost as much as my handwriting.  Seemingly every damn time I would try to print up a hard copy, the ink would be unreliable, nozzles dirty, whatever.  If you don’t print with an ink jet every day they are basically useless.  I didn’t have a laser printer with the nice dry non-shitty toner.  So, instead of working on my song I would wrestle with the printer for 20 minutes cleaning nozzles and then give up and go eat a bag of Doritos and be sad.

The typewriter solved this.  I could just turn it on, grab a sheet of paper, and type up my song.  Woot!  I was so happy with this minor miracle of convenience that I started eyeballing other typewriters at other thrift stores but I was faithful to my little Olivetti until one day when it died on me and I couldn’t make it work anymore.  Crapsticks.

Underwood #5

I wound up with a rather unexpected next typewriter, an Underwood #5 (http://www.thisisdrivel.com/typewriters/UnderwoodNo5/UnderwoodNo5.html), a machine that was probably the most common typewriter in the world prior to the mid-1920’s.  You cannot get much further from a Lettera 36 than an Underwood #5.  The thing weighs about the same as a Toyota Camry and has an equivalent amount of sex appeal.  It brings to mind adjectives such as “workmanlike”, “sturdy” and “reliable”.  Ain’t nobody slinging an Underwood #5 in their carry-on for a bit of late night writing during a weekend jaunt.

But it was functional and charming in its own way and I lugged the damn thing home.  After a while I partially disassembled it with the idea of turning it into a USB Typewriter (https://www.usbtypewriter.com) but I never completed the conversion.  The fact that it wasn’t working, however, did lead me to snag another machine at yet another thrift store.

Smith-Corona “Corona Seventy”

This next typewriter was another electric from the 70’s, a Smith-Corona “Corona Seventy” (https://typewriterdatabase.com/1970-smith-corona-clipper-seventy-deluxe.3324.typewriter).  Like the Lettera 36 before it, the Seventy had a kinda cool design, was portable, and was a lot of fun to pound away on.  Also like the Lettera 36, it started experiencing minor malfunctions in its aging electric components.  This whole “old electric typewriter” thing was proving to be fairly unreliable, so I looked for something old and cool but mechanical, no electricity.  I figured that would be less trouble.  Eventually I found a 1965 Royal Aristocrat (https://typewriterdatabase.com/1965-royal-aristocrat.13819.typewriter) and pretty much fell in love.  The Royal looked good, typed well enough, and was always reliable when I needed it.  It was portable and did everything I asked of it.  For a few years it was “my typewriter”.  But then 2020 happened.

1965 Royal Aristocrat

This year I started doing so much writing that the limitations of the Aristocrat started nagging at me, just as the irritations of trying to write on a modern computer did.  I fought with the Royal a bit more than I would have liked and started combing online auction sites, online thrift stores, Craigslist listings, and online estate sales with an eye towards finding The Perfect Typewriter.  Things went, um…  a little off the rails.

First I did research.  I watched the documentary California Typewriter.  I read blogs.  I searched for “best typewriter for writers”.  I found many opinions and much interesting information.  I started to notice that usually when the appearance of a typewriter caught my eye it was an Olivetti.  They had style.  I also noticed that most people seemed to agree that the three most “writerly” typewriters of all time were the Olympia SM-7, the Hermes 3000, and the Olivetti Lettera 32.  The internet was full of loving posts by diehard aficionados singing the praises of the three machines.  I also learned that many typewriters are essentially Lettera 32 machines with different bodies, including what is probably the most eye-catching typewriter of them all, the Olivetti Valentine.

My 1969 Olivetti Valentine

There was a pandemic on.  I was in a bad mood.  I was scared, and figured that if I got COVID, with my history of chronic lung issues, I was a goner.  So, I splurged and picked up a Valentine.  It was not a thrift store special.  This thing cost a couple hundred bucks but when I saw it and used it for the first time I was bitten by the typewriter bug HARD.  We’re talking WELTS.  The resulting infection caused me to experiment with all sorts of typewriters.  All year I’ve been haunting estate sales and auctions, grabbing any unloved and unwanted Olympia, Olivetti, Smith-Corona, Adler, Remington, Silver-Reed, or Underwood that happened to strike my fancy.  I’ve learned basic typewriter restoration skills and bestowed a few machines on others who were typewriter-curious.  I have a pretty solid little collection at this point.  Art deco machines from the 30’s and 40’s, East German behemoths and Swiss beauties from the 50’s, compact and swift Italian and Japanese machines from the 60’s and 70’s, I’ve got a machine for every mood and every whim.  It’s a fun little collection and not exactly a bank breaker, since so many people consider the typewriter to be quite obsolete.

I’ve found that if I keep my eyes open, there are excellent, high quality, solid, beautifully engineered machines available all over the place for around the price of a couple stops by Super World Buffet (yes, I measure monetary expenditures in Chinese buffet visits, don’t judge me) and usually they just need paper, a ribbon, and maybe a little light lubrication and cleaning. 

This 1956 Olivetti Lettera 22 was purchased off Craigslist for $30 from the original owner in mint condition with a case.

It’s fun to have a new hobby and each time I take one of these little machines out for a bout of writing, I find myself inspired in a way I rarely am with a computer.  What will I type with today?  I don’t know, but I’m sure it will be a rewarding experience.

Edit: I went with the 1958 Smith-Corona Silent-Super.

Silent, super, writing bliss.

I keep seeing advertisements online for lite/small/basic/dumb phones.  These are usually promising to break the user away from the mind-numbing addiction to the doomscroll and allow them to once again see the world around them.  I am guessing that none of these products actually have much of a likelihood of succeeding in the marketplace because, at the end of the day, they are the electronic equivalent of a healthy diet and we all want pizza.

But I get the appeal.  I have gone to great lengths to simplify and cut back and escape the ultra-intrusive and soul-crushing miasma that is the modern internet, social media, news, hell, even the gas pumps have Maria Menounos talking your ear off whenever you just want to fill ’er up.  The world is loud.  Everybody everywhere wants a piece of everyone else, everybody wants to be viral and sticky, and every available niche is being filled by noise.  It’s awful.  No wonder we tell ourselves that a simpler phone will save the day.  Seems like such an easy solution, but that’s an illusion.  The phone isn’t the problem.  The phone is a delivery device for the poison of our modern culture; sure, the smart phone is to psychological poison as the cigarette is to carcinogens, but the real problem is the fascination with and addiction to the gazillion small hits of dopamine we get from ingesting the latest stupid headline, the latest trivial status update, the latest tweet, the latest TikTok video, the latest, the latest, the endless content ocean.

I put it to you that the mindless consumption of endless hours of low value content and ephemeral news (always mostly bad) has never, in the history of humanity, been a healthy activity.  It was a little harder to do, back in the day, I’ll give you that, but only just.  You know what the Fox News MAGA Boomers have in common with their Zoomer grand-kids?  The former keep a television on during all waking hours, feeding themselves an endless stream of targeted information chosen by an editorial staff in the service of advertisers, and the latter stare at a phone during all waking hours, feeding themselves an endless stream of targeted information chosen by an algorithm in the service of advertisers.  The Venn diagram is a circle.  Only the content differs.  The narrowcast, tailored, corporatized “social” web and app ecosystem is no more diverse, empowering, educational, or conducive to free thought than the old broadcast radio and television it has superseded.  At least there were three major networks broadcasting television to our parents’ generation and, you know, PBS, but for us it’s one bubble, crafted by tracking cookies, collaborative filters, and virality, an echo chamber at the personal level that gives Fox News programming a run for its money in its extreme lack of variety.

Reality has been so curated for us, our ideas and desires and personal situations, our friendships and family connections have been productized, monetized, and exploited so heavily, that we find ourselves in an almost absurd predicament as a society.  We technically have more access to the information of the world than any population in mankind’s history, and yet on a daily basis it takes such a deliberate, strenuous effort to encounter it that it might as well not be there.  We are the least informed consumers, the least enlightened populace, and the most radically misinformed bunch of sad sacks that the modern post-enlightenment world has ever seen.

Of course, this is all in the service of scratching the itch of boredom.  We work at our jobs all day and we crave something interesting, and corporations are really, really incredibly good at giving us diversions.  Allegedly we want to know what’s happening in the world, connect with our friends, laugh at something silly, but really, it’s just that we are bored and don’t know what to do with that novel feeling in a world so filled with stimulation.  In fact, I would go so far as to say that we don’t even have a chance to get legitimately bored.  We simply find ourselves lacking a diversion, which is not the same thing.  We have forgotten how to just exist to such a degree that we equate being alive with boredom.  We get an idle minute and we have to decide to be unconscious (sleepy time!) or to seek out something diverting.  Diversion wins.  We happily step into the most convenient available trap.  The Phone.  The TV.  Potayto.  Potahto.  So, you see, this isn’t a new problem and a simpler phone isn’t much of a solution.  What we need to do is learn to do nothing and have it be enough.  Allow inaction to occur.  Don’t call it boredom.  And don’t seek a diversion.  Here are some exercises you can try.

Exercise: Turn off all electronics.  Put them in a totally separate room.  Make a meal.  Eat it and give it your full attention.  Don’t shovel it in your mouth while scrolling Twitter.  Taste it.

Exercise: Switch out some piece of media consumption that you currently use a device for with its “obsolete” equivalent.  For example, you like your e-reader?  Read a print book for a change.  You love Spotify?  Dig out those old tapes or records or CDs from the closet and play one.  Experience the difference between streaming “media” into your bubble and the physical act of interacting with a physical piece of media.  Read a paper newspaper.

Exercise: Remember back to things you used to do to entertain yourself before you had a smartphone that you don’t do anymore.  Do that for a day.  See how it feels.

Exercise: Schedule times to be online for a week but otherwise, be offline by default.  For most of human history, as recently as 10 years ago, people were not carrying a phone around with them 24/7 and could not be pinged, messaged, rung up, or tweeted at, and somehow, somehow, these brave ancestors survived.  Imagine a world in which your time was respected, in which nobody expected you to be waiting by the phone 24/7, nobody panicked if you went a day or two between texts.  How much pressure would that take off your shoulders?  How much relief would you feel?

Exercise: Find a news outlet that is honest, reliable, and without partisan bent and (if you must consume current events) make that your first stop of the day.  Before you encounter memes, spin, or your own bubble, try to be aware of a neutral reporting of facts, sans opinions.  Then, for bonus points, form your own opinions.

Exercise: Track the trackers.  Add an extension to your browser that alerts you to how many organizations track your every move online and block them.  Observe changes in your online experience.  Opt for media interactions that don’t track you and, even more to the point, don’t monetize your activity.  Buy products, not access, copies, not subscriptions.  Companies don’t track you if you aren’t being monetized.  When is the last time you actually owned a copy of a new album rather than just streaming it?

Look, I get it, we aren’t ever getting rid of this technology.  You’re not going to live this way all the time.  These are exercises intended to make you think about the choices you’re making on a daily basis.  Practices to gain some perspective.  Things you can try doing to make yourself more aware of the ways you are being catered to, manipulated, handled, exploited, and sold.  We aren’t going back to the “good old days”.  There aren’t any.  We are, however, going to wind up in Idiocracy if enough of us don’t get out of the bubbles and into reality.  So, you know, stop reading this.  I’m not tracking you or monetizing your eyeballs but still, get offline.  Paint something.  Play that xylophone you got at the yard sale.  Read a physical book.  Sit quietly in a room and listen to your environment.  This whole online thing is a fiction and you know it.  Shoo.