Writing a blog in 2023 next to the incredible proficiency of contemporary AI is like hunting a sea lion from a wobbly skin-on-frame kayak with a wooden harpoon while a gas-powered waterborne factory ploughs a trawl net through the sea floor, scooping up countless fish, sea flora, and junk with mechanical zeal—outclassing the lone fisherman’s performance to the point of relative embarrassment.
The AI models of today that have captured our imaginations grow in capability through the infinite rivers of yottabytes that human civilization produces:
That’s almost every sound, every song, every image, every video, every conversation, every artifice, every known and studied organism, every design, every book, every blog, every review, every stream—available on the web and ripe for parsing into a data set.
A well-trained model is the gestalt of our noisy behavior as documented on the web, distilled into a superhuman signal: the sleepless collaboration of billions of human beings, alive and dead, and our collected, purchased, and scraped atomic actions.
That figurative AI-trawler next to my wobbly blog-kayak is fueled by the data we feed it with our every action; the more data we produce and the more processing power dedicated to parsing it, the greater its capability.
And, for now, this process of AI maturation is only impeded by:
- the cost of electricity1 2 and availability of silicon,3
- the (rapidly rising4 5 6) ceiling of investment into AI technologies,
- the number of transistors we can cram on a chip,
- the gradual progress of human ingenuity and entrepreneurship as we figure out potential applications for AI,
- and the facile legalities of intellectual property,7 8 9 10 11 12 which can be circumvented by surreptitiously scraping the web with curl scripts and burying the exact sources of data within the incoherent noise of countless other sources used to train AI models.
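That scraping pipeline has two halves: fetching pages (the curl part) and stripping their markup into plain text for a data set. Here’s a minimal sketch of the second half using only Python’s standard-library `html.parser`—the class and function names are my own illustration, not any real crawler’s API:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text of an HTML document, skipping <script>/<style>."""
    def __init__(self):
        super().__init__()
        self._chunks = []
        self._skip_depth = 0  # >0 while inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-whitespace text outside scripts and styles.
        if not self._skip_depth and data.strip():
            self._chunks.append(data.strip())

def extract_text(html: str) -> str:
    """Strip markup from an HTML string, returning its visible text."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser._chunks)

# extract_text("<p>Hello <b>world</b></p>") → "Hello world"
```

Multiply that by a few billion pages and you have a corpus; the individual sources dissolve into the aggregate.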
When every interaction online can be the product of a digital homunculus, when every voice (audio and textual) can be deconstructed and repurposed to enact the fantasies of humans and computers like shiny pieces of Lego, and every face can be stitched onto a virtual marionette or subsumed into the construction of a wholly nonexistent human character who enthusiastically reviews cheap products on Amazon for ¢s apiece—to some theorists, the internet as we knew it, as a forum where humans convene to connect and make things together, is ‘dead’13 and conspicuously devoid of organic life.
All of this raises the question:
Why am I writing a blog in 2023?
Where’s the connection—the authenticity?
The artifacts we create, blog posts included, can be ways of connecting with others; fractional and curated insights into the minds of other human beings. From these insights, we can learn about the experiences, perspectives, knowledge, and values of others.
This authenticity is like the fingerprints of a sculptor preserved on a clay pot; an expression of their wholly unique condition. There’s something intimate about a fingerprint, a strand of DNA, and somebody’s creations; they’re compressed glimpses into an individual’s experience of life. There’s a personal-truthfulness to them; an ‘authenticity.’
But if you’re looking for that kind of connection here, much as with the cover letters, resumes, university applications, marketing copy, graduate theses, and content of today, anything written in my blog could’ve been wholly or partially scribed by a machine:
Are these thoughts even my own?
Are the misspellings and syntax errors just contrived to create an illusion of authenticity?
Did I even write this?
You’d have to talk to me in person, or scrutinize the journals I’ve written by hand, to figure out some pattern of thought and see if my posts here echo the shape of that voice.
And even then—I’m sure these articles could be entirely written by generative AI, having scraped whatever Markdown notes and .odfs I have scattered on my hard drives.
Where does that leave us?
I think this is a great moment to be authentic about the whys and hows of my approach to this blog.
The Whys & Hows
1 · This blog is an experiment in informatic self-determination.
By publishing anything on the web, we’re inevitably training numerous models with our verbal, behavioral, and creative fingerprints.
I don’t think there’s any way to completely sanitize the dozens of data lakes out there of my fingerprints, and much of that data is going to fuel AI models anyhow (quietly bought or silently scraped), so I’ve decided to actively participate in this process in an attempt at informatic self-determination.
This blog is an experiment in consciously contributing the fingerprints I’d like to leave behind to the machine learning models that will eat them up anyhow.
Just as a developer committing their code to GitHub understands that their uncompensated work will be used to train GitHub Copilot,14 I’m posting here aware that whatever I write will be hoovered up by Googlebot or a wget (or something) from OpenAI (or whoever) to train something that could be much more performant than I am, and mixed together with the behavioral exhaust and knowledge of who-knows-how-many other people.
This blog is an experiment because I’ve got little clue of what the results of maintaining it could be:
Is blogging a waste of time?
Is it a good thing?
A bad thing?
Is it of any consequence at all?
Do I stand to gain anything?
Would anybody read this despite the unbelievable overabundance of competing content available for consumption in the attention economy?
Who knows! Let’s find out.
2 · This blog is an exercise in human expression.
I think there’s a risk of generative AI deterring creative expression with its incredible convenience:
Why try to manually carve my way through words to express myself when there are tools available that will choose my vocabulary for me?
An overreliance on AI for creative output and expression and the resulting lack of creative exercise could atrophy one’s ability to create and adapt.
Take the generation effect, for example: We recall information better if we generate it ourselves (through searching for it in our semantic memory) than if we’ve simply read, heard, or been ‘given’ it through some form of consumption.
A seemingly infallible and highly performant AI can give us answers to our questions and generate awesome artifacts at speeds far greater than our own, but this comes at the cost of us not having to generate this information ourselves; all we have to do is request it with a prompt and consume the output.
This lack of cognitive challenge can atrophy our ability to ‘get good’ at something and build expertise; our recall isn’t being strengthened through the generation effect.
If we can’t express and process our thoughts, experiences, and values well because we’ve become overreliant on the assistance of generative AI, we’re vulnerable whenever our technological augmentations fail to perform, placing us in the uncomfortable situation of using our atrophied abilities. This is the same sort of atrophy of knowledge, skill, and memory that Nicholas Carr examined in The Glass Cage: As our technology becomes more capable, we risk becoming less capable ourselves when we come to use it as a crutch.
So I’m using my blog as an exercise in expression: It’s a way for me to keep exercising the cognitive ‘muscles’ of self-expression, sensemaking, and creativity.
My identity’s been wrapped up in these skills—I don’t want to let ‘em go, so I feel compelled to keep them sharp. That’s what you’re seeing here.
3 · This blog is intended to contribute variety and diversity.
Imagine that, say, 80% of content on the web is authored by generative AI—as it could be in a few years. At that point our web-traversing AI models will be training on the output of web-traversing AI models; content, art, discourse, and ideas become tightly self-referential.
This could result in an awesome singularity-like renaissance that ultimately deprecates our mortal species, or a feedback loop of creative decay that underrepresents the rich diversity of human thought and experience, perpetuating static monocultures enforced by machines flooding cyberspace with homogeneous content, unleashed for fractions of a ¢ per kilobyte and never sleeping.
And that would be a somewhat idyllic scenario, where we aren’t even accounting for the reality that these feedback loops will be used to exacerbate hatred, inequity, confusion, inaction, unhappiness, and division through unceasing firehoses of cacophonous same-same content, on top of drowning out the variety and diversity of human expression.
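That feedback loop of creative decay can be sketched numerically: repeatedly fit a simple distribution to samples of its own output and watch the variety drain away. A toy sketch in pure standard-library Python—the shrinking spread stands in for ‘creative decay,’ nothing more:

```python
import random
import statistics

def train_on_own_output(n_samples=50, generations=200, seed=0):
    """Toy model of models training on model-generated content:
    each generation fits a normal distribution to a finite sample
    drawn from the previous generation's fit, then generates from it.
    Returns the fitted standard deviation ('variety') per generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: diverse 'human' data
    sigmas = [sigma]
    for _ in range(generations):
        sample = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(sample)
        sigma = statistics.pstdev(sample)  # MLE estimate, biased low
        sigmas.append(sigma)
    return sigmas
```

Because each finite-sample fit slightly underestimates the spread, the estimated variety shrinks generation after generation—the tails of the original distribution quietly disappear. It’s a cartoon, but it’s the same mechanism researchers worry about in recursive training.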
So why not keep posting?
If anything, this is a great time to be expressive: 8,053,713,873 of us (as of copying & pasting that figure) on planet Earth, each subject to conditions unlike anyone else’s by birth, happenstance, and sheer entropic chance. You and I might have a lot in common; maybe you’re also a neurodivergent tech nerd from the same corner of Earth, with the same exact cultural heritage, the same exact convictions, the same exact memories, the same exact physiological condition, and the same exact DNA—but, you know, I doubt it. These microscopic differences add up; the reality is that we’re each as unique as unique can conceivably get—as unique as our fingerprints.
There’s an evolutionary advantage to this expanding diversity; after all, it’s the tons of rich, complex, diverse data that we produce that has made generative AI possible, and its miraculous ability to produce as many permutations of an idea, image, or artifact as there are permutations of people. This creativity isn’t alien—it’s our collective creativity.
There’s also cyberneticist Ross Ashby’s Law of Requisite Variety: In order for a system to achieve equilibrium within its environment, it must have at least as much variety as its environment. Put another way (if you’re down with the Law of Requisite Variety), we need mechanisms of adaptation and self-regulation as complex and varied as the challenges to our wellbeing. This applies to any system—individuals, teams, societies, companies, countries, species: Variety makes a system adaptive; resilient.
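Ashby’s law has a one-line arithmetic core: a regulator with R distinct responses facing D distinct disturbances can’t confine the system to fewer than ⌈D/R⌉ distinct outcomes. A toy sketch (the thermostat numbers are my own illustration, not from any source):

```python
from math import ceil

def min_outcome_variety(disturbances: int, responses: int) -> int:
    """Ashby's bound: with `responses` distinct responses available
    against `disturbances` distinct disturbances, the best any
    regulator can do is confine the system to ceil(D / R) distinct
    outcomes. Matching the environment's variety (R >= D) is the
    only way to reach a single goal state."""
    return ceil(disturbances / responses)

# A thermostat with only on/off (2 responses) facing 10 weather
# regimes can at best hold the room to 5 distinct outcomes;
# with 10 responses, one goal state becomes achievable.
```

The inequality is why monocultures worry me: a system that sheds variety sheds exactly the regulatory capacity it needs when the environment throws something new at it.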
So I’m writing this blog because I want to contribute variety and diversity to the space of information—I’m sure plenty of this will be inconsequential, but if any reader, bot or human, finds an idea here compelling enough to carry elsewhere, (I think) I’ve done my part. And, hey, I’d be happy if this were to make anyone less discouraged about the value of their own expression and experiences.
4 · This blog is an exploration of human-computer interaction.
We’ve been living in an exciting and interesting time:
The latest headlines in the tech space abound with news of algorithmic managers,15 AI advisors,16 dubious data scraping by vendors to feed AI models,17 AI-exacerbated unemployment,18 19 AI outperforming knowledge workers,20 21 22 23 24 25 AI-fabricated misinformation,26 AI reading the minds of humans27 28 and mice,29 the psychosocial wormhole of AI companionship,30 31 AI-driven suicide,32 AI-powered scams,33 34 AI-enabled revenge porn,35 36 37 AI content moderators suffering PTSD,38 39 AI’s worsening of existing systemic biases,40 41 AI’s integration into deadly combat technology,42 and the well-informed warnings of modern AI’s progenitors and others about the great potential of AI to cause harm.43 44 45
And all of this sounds profoundly negative; maybe paralytic—but I don’t think any of it should deter anyone from active participation in and awareness of technological advancement:
- Negativity stimulates our evolved propensity for identifying threats. It’s the psychological engine behind maximizing user engagement and clicks—so let’s take a deep breath and make sure we’re not catastrophizing for the profit of somebody or at the cost of ignoring opportunities to adapt, learn, and understand.
- Our technologies are becoming more powerful, affordable, and ubiquitous. If we understand these technologies, we can, at the very least, make wise decisions about the roles they play in our lives, and, at best, participate in designing and shaping them.
I’m too much of a computer nerd to be a neo-Luddite:
In the past we’ve harnessed the memetic legacy of the computers used to aim ICBMs to do miraculous stuff like connect us with folks from across the planet with the Internet. I think there’s potential for our ever more capable tools and technologies (including AI) to effect positive change. There’s no uninventing generative AI, just as there’s no uninventing the nuclear bomb; I think we should actively examine and discuss the ramifications of our technologies and figure out how to respond to them.
So despite the arguably doubtful value of dumping more creative flotsam into the Internet at the dawn of generative AI, I’ll be writing about human-computer interaction here.
We’ll explore our relationships with our information technologies and the kaleidoscope of complexity within, and I’ll try to keep it as cool, insightful, and thought-provoking as possible.
My particular experience will shape my output here like the fingerprints on a clay pot, and I hope this invites you to consider your own unique experiences with technology, your own creative output, and your ability to contribute diversity and variety to the systems you value.
I’ll be using my blog as an exercise routine to keep myself cognitively sharp and nurture facets of self-expression, curiosity, and creativity.
Lastly—this blog’s an experimental bauble of deliberate informatic offering to the human and computer consumers of the web. I’ve got no idea what consequence it could have, if any, so we’re gonna find out.
Whether you’re a computer, a computer-user, or something in-between, I appreciate your hearing me out for my first blog post. There’ll be no comments section here, but you can always contact me with your own opinions and insights.
Here’s my citations—I figure that if you’re here, you might like learning, and you might be the type who’d find it enjoyable to further research whatever I’m writing about:
Knight, Will. “AI Can Do Great Things—if It Doesn’t Burn the Planet,” WIRED, January 21, 2020. ↩
Saul, Josh, & Bass, Dina. “Artificial Intelligence Is Booming—So Is Its Carbon Footprint,” Bloomberg, March 9, 2023. ↩
Magubane, Nathi. “The hidden costs of AI: Impending energy and resource strain,” Penn Today, March 8, 2023. ↩
Sor, Jennifer. “AI could power the US economy as investment in the sector is poised to hit $200 billion by 2025, Goldman Sachs says,” Business Insider, August 2, 2023. ↩
Loucks, Jeff, et al. “Future in the balance? How countries are pursuing an AI advantage,” Deloitte Insights, May 1, 2019. ↩
Hodges, Will. “PwC US makes $1 billion investment to expand and scale AI capabilities,” PwC US Newsroom, April 26, 2023. ↩
Brittain, Blake. “Getty Images lawsuit says Stability AI misused photos to train AI,” Reuters, February 6, 2023. ↩
Setty, Riddhi. “AI-Assisted ‘Zarya of the Dawn’ Comic Gets Partial Copyright Win,” Bloomberg Law, February 22, 2023. ↩
Appel, Gil, et al. “Generative AI Has an Intellectual Property Problem,” Harvard Business Review, April 7, 2023. ↩
Tiffany, Kaitlyn. “Maybe You Missed It, but the Internet ‘Died’ Five Years Ago,” The Atlantic, accessed August 31, 2021. ↩
Vincent, James. “The lawsuit that could rewrite the rules of AI copyright,” The Verge, November 8, 2022. ↩
Agence France-Presse in Bucharest. “Romania PM unveils AI ‘adviser’ to tell him what people think in real time,” The Guardian, March 1, 2023. ↩
Isaac, Mike. “Reddit Wants to Get Paid for Helping to Teach Big A.I. Systems,” The New York Times, April 18, 2023. ↩
Greenhouse, Steven. “US experts warn AI likely to kill off jobs - and widen wealth inequality,” The Guardian, February 8, 2023. ↩
Singhal, Karan, et al. “Towards Expert-Level Medical Question Answering with Large Language Models,” Google Research, May 16, 2023. ↩
Ayers, John W., et al. “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum,” JAMA Intern Med, 2023;183(6):589-596, April 28, 2023. ↩
Sloan, Karen. “Bar exam score shows AI can keep up with ‘human lawyers,’ researchers say,” Reuters, March 15, 2023. ↩
Varanasi, Lakshmi. “A ChatGPT bot passed a Wharton business school exam, but a professor says he would’ve only graded the effort a B or B-minus,” Business Insider, January 23, 2023. ↩
“Generative AI and the future of work in America,” McKinsey Global Institute, July 26, 2023. ↩
Saenko, Kate. “‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation,” The Guardian, December 14, 2020. ↩
Tang, Jerry, et al. “Semantic reconstruction of continuous language from non-invasive brain recordings,” May 1, 2023. ↩
Takagi, Yu, & Nishimoto, Shinji. “High-resolution image reconstruction with latent diffusion models from human brain activity,” December 1, 2022. ↩
Schneider, Steffen, et al. “Learnable latent embeddings for joint behavioural and neural analysis.” ↩
Zeitchik, Steven. “Meet ElliQ, the robot who wants to keep Grandma company,” The Washington Post, March 16, 2022. ↩
Chen, Alicia, & Li, Lyric. “China’s lonely hearts reboot online romance with artificial intelligence,” The Washington Post, August 6, 2021. ↩
Xiang, Chloe. “‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says,” Vice, March 30, 2023. ↩
Puig, Alvaro. “Scammers use AI to enhance their family emergency schemes,” U.S. Federal Trade Commission, March 20, 2023. ↩
Sweney, Mark. “Darktrace warns of rise in AI-enhanced scams since ChatGPT release,” The Guardian, March 8, 2023. ↩
Hao, Karen. “A horrifying new AI app swaps women into porn videos with a click,” MIT Technology Review, September 13, 2021. ↩
Hunter, Tatum. “AI porn is easy to make now. For women, that’s a nightmare,” The Washington Post, February 13, 2023. ↩
Tenbarge, Kat. “Found through Google, bought with Visa and Mastercard: Inside the deepfake porn economy,” NBC News, March 27, 2023. ↩
Rowe, Niamh. “‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models,” The Guardian, August 2, 2023. ↩
Marks, Andrea. “Bestiality and Beyond: ChatGPT Works Because Underpaid Workers Read About Horrible Things,” Rolling Stone, January 18, 2023. ↩
Buolamwini, Joy. “Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It,” Time, February 7, 2019. ↩
Vincent, James. “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day,” The Verge, March 24, 2016. ↩
Romine, Taylor, & Elamroussi, Aya. “Amid public outcry, San Francisco officials reverse course and reject police use of robots to kill,” CNN, December 7, 2022. ↩
O’Brien, Matt. “Sam Altman pleads with Congress to regulate ‘increasingly powerful’ A.I. systems like his ChatGPT,” Fortune, May 16, 2023. ↩
Joel Elizaga is a UX engineer based in the Pacific Northwest.