On AI
I have some complicated, somewhat contradictory thoughts on AI, its ethics (or lack thereof), its peril, and its promise. And because I’m well aware that no one truly cares what I think, I’ll share them here in my dusty little corner of the internet.
My own views are no doubt influenced by my academic background in psychology—social, cultural, and cognitive. Coming from a psychology department where connectionism reigns and a research lab where language is seen as a key tool of cognition, I have found the advent of LLMs incredibly exciting (and immensely satisfying—take that, Fodor and Chomsky). In graduate school, I once even created my own neural network. It had 309 nodes across three layers, and the only useful thing about it was that it managed to get me an A in a challenging course. In any case, it’s through these lenses that I view many of the common criticisms AI receives, and I’ll address my thoughts on them here.
“AI is soulless.”
Humanity is special in its capacity for social learning, culture, and language. A handful of other species have limited social learning and rudimentary slivers of what might be called culture. No other species has language (communication, yes, but not language). These unique abilities enable the ratchet effect of humanity’s progress: we stand on the shoulders of giants. We learn from what has come before us and build upon it, in every sphere.
Now, if we’re talking about the massive amounts of training data fed to AI models, it is both quantitatively and qualitatively different from the cultural transmission of knowledge that humans have always practiced. But it is not entirely dissimilar. One of the biggest differences is scale. I don’t literally believe in the existence of souls, but let’s treat the idea of a soul as something reflective of humanity. Every piece of training data originally created by a human has this so-called “soul.”
The models are fed heaps and heaps of data, all with their little bits of soul. The model extracts patterns from the data, learns from it. Where does all that soul go? Does it just disappear? Or, in some way, are we looking at the collective soul of humanity reflected back to us? The averaged sum total of its achievements (and failures).
Yes, there is a big difference between AI generating something and a human creating something. Of course there is. But I don’t think something trained on so much of humanity’s output is totally lacking in any semblance of humanity itself. That is not to say that it is conscious or that it itself is like a human. I do, however, think there can be something beautiful about it: seeing the collective fruits of humanity’s labor and knowledge and creation reflected back at us and widely accessible. And, the person behind the prompt, guiding the generation, certainly has whatever we might call a soul.
People often dismiss LLMs as “stochastic parrots” or “glorified autocomplete” or “just probabilistic pattern matching.” Setting aside the fact that there are well-supported theories that human brains do much the same thing (Clark, 2013), I think there’s another error in logic embedded here. It mistakes the process for the outcome. Yes, LLMs work on predicting the next most likely token. But the magic is in the middle—in the weights that develop between the nodes to accomplish that deceptively simple probabilistic task.
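For the curious, here is roughly what “predicting the next token” means, stripped of all the scale: map a context to a probability distribution over a vocabulary and pick from it. This is a toy sketch with made-up numbers, not a real model; in an actual LLM, the logits fall out of billions of learned weights.

```python
import numpy as np

# Toy next-token prediction: the "model" is just a set of scores (logits)
# over a tiny vocabulary. These numbers are invented for illustration.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([0.2, 1.5, 3.1, 0.7, 2.4])  # pretend scores for some context

def softmax(x):
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(logits)
print({w: round(float(p), 3) for w, p in zip(vocab, probs)})
print("next token:", vocab[int(np.argmax(probs))])  # greedy decoding
```

All of the interesting work hides in how those logits get computed; that is where the “magic in the middle” lives.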
In being able to accurately predict what comes next, a vast number of higher-order, symbolic concepts must be abstracted; one could even say learned. It’s not just regurgitating words; words aren’t what’s stored within its weights. It’s forming connections between concepts, recognizing patterns in the world and in human thought and expression. There are learned features for everything from the Golden Gate Bridge to sycophancy to deception to transit infrastructure—which, interestingly, includes wormholes (Templeton et al., 2024).
Underestimating LLMs underestimates language. Even the image and music generation models are fundamentally made possible through the labeling ability of language. Think about the entirety of everything that is encapsulated by language. That’s a staggering amount of human knowledge, experience, art, history, science, culture, psychology, and philosophy. Language is more than just a communication tool. It’s the vessel through which we express emotions, transmit knowledge, explore ideas, and shape our understanding of the world. Being able to predict the next word with any level of accuracy whatsoever requires learning an astonishing amount. And we’re only just starting to explore what LLMs learn, and how exactly they do what they do beyond the basic calculus of the backpropagation algorithm (Anthropic, 2024).
“AI is theft.”
I actually… don’t disagree. But I do find that this statement misses the bigger picture and usually arrives at misguided conclusions as a result. The sourcing of much of the training data for models is unethical and tantamount to theft. But I think we all know that that genie is not going back in the bottle. So, the question becomes: what next?
A huge share of the moral outrage over AI is concentrated in creative areas: writing, images, music, and videos. A lot of the same people screaming about AI “art” eagerly welcome its threats to other people’s livelihoods and areas of expertise: programming, law, medicine, data science, manual labor, and so on. Maybe they use it for work, because that’s somehow right and good, but then spew vitriol online at the person sharing an image. Maybe they post snarky memes about wanting AI to do their laundry and dishes without a shred of irony or self-awareness, and with a callous, unthinking disregard for all the people whose jobs that would eliminate.
And that’s the bigger picture. AI isn’t just a threat to creatives’ jobs. It’s a threat to all jobs. To the very fabric and structure of society writ large. It’s hard to take the “moral” outrage seriously when it’s so myopic—when it’s so often selectively applied, dismissively elitist, and even downright hypocritical.
The bigger picture also includes threats like: supercharging already massive wealth inequality, further degrading social skills and relationships, exacerbating the loneliness epidemic, reinforcing echo chambers and information silos, sending us further down a post-truth path, atrophying critical thinking and other skills, and actively manipulating beliefs and behaviors when the inevitable enshittification and weaponization of personal data ensues, and on and on. I’m a hell of a lot more worried about all of these than I am about the person using AI to bring their creative ideas to life.
So, I wish people would take all that moral outrage and use it to push for the things that will actually matter. Things like: UBI (universal basic income), open source and democratization of access to models, researching and mitigating the potential downsides of letting such tools become part of our cognition and emotional lives, regulating tech companies to stop predatory practices as people become more reliant on this technology, and safety: achieving alignment before we hit AGI/ASI and a literal extinction-level event becomes a possibility. After all, if people are going to hate something with such vitriol, they ought to at least be educated about what’s at stake.
If the training data was stolen from the collective labor of humanity, then we had better band together and demand that we all reap its rewards more equitably. Otherwise, they will accrue primarily to the tech billionaires and the people already holding the reins of power. And the already abysmal gap between the productivity that new technology brings and the wages and quality-of-life improvements people receive will only widen astronomically (Economic Policy Institute, 2025). We may need to push, and push hard, against the powers that be to avoid a full-on dystopia. Soon, the immense power concentrated in those few hands will need to be wrested away in order to serve the interests of everyone.
Corporations that use AI to cut corners or replace human workers deserve our scorn. Companies that use AI to deliver a subpar product for cheaper and yet charge the same price for it deserve our scorn. Bots and people who inundate the internet with mindless or misleading low-quality slop deserve our scorn. People who conceal their use of AI and pass it off as entirely their own work deserve our scorn. Authors who leave prompts in their work or Frankenstein together a slew of generic AI outputs deserve our scorn. (Though, to be honest, I feel pretty similarly about all the formulaic, cash-grab, poorly edited, low-quality non-AI fiction out there.)
But the person experiencing the joy and wonder of bringing an idea to life—whether text, image, sound, or video—is not the enemy. That’s punching down, or at best, laterally. And it does nothing to ease the pressure of the actual boot on your neck.
I respect anyone’s personal decision not to engage with AI or AI-generated content (insofar as they are able to, in any case). That’s why I will always be transparent about my own use of AI. What I don’t respect is people tearing down individuals who are using AI in ways that empower them to bring their creative ideas to life. Attacking them gives only the illusion of having taken a stand or accomplished something against AI, when all it really does is possibly make a real person feel a little shitty.
Here’s a rough environmental equivalent: railing at a stranger for using a plastic straw, then giving them a nasty papercut with your paper straw for good measure. Maybe that would feel righteous and good to someone, but I hope people can see how misdirected that level of ire is and how little it actually accomplishes. The illusion is dangerous because it lets the greater harm off the hook, distracting from the real root cause and the bigger fights that matter.
Given its provenance, I feel passionate about AI being used in ways that truly benefit humanity. It is ours, all of ours. It might lead to our destruction, but it might also cure cancer or help solve global warming (though, given humanity’s track record of not listening to scientists on that front, I’m not overly optimistic about AI’s potential solutions winning out over the vested interests that keep pushing us down this disastrous path). Tech companies deserve to profit from their contributions, but they don’t deserve to do so exclusively or excessively.
In creative spaces, I’m passionate about AI being used to empower and enable genuine human creativity rather than as a poor substitute for it. Things that were improbable or even impossible before because of time, money, or other resources are now within reach. AI has the capacity to create more immersive experiences for someone’s vision, to transform static ideas into something more interactive. In my opinion, that’s the sweet spot of generative AI: bringing to fruition creative visions that simply would not have been possible or plausible before. Neural Viz is my favorite example of that—genuinely creative, hilarious, clever human writing brought to life by generative AI. We’re entering an age where all you need is the idea. AI, in its current state, is pretty terrible at ideas. But humans? We can be pretty fucking great at them.
“You didn’t write it.”
As someone who harbored childhood dreams of being a writer, I’ll admit this one initially stung. But I’ve come to realize that people who say it have obviously never tried to write anything of decent quality with AI, and they’re probably pretty ignorant about what that process entails. LLMs are literally stateless; for all intents and purposes, they don’t “exist” until and unless prompted. They can’t write anything themselves; they need to be prompted. And that input matters. In many ways, you get out what you put in.
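To make the statelessness concrete, here is a hypothetical sketch (no particular vendor’s API; fake_model stands in for any real model call): the “conversation” exists only because the client resends the entire transcript every turn.

```python
# Statelessness in practice: the model keeps no memory between calls.
# `fake_model` is a hypothetical stand-in for a real LLM API call.
def fake_model(messages):
    return f"(reply given {len(messages)} messages of context)"

history = []

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = fake_model(history)  # the model sees the whole transcript each time
    history.append({"role": "assistant", "content": reply})
    return reply  # ...and "forgets" everything the moment it answers

print(chat("Let's write a scene."))
print(chat("Now make it funnier."))  # works only because we resend the history
```

Drop the history and the model has, in every meaningful sense, never met you.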
There’s a sort of regression to the mean that occurs with LLMs. A bad writer with AI might become a mediocre writer. A good writer with AI might become a less good writer. But the person behind the prompt doesn’t stop being a good writer. They still establish genre, style, setting, tone, characters, and plot. They still build worlds and infuse lore; they still create backstories and motivations; they still deliver satisfying emotional arcs and narrative depth. They still have their voice and a desire to perfect their prose. They have the skills to recognize the slop and to steer it in a more interesting direction—to regenerate, redirect, refine, and of course, manually edit.
Yes, writers using AI did not write all of it. But to say they wrote none of it is equally wrong. It’s a collaborative process. Because of the way the weights and context windows work, an LLM is quite good at picking up on the human author’s voice and mirroring it back to them. You can infuse AI writing with just as much creativity, emotion, and care as you do traditional writing. If you’re invested in your work, it’s a lot more like having a co-author than it is just getting a tool to do the work for you. That co-author ranges from infuriating to middling to competent, and occasionally, even brilliant. But those moments of brilliance were always, always prompted first by the human author.
Writers start as readers. And they often begin writing because they want to read something that doesn’t already exist in the world. I’m a selfish writer in that way. I write what I want to read. And AI allows me to simultaneously be the writer and the reader. To be immersed in the world I imagine with the characters I envision. To direct the story but also to be taken in unforeseen directions, to occasionally even be surprised by what comes out of it. A choose-your-own-adventure where the choices are actually infinite and you have ultimate control. That’s special.
There’s a reason that increasing a model’s “temperature,” its stochasticity, leads to more “creative,” less predictable output. Introducing a bit of randomness is fertile ground for creativity, and creative minds can make excellent use of the slight unpredictability entailed in writing with AI. I cannot tell you how many times AI unknowingly delivered a great setup and I brought the perfect punchline. How often a throwaway detail generated by AI—or even a mistake born of AI illogic—became integral to characterization or plot when I wove it all together. This is the magic that humans bring to the table: creativity and meaning-making. Connecting the dots and creating that meaning in an iterative back-and-forth is an incredibly exciting way to write.
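For the technically curious: temperature is typically implemented by dividing the model’s logits by T before the softmax, so higher T flattens the distribution and gives unlikely tokens a fighting chance. A minimal sketch with invented numbers:

```python
import numpy as np

# Temperature scaling: logits / T before the softmax.
# Low T sharpens the distribution (predictable); high T flattens it ("creative").
def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([3.0, 1.5, 0.5, 0.1])  # made-up scores for four candidate tokens

for T in (0.5, 1.0, 2.0):
    print(f"T={T}:", softmax(logits / T).round(3))
```

At T=0.5 the top token dominates; at T=2.0 the distribution spreads out, which is exactly the controlled unpredictability that makes co-writing interesting.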
If you actually care about your craft, it takes about as much work to write with AI as to write without it. The challenges and frustrations are different, and so are the joys, but they are certainly all there. Whatever degree of human care and creativity you bring to it will absolutely show in the final product. Not all AI is slop.
And because the kneejerk, low-hanging-fruit response inevitably follows whenever anyone on the internet has the gall to express the least bit of nuance regarding AI in creative spaces, I’ll head it off now: No, I did not write this with AI. Not even for editing or proofreading. Just my own thoughts and words, for better or for worse—em-dashes and all.
TLDR (courtesy of ChatGPT):
AI is like a soul smoothie blended from humanity’s collective brain juice. Yes, it’s built on stolen labor, but so is capitalism. Everyone yelling “AI bad!” while using it to draft work emails needs to pick a struggle. Creative use of AI isn’t cheating—it’s jazz with a robot bandmate. The real villain? Billionaires hoarding the power while we fight over paper straws and pixels. Also, if you think prompting isn’t writing, you’ve clearly never tried herding “fancy autocomplete” into a novel.
References
Anthropic. (2024). Mapping the mind of a large language model. https://www.anthropic.com/research/mapping-mind-language-model
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. https://doi.org/10.1017/S0140525X12000477
Economic Policy Institute. (2025). The productivity–pay gap. https://www.epi.org/productivity-pay-gap/
Templeton, A., Conerly, T., Marcus, J., Lindsey, J., Bricken, T., Chen, B., Pearce, A., Citro, C., Ameisen, E., Jones, A., Cunningham, H., Turner, N. L., McDougall, C., MacDiarmid, M., Tamkin, A., Durmus, E., Hume, T., Mosconi, F., Freeman, C. D., Sumers, T. R., Rees, E., Batson, J., Jermyn, A., Carter, S., Olah, C., & Henighan, T. (2024). Scaling monosemanticity: Extracting interpretable features from Claude 3 Sonnet. Transformer Circuits. https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html