I’m going to really date myself here. Back in the 1970s and 1980s, a series of commercials for Memorex tapes asked the question, “Is it live…or is it Memorex?”
See, there used to be audio and video cassette tapes that you could use to record music and video, respectively. The commercials were supposed to convince consumers that Memorex’s tapes were of such high quality that sound (or video) captured on them was indistinguishable from the real thing. The campaign certainly made an impression, becoming a part of pop culture alongside other commercials, like Wendy’s “Where’s the beef?” and “Pardon me, would you have any Grey Poupon?” And, to the extent that a few other people and I still remember them 40 years later, they were successful.
I reference Memorex not just to make the point that I am old and out of touch, alienating anyone young enough not to have looked forward to, then been disappointed by, and then come to really enjoy that new Star Trek series that didn’t even have Kirk or Spock. I’d like instead to riff on the difference between something that’s live and real and something that’s a simulation. Does it really matter? And I’m going to do it, hopefully, without ending up in a Philip K. Dick novel (this was a question he played with throughout his writing). So let’s take a step on an existential journey.
AI is a hot topic. Supposedly, it will make life easier for people by automating tasks like creating art and writing. My first reaction to it, like my first reaction to the food delivery robots zipping around campus, was, frankly, not pleasant. There is something unnerving about the inhumanity of robots and AI—the knowledge that while they may appear to be doing tasks that humans do, there is no life or purpose underlying any of it. Maybe it’s a fear that the AI will eventually figure out it doesn’t need us around, or that its priorities might not align with our priorities, as in this vision of the USS Voyager’s AI Emergency Medical Hologram deciding on the only “humane” course of treatment for Captain Janeway.
There are contrary opinions, of course: AI is the future, so it’s best to embrace it; Luddites have been skeptical of every change, and most of those changes worked out fine; AI can handle writing, drawing, and composing so that humans have time to devote to creative activities instead (seriously, that’s a justification an AI itself has deployed); and so on.
The core of the question isn’t how long it takes an attorney to write a brief, or how long it takes someone like me to write a blog post. I will admit that AI could write a 1,000-word blog post much more quickly than I can, but I am confident that it wouldn’t be worth reading, because there wouldn’t be anyone behind it.
Most writers, after all, write to connect with people. At least I do. There is something incredible to me that someone can think seriously about something, write it out, and have other people read it and find something of value. Using AI to concoct a series of words that (to borrow a phrase from Douglas Adams) are almost, but not completely, unlike real writing not only feels cheap—it undermines the basis of communication, which is trust.
Because you can have AI write something, but unless you scrupulously proofread it, you’re going to end up with either bland, smoothed-over vanilla that uses words without saying anything (something that works for politicians, but not for those who want to genuinely connect), or a Frankenstein’s monster of cobbled-together phrases that reads like someone pretending to write. And if you’re going to proof it that scrupulously, why not just write it yourself?
I’ve heard it said that AI can generate ideas for a first draft, and that a good writer can use those as a springboard for their own subsequent drafts. But the AI itself isn’t going to be original, just recycled, homogenized verbiage. For inspiration, why not read something original, like Douglas Adams, from whom I borrowed a turn of phrase above? Or any one of the thousands of writers who have had something to say?
Writing this existential journey, I’ve had to cope with the fact that, to be honest, I’m not a fan of AI because it threatens me personally. I’ve made a career and identity out of being able to write quickly and with, at the very least, the appearance of quality. It got me through grad school, but first it got me promoted as a casino security officer (incident reports don’t write themselves, after all). Between the books (averaging 80,000 to 100,000 words each) and about twenty years’ worth of academic articles, pieces for local weeklies, blog posts, and random ephemera, I figure I’ve got at least a million words out there with my name on them. So the thought that an AI can do what I do more quickly and cheaply is not just frightening—it calls into question my self-worth.
Maybe, then, I am biased. An AI (presumably) wouldn’t be, although the data that feeds it is notoriously biased. But alongside that bias I bring passion and vulnerability. That’s what I mean by saying that there is someone behind these words, listening to J. Dilla and valiantly resisting the urge to spend the afternoon watching Janeway videos. I hope that means something on the other side.
This might be the point I have been looking for, the one that connects what I fear has devolved into an anti-AI screed to my ombuds practice, which this space is supposed to illuminate. If someone has a problem, could they visit an AI ombuds, which could lay out their available options? Possibly. Maybe it could even have an algorithm that would estimate the relative odds of each approach ending in a positive outcome. It could analyze, certainly, but it could never listen.
There’s a reason that I lead any discussion about what an ombuds does with “listening.” Honestly, some days I feel that 90 percent of the good I do is listening. Because there are times when someone already knows the options and just needs someone to listen to them talk through those options, to validate that they have tough decisions to make, and to help them get the information they need to make the most informed one.
So this is my plea to anyone who is reading this, has an issue, and doesn’t know where to turn. Get in touch. Because I will listen.
Whether you are a student, faculty member, or other UNLV employee, the Ombuds Office has many resources available to help you through any conflict or communication issue you might be facing. If you are having an issue and are uncertain where to go, it is an excellent zero-barrier first stop. You have nothing to lose and quite a bit to gain.
If you would like to talk off-the-record and confidentially about any work- or campus-related concern, please make an appointment with the Ombuds. Our door is always open.