I Don’t Want To Read Your LLM Output

There’s something uncannily disrespectful about opening an email that I suspect was generated by a Large Language Model, especially when that email is asking for a reply or an action. An exchange of my very human effort for your AI tool’s output? I don’t think so.

I’ve been thinking about this quite frequently as the number of probably-AI emails in my work and personal inboxes has slowly ticked up since the first public release of these tools in late 2022. I won’t claim to be able to identify LLM text with perfect accuracy, but I think I’m pretty decent at it. I’ve also been struggling to verbalize why I find AI in my inboxes so much more offensive than clicking onto an AI-generated blogspam article from internet search results — and I think it has a lot to do with the implied personal nature of email.

There’s something even more disrespectful about reaching out to someone whose opinion, advice, or expertise one values, only for their response to start with “Here’s what [AI Model] thinks…” Please just tell me that you don’t have the time, or ignore the email entirely. I’d be less insulted. What does copy/pasting an LLM’s output as a response even accomplish? After all, if I have access to the same AI tool, I could have input my own email as a prompt and gotten a roughly similar response. Obviously nobody is obligated to invest their time on behalf of anybody who solicits it, but using AI for a response essentially says, “I feel socially or professionally compelled to respond to this email, but I truly could not care any less beyond that. My perceived obligation is fulfilled, and this interaction is concluded with only a handful of seconds lost on my part.”

There’s additionally a clear delineation between emails from a person versus a business, and among these corporate emails a further divide between promotional emails (most of which I have aggressively unsubscribed from) and automated account notices such as “your credit card payment will be processed soon.” The latter you expect to be machine-generated, cobbled together nearly instantly by a script, glanced at for three seconds (if that) by the recipient, and discarded. To nobody’s surprise, we probably all expect email from a person, someone we know is a real-life human we’ve seen with our own eyes, to be written by said human. But that’s no longer the case.

Like hundreds of millions of other people, I have a Gmail account. I don’t normally visit the web client, so I have no idea when Google started having Gemini offer to “help” draft emails. I got a pop-up about it the other day when clicking to compose a new email. Gemini offered to write “an apology to my child’s teacher for their absence.” I don’t know why it made that specific offer, but I clicked okay, and the email body pre-filled with a template for me to swap out some names. I opened another blank email, and Gemini then offered to write “a thank you letter for my job interview.” I haven’t interviewed for any jobs recently, but I accepted its offer once again, and got the following:

Dear [Interviewer Name],

Thank you for the opportunity to interview for the [Job Title] position earlier today. I enjoyed learning more about the team and the goals for the [Department Name] department.

Our conversation further confirmed my interest in the role and my enthusiasm for the work being done at [Company Name]. I am confident that my background and skills would allow me to contribute effectively to your team.

Please let me know if you have any additional questions or require any further information from my side. I look forward to hearing from you.

Best regards,

Jacob Desforges

Email templates are nothing new. You’ve been able to find them on the internet for about as long as email has existed. With that in mind, this probably isn’t “AI,” not even by the very low standard of LLMs. Gemini offers to recreate the template, swapping a few words around. Is it actually doing anything in real time, or does Google just keep a few pre-drafted emails for the common contexts a user might input? Does the line between simple computer code (match the context against “job interview” and “thank you,” then return random.choice(interview_thanks)) and a statistical inference model operating on the same context window, generating the text in real time, even make any difference? Does it make it any less soulless?
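Hypothetically, the dumb version would only be a few lines long. Here’s a minimal sketch in Python; the draft_email function, the keyword matching, and the canned drafts are all my own inventions for illustration, not anything Google has disclosed:

    import random

    # Hypothetical canned drafts; a real system would presumably
    # cover far more contexts with far more variations.
    INTERVIEW_THANKS = [
        "Dear [Interviewer Name],\n\nThank you for the opportunity to "
        "interview for the [Job Title] position earlier today. ...",
        "Dear [Interviewer Name],\n\nI appreciated the chance to speak "
        "with you about the [Job Title] role. ...",
    ]
    ABSENCE_APOLOGY = [
        "Dear [Teacher Name],\n\nPlease excuse [Child Name]'s absence "
        "on [Date]. ...",
    ]

    def draft_email(context: str) -> str:
        """Return a pre-written template matching the requested context.

        Nothing is generated here: just keyword matching plus a random
        pick from a list of canned form letters.
        """
        context = context.lower()
        if "interview" in context and "thank" in context:
            return random.choice(INTERVIEW_THANKS)
        if "apology" in context or "absence" in context:
            return random.choice(ABSENCE_APOLOGY)
        return ""  # unrecognized context: offer nothing

    print(draft_email("a thank you letter for my job interview"))

Either way, what lands in the compose window is the same fill-in-the-blanks form letter.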

If I use Gemini to draft an apology email, am I truly sorry? If I use it for a thank you letter, am I actually thankful?

If there were a theoretically unremovable “generated with AI” tag at the bottom of that email, would you still send it? Or would you then write it yourself, worried that the stamp of an LLM tool makes it look like you’re trying to fulfill a social expectation while extracting the maximum value from the exchange with minimal investment of real effort on your part? Behavioral psychologists would probably argue that’s what we were all trying to do in most social interactions prior to AI anyway.

All of this amplifies an underlying unfairness, however. Anyone who has used an LLM has observed how happily verbose these models are, easily expanding a three- or four-sentence prompt into three or four paragraphs of textual fluff. If I receive such an email from a superior at work, I’m then obligated by the terms and expectations of my employment to read and respond to it.

Clearly the only content they care about is in the prompt that was provided to the LLM. So if just the prompt were sent to me via email, regardless of formatting, I’d spend less time reading and parsing it. The LLM email might appear more structured and formal, but by construction it probably doesn’t contain much more actual information than the original prompt.

And the ultimate irony: once you account for the extra time a human wastes reading the LLM fluff, was there even any net savings? What if this is all a zero-sum game (or worse): the original author of the LLM email saves some time on formatting, but the recipient spends the same amount of time (or more) sorting through the fluff to extract the actual important points and action items?

The logical conclusion is a race to the bottom, in order to even out the unfairness of investing human effort in responding to a bot’s output when a response is obligated by the social contract. “You wrote me an LLM email, so I’m going to use an LLM to respond, because you clearly don’t value my time.” I’ve already seen this developing in the real world. A coworker drafted a seven-page guidance document which, though never explicitly admitted to be AI-generated, had all the common hallmarks of LLM text: excessive verbosity, a wandering-yet-linear structure, frequent insertion of bulleted lists, bolded “topic introductions” followed by a colon at the start of each paragraph, and so many em dashes that even R.A. Salvatore would wonder whether there were a few too many.

We got another email a few hours later. A different coworker had used AI to summarize the probably-AI document into a dozen or so bullet points. Amusingly, that summary probably looked a lot like the original prompt the document’s “author” had fed into the LLM.

Algorithms talking to algorithms, via human proxies, and nearly nobody stops to ask if anything is being gained in the process, let alone to examine what is being lost. Use the AI for basic research or for troubleshooting your code if you really want. But if you want to communicate with me, do it in your own words.
