In 2018, a viral joke format emerged online: scripts premised on “making a bot watch 1,000 hours” of some kind of content, with bizarre and surreal results. Comedian Keaton Patti popularized the format, suggesting that by exposing an AI model to vast amounts of material such as Saw films, Hallmark specials, or Olive Garden commercials, one could generate nonsensical scripts. Though these scripts were almost certainly written by humans rather than bots, they captured a prevailing cultural perception: AI meant the bizarre and the surreal.
Since then, however, the landscape of generative AI has shifted away from strangeness toward the mundane. As AI tools have improved, once-surreal outputs have given way to banal interactions, and the connotation of “AI” has changed accordingly: it now signals mediocrity and dullness rather than weirdness.
The insult reached new heights during the Republican primary cycle, when former New Jersey governor Chris Christie disparaged rival Vivek Ramaswamy as “a guy who sounds like ChatGPT.” The remark epitomized the shift: to be likened to AI now implies being unremarkable and uninspired.
Part of this transformation can be attributed to advances in AI technology, which enable more accurate and coherent outputs. Early generative models struggled with short memory and difficulty maintaining narrative coherence, producing surreal, disjointed text. Newer tools, like Sudowrite, built on OpenAI’s GPT-3.5 and GPT-4 models, can generate text that closely mimics clichéd genre prose.
Additionally, the commercialization of AI has led to its proliferation across domains, often producing low-quality output aimed at maximizing revenue rather than providing genuine value. AI image generators, once seen as experimental artistic tools, are now associated with shoddy stock art and invasive deepfakes.
Moreover, concerns about safety and reliability have led to guardrails and training protocols that constrain creatively unorthodox uses. Tools like ChatGPT now show a reluctance to engage with scenarios outside their designated scope, opting instead for safe, predictable responses.
Despite these shifts, creative AI use may yet have a future. With continued technological advances and a better understanding of what these models can do, we may see AI tools that excel at remixing information in innovative and unexpected ways, amplifying human creativity rather than replacing it. For now, though, the perception of AI as uninspired and unremarkable persists, and it is unlikely that anything “like a bot” will be met with enthusiasm anytime soon.