Reflection on the potential harm of knowledge began while investigating ongoing efforts to automatically detect deepfakes ahead of upcoming elections around the world. Since 2017, when the term first emerged in the context of manipulated media, identifying AI-generated content has grown steadily harder. Experts are now on the brink of being unable to discern such content at all, prompting a race against time to develop automated systems that can detect and label it before it outstrips the judgement of even seasoned professionals in the field.
However, merely labelling such content may not suffice. Henry Parker, head of government affairs at the fact-checking organization Logically, notes the limits of labelling as a way to blunt the impact of deepfakes. Although Logically employs both manual and automated vetting, Parker points to the powerful influence of social psychology: viewers may continue to perceive a deepfake as factual even when forewarned that it is fake. This raises the question of whether such content could be deemed a cognitohazard, so realistically alluring that viewers instinctively accept it as reality despite information to the contrary.
The concept of cognitohazards extends beyond digital manipulation to include stimuli that captivate attention, potentially leading to involuntary fixation. Unlike emotions, which are often beyond conscious control, attention is typically regarded as subject to deliberate regulation. Yet, in the digital age, where attention is a scarce commodity fiercely contested in the attention economy, certain technological innovations border on becoming true cognitohazards. Examples include “clicker” or “idle” games, designed to exploit reward mechanics and entice users into prolonged engagement, often resulting in significant productivity loss. Similarly, the phenomenon of “domino videos” demonstrates how non-interactive content can ensnare attention through orderly yet tantalizing progressions, prompting viewers to linger far longer than intended.
While these examples may represent the current extent of attention manipulation, the consequences of future advances, particularly the widespread integration of generative AI, remain speculative. The prospect of AI-driven content engineered to captivate attention at unprecedented scale underscores the need to educate people, especially young children, not only in how to interact online but also in how to judge the digital content they encounter. As technology continues to blur the line between fiction and reality, the implications of this kind of cognitive manipulation warrant careful consideration.