In a groundbreaking development, researchers have harnessed GPT-1, a precursor of the AI chatbot ChatGPT, to translate fMRI recordings of brain activity into text, providing insights into an individual’s thoughts. This breakthrough, achieved by scientists at the University of Texas at Austin, allows a continuous stream of text to represent a person’s thoughts based on their auditory experiences, imagination, or visual stimuli.
While this advancement in mind-reading technology brings new possibilities, it also raises profound concerns regarding privacy, freedom of thought, and the unhindered freedom to dream. Existing laws are ill-prepared to handle the widespread commercial use of such technology, as freedom of speech laws do not extend to safeguarding our inner thoughts.
During the Texas study, participants spent 16 hours inside an MRI scanner, listening to audiobooks. Simultaneously, a computer “learned” to associate their brain activity with the corresponding auditory input. Once trained, the decoder could generate text from an individual’s thoughts as they listened to a new story or crafted one in their imagination.
While the researchers acknowledge that the process was labor-intensive and the computer only captured the gist of the thoughts, it marks a major breakthrough in the realm of brain-machine interfaces. Previous non-invasive devices could decipher only a handful of words or images, making this advancement all the more remarkable.
As an example, one subject listened to the following passage from an audiobook: “I got up from the air mattress and pressed my face against the glass of the bedroom window, expecting to see eyes staring back at me but instead finding only darkness.”
Here is what the computer “read” from the subject’s brain activity: “I just continued to walk up to the window and open the glass. I stood on my toes and peered out. I didn’t see anything and looked up again. I saw nothing.”
To preserve the privacy of participants’ thoughts, they actively cooperated in training and applying the decoder. However, the researchers caution that future developments could potentially enable decoders to bypass these requirements, implying that mind-reading technology may be applied to individuals without their consent.
Further research aims to expedite the training and decoding process, as the 16-hour training period is expected to decrease significantly in future iterations. Additionally, the accuracy of the decoder is likely to improve over time, as seen with other AI applications.
This advancement marks a transformative milestone, as researchers have long focused on creating mind-reading technologies primarily for medical purposes, aiding individuals with disabilities in expressing their thoughts. Neuralink, the neurotechnology company founded by Elon Musk, has pursued medical implants enabling mind control of devices. However, the need for brain surgery has remained a barrier to wider adoption.
The increased accuracy of this new non-invasive technology could be a game-changer. For the first time, mind-reading technology seems feasible by combining two readily available technologies, although at a considerable cost. MRI machines currently range from $150,000 to $1 million.
Presently, data privacy laws do not consider thoughts as a form of data. Consequently, the need for new legislation arises to prevent thought crimes, thought data breaches, and even potential thought implantation or manipulation. Advocates from the University of Oxford argue for a legal right to mental integrity, safeguarding against significant non-consensual interference with one’s mind. Others propose a new human right to freedom of thought, extending beyond traditional definitions of free speech.
Without regulation, a dystopian future looms. Imagine a scenario where employers, teachers, or government officials invade private thoughts, or worse, manipulate and alter them. We have already witnessed the deployment of eye-scanning technologies in classrooms to monitor students’ attention. What happens when mind-reading technologies become the next invasive tool?
Similarly, in the workplace, an employer who could penalize workers for letting their minds wander to personal matters, even something as mundane as dinner plans, would wield an alarming and unprecedented degree of control.
George Orwell warned of the perils of “Thoughtcrime,” where thinking rebellious thoughts against an authoritarian regime becomes a criminal act. In his novel Nineteen Eighty-Four, officials relied on interpreting body language, diaries, and external cues to determine thoughts. With the advent of mind-reading technology, Orwell’s novel would be drastically condensed—perhaps a single sentence: “Winston Smith thought to himself: ‘Down with Big Brother’—after which he was arrested and executed.”
By Impact Lab