The announcement of Elon Musk’s newest foray into the future, Neuralink, opens a new chapter in one of humanity’s long-running dreams. What Neuralink proposes (and what narratives like the recently rebooted “Ghost in the Shell” have explored for decades) is a world in which the mind can be edited like software, its memories, beliefs or personality rewritten with a few keystrokes. But we have learned a lesson from the thickening layer of computation in our lives, which has turned every toaster and toothbrush into a “smart” device: be careful what you wish for in networked intelligence.
Musk and entrepreneurs like him are building business models around their assumptions about the human brain: that it has an operating system, and that its language of signals can be represented computationally. Bryan Johnson, the CEO of a startup called Kernel, hopes to use technology similar to Neuralink’s to cure epilepsy and other brain disorders. He talks about the potential for “reading and writing neural code.” Musk spoke recently about increasing “the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.” Both of these startups are built on a radical proposition: they want to write, not just read, in the language of the brain.
The vision is compelling and terrifying at the same time. Being able to correct the misfirings of our most enigmatic organ would be transformative for those who suffer from Parkinson’s, Alzheimer’s and a host of other diseases haunting the mind. On the other hand, consider the implications: if any brain can be rewritten, if all of us are running on the same intellectual code, what does that say about the notion of individuality, or identity, or free will itself? Once it becomes possible to edit the mind, the bedrock of truth and lived experience will be irrevocably changed, left open to silent manipulation at the most profound level.
What these companies are working on is the first, crude attempt to build a computational input-output system for the mind. Musk calls the interface he is hoping to build a “neural lace,” borrowing the term from one of his favorite science fiction writers, the late Iain Banks. But science fiction writers, particularly the cyberpunk authors who inspired “Ghost in the Shell,” have been dramatizing the risks of this idea, in one form or another, for decades.
Neal Stephenson’s seminal novel “Snow Crash” is particularly evocative because it weaves the modern myth of the computational mind together with a much older story, the Tower of Babel. In “Snow Crash,” hackers are susceptible to a kind of linguistic virus written in the ur-language of the brain itself. When that novel’s Silicon Valley billionaire cracks the neural code, let’s just say he doesn’t use it to cure epilepsy.
Beyond the existential crisis of identity and experience, Stephenson’s novel spotlights one major risk if these companies succeed (which, by the way, they think will take decades of research). If there is a regular language of neurological signaling, and someone builds a protocol allowing direct brain-computer connections, unauthorized manipulation of the brain becomes a serious risk. Viruses of the brain are no joke, and widespread adoption of brain-machine interfaces would inevitably bring the kind of lax security practices that now leave intimate and important technologies, like baby monitors and cars, vulnerable to hackers.
At the same time, the broader promise of a direct machine interface for the brain is deeply compelling. If we could use code to change the brain, we would be fulfilling humanity’s romance with the power of language to change reality, and even more specifically, to reinvent ourselves. We have always wanted to work magic to make people fall in love, to become rich, to learn kung fu.
But there is something bigger at stake. The interfaces that Neuralink and Kernel aim for would, in the long run, be connection points not just for individual minds but across those minds. Consciousness is bizarre, mysterious… and solitary. We have long recognized the power of language to create empathy, to bridge (in simulation) the immeasurable gap between one island of consciousness and another. Imagine what it might be like to link directly not with a machine but with another mind, to communicate in a truly shared language. The forms of collective consciousness that we share now, from images to metaphors to melodies, are poor copies of copies compared to what might emerge out of a direct pathway to the neural code of the mind.
And if we really were harnessing the processing power between our ears to a global network of servers and clients, we would also be taking another incomprehensible step: unifying our imaginations, our processes of thought, with those of the algorithms that already filter our news, manage our finances and suggest the perfect date. An awesome prospect, in every magnificent and terrifying sense of the word. For the individual, the fundamental process of thinking would change as radically as the search engine has changed research or GPS has changed travel. Each of us would possess god-like powers to summon information, to communicate, to share experiences. But, ironically, we would also be introducing a new anxiety, a very humbling problem to vex the lives of the superhuman cyborgs we might become: who’s really thinking this thought, you or the machine?