Silicon Valley Wants to Put a Chip in Your Brain

There will come a time, in the not-so-distant future, when you decide to stick a computer chip in your brain.

At least, that’s what D. Scott Phoenix told the audience at TED 2026 in Vancouver last month.

“Someone you work with will get it first. And you’ll hold out for a while, the way you did with the smartphone. But eventually, you won’t,” said Phoenix, dressed in all black with a tiny mic attached to his ear. “The advantages of integration will be hard to compete with.”

Put bluntly, in his view, “We’re on the cusp of the next major transition, the merger of humans and AI.”

This perspective, as outlandish as it may sound, is commonly held in Silicon Valley. OpenAI CEO Sam Altman mused way back in 2017 that “a merge is probably our best-case scenario” for survival after the emergence of superhuman AI. Tech billionaire Peter Thiel is a vocal advocate of “transhumanism.”

There is good reason to be skeptical about an imminent evolution for the species. The technology to perform this kind of merger — to radically change what it really means to be human — remains in its earliest stages. Even setting aside the uncertain future of AI, the so-called implantable brain-computer interface (BCI) is still quite nascent.

And yet, there are huge sums of money already sloshing around the technology, with more to come. The BCI market, currently sitting at around $350 million, is expected to reach $1.2 billion by 2035, according to Future Market Insights. That doesn’t include companies like Vicarious, the firm Phoenix founded, which sought to use core principles of the brain to build AI that could act like humans. Phoenix, who is now a venture capitalist, sold the company to Alphabet in 2022, after it was funded to the tune of $250 million by investors including Elon Musk and Mark Zuckerberg. The broader neurotechnology market is projected to expand to $52 billion by 2032, according to the Neurorights Foundation.

With that much money at stake, not to mention the future of humanity, it should be no surprise that the political battle over our brainwaves is starting to heat up. That’s particularly true amid skepticism on both the left and right toward Silicon Valley’s relentless push for AI growth.

Thus far, the use case for implantable devices is largely medical in nature. Noland Arbaugh, the first human to receive an implanted BCI from the Musk-founded Neuralink, is paralyzed from the neck down but is now able to use a computer with his mind. (Neuralink reported that 21 humans had received its implantable BCI as of January, after years of trials on primates that often ended in gruesome fashion.)


(Photo caption: Michael Mcloughlin gets a demonstration of Gemini on Android XR glasses at a Google I/O event in Mountain View, Calif., on May 20, 2025.)

There are also less intrusive forms of BCI. Wearable technologies like smart glasses, fitness wristbands and stress-tracking apps are already widespread, and they are lucrative for their makers not just because of how many people are buying them, but because of what can be extracted from customers: the extremely valuable commodity of neural data.

Extensive neural data can be used for anything from serving extremely targeted ads to surveilling or manipulating consumers’ behavior. The tech billionaires who believe in the possibility of true human-AI integration may also see a chance to make some more money in the meantime.

“If data is the oil of the 21st century,” former UNESCO Director-General Audrey Azoulay wrote in November in the Financial Times, “then ‘brain’ data is the crude oil. We need to guard it more jealously.”

Resistance to private sector accumulation of neural data is growing quickly, with red states and blue states alike passing legislation to protect the privacy of neural information.

Within the burgeoning movement of neurorights advocates, there are real debates about how best to address companies worming their way into consumers’ brains. But there is broad opposition among advocates to the notion that AI and humans must become one for the species to survive — and broad concern about the private sector extracting neural data from consumers to speed that process along.

“That logic strikes me as very twisted,” said Susan Schneider, the director of the Center for the Future of AI, Mind and Society at Florida Atlantic University, adding that AI should be developed “in a way that protects privacy and promotes human flourishing.”

Ultimately, the simmering debate over collecting neural data is about a lot more than just privacy legislation. It cuts to the very heart of what makes us human, now and in the future.


Late last year, the neurotechnology company Kernel — founded by Bryan Johnson, a tech entrepreneur who insists he can live forever — published its quarterly newsletter. Alongside new product announcements, CEO Ryan Field wrote, “If you’ve been dreaming about building models on brain data, we’ve got the best solution for high-quality data collection at scale and are actively developing more advanced capabilities.”

In an interview, Field said the company sells technology that other companies can use to collect neural data and train large language models on it. “[Our technology] gives very rich and powerful information you can use to train new models to do all kinds of different things,” he said.

Field said Kernel doesn’t sell neural data, but that it wants to collect as much of it as it can, with consent, to build its own wearable devices that better measure cognitive health and activity. “I’m looking for people who will exchange their brain data in exchange for some kind of compensation,” he continued, noting the company’s work is in the research domain and includes clear consent forms.


But as neurotechnology companies become more consumer-oriented and less focused on clinical trials, it’s not entirely clear what information they are allowed to collect from their users. A small group of lawyers, scientists and advocates are now trying to protect user data from being bought and sold without their consent. That is, if they can get on the same page with one another.

Regulation of neural data at any governmental level remains in its infancy, but some states have begun to take up measures to stop companies from having access to certain kinds of information.

In Colorado, California and Connecticut, legislators have amended existing privacy statutes to include information generated by a “consumer’s central or peripheral nervous system.” Montana has gone a step further, imposing consent, access, deletion and destruction obligations around any neural data. And Minnesota is considering a broad “neurodata rights” framework that includes more than just an expansion of privacy protections.

At the federal level, Senate Minority Leader Chuck Schumer (D-N.Y.), along with Sens. Maria Cantwell (D-Wash.) and Ed Markey (D-Mass.), introduced the MIND Act last September. The bill would direct the Federal Trade Commission to study how neural data can reveal thoughts, emotions, or decision-making patterns — and how it and related data should be regulated.

Many of these proposals have been written in part by the Neurorights Foundation, a global advocacy group that reports funding from the Omidyar Network and the Alfred P. Sloan Foundation. The group, founded by neuroscientist Rafael Yuste in 2022, is trying to build regulations protecting brain data across Europe, Latin America and the United States. The foundation says it is currently working in nine other U.S. states to pass legislation similar to what it has already pushed across the finish line in California, Colorado, Montana and Connecticut.

The group is made up in large part of technologists who have no fundamental opposition to building neurotechnology but are concerned about safety. They aim for a targeted approach to regulation: expanding existing privacy laws to protect consumers’ neural data without stifling innovation.

“Neurotechnologies have wonderful and profound implications to enhance human flourishing,” said Stephen Damianos, the CEO of the Neurorights Foundation. “Without common sense regulations and safeguards, there is risk that humanity will never benefit from what these technologies have to offer. [That’s] because of heavy handed regulation that will come in to correct for harms that can and will occur because of enormous lack of public trust, because of scandals and actual instances of the technologies harming people.”

It’s an argument tailored for the industry and for the safety-conscious alike: Consider some reasonable safeguards now, so there aren’t angry mobs later. Damianos appears eager to head off the kind of caustic battles that have emerged over other AI-related issues like data center buildouts or LLM usage.

Not everyone agrees with the Neurorights Foundation’s approach.

Nita Farahany, a professor of law and philosophy at Duke and a leading scholar on emerging technologies, believes questions of neural data should be treated separately from other privacy issues rather than addressed by simply amending existing privacy law.

“The most intimate data is the data about what you’re thinking and feeling that could be gathered through neural data,” said Farahany. Even amid disagreement over how to treat data privacy issues more broadly, she said carving out distinct rules for what’s in our brain waves could be possible.

(Photo caption: The Beinao-1, a semi-invasive brain-computer interface system, is displayed during a press conference at the Chinese Institute for Brain Research in Beijing on March 19, 2026.)

Others worry that advocacy aimed specifically at protecting our innermost thoughts is the wrong approach. After all, even if most people are not close to having an implanted chip in their brain, wearable neurotechnologies — from smartwatches to sleep- and stress-tracking rings — are here and corporations are already gobbling up that data.

“Neural data can’t really reveal our private thoughts at the moment, so why are we raising the alarm about this right now? Anything that neural data can really reveal today can be revealed through other means,” said Anna Wexler, the principal investigator of the Wexler Lab at the University of Pennsylvania, where she studies the ethical, legal and social issues surrounding emerging technology.

That doesn’t mean regulation isn’t needed, in her view, but that it should cover information that’s already being collected by companies specializing in wearable technology. “Maybe it’s worth creating new laws or new legislation, but that shouldn’t be specific to neural data,” Wexler said. “Maybe it should more broadly capture inferences about mental states.”

Those in the industry bristle at the notion of having to abide by a series of different state laws governing neural data.

Field, the CEO of Kernel, argues any regulation should be done at the federal level. “For innovation to happen in this space, we can’t be navigating 50 different legislative agendas,” he said. “Let’s get the right stakeholders involved, so that you have actual subject matter experts and not just science fiction enthusiasts writing laws.”

This approach echoes the Trump administration’s broader stance on AI, which critics say amounts to letting industry run wild. Proponents of establishing neural data restrictions, many of whom are scientists themselves, insist that the companies working on neural data products are using the idea of competition with China and a potential patchwork of state legislation as a cudgel to shut down any and all regulation.

The debate remains fluid, in part because the field is still so nascent. Even many people working in the industry are unaware of the back-and-forth over some of these proposals. Phoenix insisted that he broadly believed in privacy protections but said he hadn’t heard of any of the specific state legislation governing neural data.


In an interview with Ross Douthat last year, Thiel was notably hesitant when asked whether the human race should survive. He eventually said yes, before adding, “But I also would like us to radically solve these problems. And so it’s always, I don’t know, yeah — transhumanism. The ideal was this radical transformation where your human, natural body gets transformed into an immortal body.”

Thiel is not alone; tech titans are increasingly talking about the idea that “humanity” might not look much like humankind as we know it.

“The next era of human is here,” Johnson, the anti-aging guru doing everything he can to his body to extend his lifespan, said in November. In January, Anthropic CEO Dario Amodei wrote, “I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species.”

This turns the already significant question of how best to maintain the privacy of our brains into an even more fraught discussion about the future of human life.

Phoenix says “transhumanism” is not a particularly useful term, but he is absolutely advocating for the merger of man and machine, arguing that failure to merge would inevitably end with a powerful AI destroying or enslaving humans.

“We either get on the train, or we are left behind in a way that’s profoundly bad for us,” he said. “I don’t think we are going to be able to control a God brain. I think we have the opportunity to humanize it.”

These ideas have produced fierce opposition from across the political spectrum.

“You’ve demoralized an entire generation, and told them that they can look forward to basically being pets to the machines or to billionaires with machines,” Joe Allen, a social conservative and contributor to Steve Bannon’s War Room, said in an interview last year. “If that actually comes true, nightmare.”

(Photo caption: A protester holds a placard reading “Musk murders monkeys” outside the Tesla Centre in Park Royal, where demonstrators gathered as part of the Tesla Takedown Global Day of Action against Tesla and Elon Musk.)

Plenty of critiques come from the left as well, with some arguing that the full-throated embrace of AI benefits only a small group of the tech elite. “You think they’re staying up nights worrying about working people and how this technology will impact those people?” Sen. Bernie Sanders (I-Vt.) said recently. “They are not. They are doing it to get richer and even more powerful.”

In general, AI accelerationists have a lot going for them at the moment. They have an ally in the White House, and an almost unlimited war chest ahead of the midterms. But public opposition to AI is real; an April POLITICO poll showed that only 13 percent of people believe the government should not regulate AI at all — a number that’s largely consistent across party affiliation.

According to Silicon Valley’s leading evangelists, it’s only a matter of time before chips are implanted in all of our brains. Perhaps they are fooling themselves, or perhaps they just see a chance to make a lot of money. But for many, this is their honest conviction.

For advocates and others concerned about privacy, no matter what the future looks like, it shouldn’t solely be determined by tech companies with a profit motive. Otherwise, humans lose a different kind of agency.

In fact, a political backlash to AI or to the massive collection of neural data could endanger the very dreams of those hoping to build a new world.

“Transhumanists that I know are very worried that their well-intentioned views on human flourishing could instead not be realized because of technosurveillance and human rights abuses,” said Schneider of the Center for the Future of AI, Mind and Society.

“Thought data is the most intimate and private data there is,” she added. “If and when abusive platforms gain control of our thought data and misuse it — and use it to manipulate our behavior unbeknownst to us — we will have ruined the very transhumanist prospect from flourishing.”

