
Brain Computer Interfaces (BCIs)

by Charles Adjovu
3 min read

The emergence of privacy risks and data ownership opportunities as we augment the brain.

What is a Brain Computer Interface (BCI)?

Brain-computer interfaces (BCIs) are interfaces for recording and processing neurological data and turning these data into an output, e.g., a signal to control an external device [1].

BCIs can be categorized along two dimensions:

  1. the actions they take based on brain activity, and
  2. their degree of invasiveness [1].

The benefits or use cases of BCIs include:

  1. diagnosing medical conditions (e.g., depression),
  2. modulating brain activity to treat neurological conditions, and
  3. improving accessibility for individuals with disabilities through connection to external support devices such as a robotic arm [1]; a simplified version of this record-process-output loop is sketched below.
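
To make the record-process-output loop concrete, here is a minimal Python sketch of a BCI turning one window of recorded signal into a command for an external device such as a robotic arm. The synthetic signal, the 8-12 Hz band of interest, and the threshold are illustrative assumptions, not the protocol of any real device.

```python
# Minimal sketch of the BCI loop: record neural activity, process it, and
# turn it into an output signal for an external device. The synthetic signal,
# the 8-12 Hz band, and the threshold are illustrative assumptions only.
import numpy as np

SAMPLE_RATE_HZ = 250          # typical EEG sampling rate (assumption)
WINDOW_SECONDS = 1.0

def record_window() -> np.ndarray:
    """Stand-in for reading one window of raw neurodata from a headset."""
    t = np.arange(0, WINDOW_SECONDS, 1.0 / SAMPLE_RATE_HZ)
    # Synthetic signal: a 10 Hz rhythm plus noise.
    return np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

def band_power(window: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Average spectral power in a frequency band, computed via an FFT."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / SAMPLE_RATE_HZ)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return float(spectrum[mask].mean())

def to_command(window: np.ndarray) -> str:
    """Map processed neurodata to a command for an external device."""
    return "MOVE_ARM" if band_power(window, 8, 12) > 50.0 else "IDLE"

print(to_command(record_window()))
```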

What is neurodata?

Neurodata is data about neurological activity [1]. It can be recorded directly from the brain, e.g., by a BCI, or indirectly, e.g., from an individual’s spinal cord [1]. AI/ML algorithms applied to neurodata can infer an individual’s mood, physiological characteristics, and arousal [1]. Neurodata can personally identify an individual on its own or when paired with other data associated with that individual [1].
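
As a rough illustration of the kind of AI/ML inference described above, the sketch below trains a toy classifier to guess a mood label from neurodata-derived features. The feature names, labels, and training values are invented for illustration; real inference pipelines are far more involved.

```python
# Hypothetical sketch of AI/ML inference on neurodata: a classifier that
# guesses a mood label from neurodata-derived features. The features, labels,
# and training data are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [alpha_band_power, beta_band_power, heart_rate] (assumed features).
X_train = np.array([
    [0.8, 0.2, 60],
    [0.7, 0.3, 62],
    [0.2, 0.9, 95],
    [0.3, 0.8, 90],
])
y_train = np.array(["calm", "calm", "stressed", "stressed"])

model = LogisticRegression().fit(X_train, y_train)

new_recording = np.array([[0.25, 0.85, 92]])
print(model.predict(new_recording))  # e.g., ['stressed']
```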

What are the privacy risks associated with BCIs?

The privacy risks include:

  1. unauthorized access to personal information and to inferences drawn from such data,
  2. the ability to draw conclusions about an individual that reach beyond their thoughts to their specific biology and preferences,
  3. the use of neurodata, on its own or combined with other personally identifying information, in decisions that third parties make about the individual without the individual’s consent or knowledge, and
  4. the use of neurodata for marketing purposes and for selling goods or services [1].

Additionally, neurodata raises legal risks under the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) [1]. Neurodata raises HIPAA concerns because it falls under HIPAA’s definition of personally identifying information, so entities that process neurodata must determine whether they are a covered entity under HIPAA (e.g., a physician or hospital) or a third party that must comply with certain HIPAA regulations because of a business relationship with a covered entity [1, 2]. Generally, HIPAA does not apply to wellness companies that manufacture wellness devices [1].

Neurodata is subject to the GDPR in Europe because it can be considered personal data (health data or biometric data), which means there must be lawful grounds for processing an individual’s neurodata [1].

Additional concerns arise when determining whether fault lies with the BCI device user or the BCI device itself (e.g., in the case of a malfunction) in an incident where a BCI device user causes harm to another person or property [5].

What are the Governance or Technical Solutions for Data Ownership and Privacy?

Some potential solutions to these privacy risks that can preserve data ownership for BCI device users include:

  1. Encryption: encrypt a user’s neurodata on the BCI device so that other people cannot decipher it, and use end-to-end encryption (E2EE) when neurodata is shared between a BCI device user and a third party or cloud server [1] (see the encryption sketch after this list);
  2. Local-first software: store neurodata locally on the user’s device, with applications requesting permission for cloud access [5];
  3. Separation of data and compute (or edge computing): have BCI devices run AI/ML inference on the device itself, so that BCI users can send only the results to a cloud server rather than sharing their raw neurodata [1, 3, 6] (see the edge-computing sketch after this list);
  4. Access control layer: with blockchain technology, smart contracts can provide an access control and identity layer for neurodata that prevents unwanted access to neurodata by third parties [3]; and
  5. Data cooperatives: BCI device users can form a cooperative to manage and govern their data, and the cooperative can provide a forum for stakeholders, including researchers, technologists, and users, to discuss the ethical issues around sharing and using neurodata [6, 7].
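
A minimal sketch of solution 1 (encryption): neurodata is encrypted on the device with a key that stays with the user, so a cloud server or third party only ever sees ciphertext. It uses the Fernet symmetric cipher from Python’s cryptography package; a real end-to-end encryption scheme would also need key exchange and rotation, which are omitted here.

```python
# Sketch of solution 1: encrypt neurodata on the BCI device so that only the
# user (the key holder) can read it, even if the ciphertext is synced to a
# cloud server. Key management and rotation are omitted for brevity.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # stays with the user, never uploaded
cipher = Fernet(key)

recording = {"channel_1": [0.12, 0.09, 0.15], "timestamp": "2024-01-01T10:00:00Z"}
ciphertext = cipher.encrypt(json.dumps(recording).encode("utf-8"))

# Only the ciphertext is shared with third parties or cloud storage.
upload_to_cloud = ciphertext

# The user (or an app they authorize) decrypts locally with the key.
restored = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
assert restored == recording
```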
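
A minimal sketch of solution 3 (edge computing): inference runs on the device and only the derived result is transmitted, never the raw neurodata. The endpoint URL, payload format, and the toy "model" are hypothetical.

```python
# Sketch of solution 3: run inference on the device (edge computing) and share
# only the derived result, never the raw neurodata. The endpoint URL and
# payload format are hypothetical placeholders.
import json
from urllib import request

def infer_mood_locally(raw_neurodata: list[float]) -> str:
    """Stand-in for an on-device AI/ML model; raw data never leaves the device."""
    return "calm" if sum(raw_neurodata) / len(raw_neurodata) < 0.5 else "stressed"

raw_neurodata = [0.1, 0.2, 0.15, 0.3]          # stays on the device
result = {"mood": infer_mood_locally(raw_neurodata)}

# Only the result is transmitted; the cloud service never sees the raw data.
req = request.Request(
    "https://example.com/api/results",          # placeholder endpoint
    data=json.dumps(result).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req)  # left commented out; the endpoint is illustrative only
```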

References

  1. https://fpf.org/blog/bci-technical-and-policy-recommendations-to-mitigate-privacy-risks/
  2. https://fpf.org/blog/bcis-data-protection-in-healthcare-data-flows-risks-and-regulations/
  3. https://www.personal.ai/privacy
  4. http://learn.neurotechedu.com/introtobci/#ethics
  5. https://www.inkandswitch.com/local-first/
  6. https://polypoly.coop/en-de/FAQ/#polyPod
  7. https://www.midata.coop/en/cooperative/

Charles Adjovu

Recently admitted attorney looking for opportunities in intellectual property, privacy and data security, blockchain and cryptocurrencies, and legal technology. Researching the legal implications and applications of emerging technologies is one of my passions, and I am always looking for new projects and collaborators working at this intersection. Beyond researching the legal implications of emerging technologies, I am also studying the use of emerging technology to help the legal profession (i.e., legaltech), with my favorite use so far being Casetext's CARA. I currently manage Ledgerback, a member-driven digital platform for research, analytics, and education in blockchain, decentralization, and cooperativism. My co-founders and I started Ledgerback to help Nevadans learn about the benefits of blockchain technology and to actively grow the literature in the field (with my interests generally falling on the legal and utility applications of blockchain).

Ideas In Brief
  • Brain-computer interfaces (BCIs) are interfaces for recording and processing neurological data and turning these data into an output.
  • Neurodata can be recorded directly, e.g., by a BCI, or indirectly, e.g., from an individual’s spinal cord.
  • There are particular privacy risks associated with BCIs that may call for the following solutions:
    1. Encryption
    2. Local-first software
    3. Separation of data and compute (or edge computing)
    4. Access control layer
    5. Data cooperatives
