🌐Cognitive Privacy at Risk: The Next Frontier in AI and Data Protection
- christopherstevens3
- Jul 18
- 27 min read

🧠 Introduction
Artificial intelligence (AI) is converging with neuroscience, behavioral science, and emotion analytics. This convergence is creating a profound new frontier of privacy that transcends traditional identifiers and biometric data. This frontier is cognitive privacy: the right to mental integrity, psychological self-determination, and protection from intrusive technologies that access, infer, or manipulate our thoughts and emotions. Today, tools such as brain-computer interfaces (BCIs), emotion recognition systems, sentiment analysis platforms, and neuro-marketing applications do more than observe behavior. They strive to decode and influence our internal cognitive states (Ienca & Andorno, 2017; UNESCO, 2022).
Despite the pace of innovation, legal and regulatory systems remain largely unprepared. Traditional data protection laws such as the EU General Data Protection Regulation (EU GDPR) and Japan’s amended Act on the Protection of Personal Information (APPI) robustly regulate identifiable and sensitive personal data. However, they often lack specific protection for inferred mental states or neural data (European Union, 2016; Kim & Chang, 2023). Meanwhile, emerging AI models are passively creating cognitive profiles without the subject’s awareness or consent. They are undermining core data protection principles such as collection limitation, informed consent, and purpose limitation (Brey & Dainow, 2023).
This legal and regulatory lag raises critical questions: How should the law or regulation distinguish between observable behavior and inferred cognition? What constitutes mental autonomy in an era of neuro-invasive AI? Can existing legal and regulatory safeguards meaningfully protect individuals from emotional profiling, cognitive manipulation, or thought surveillance?
This article addresses these questions by examining the conceptual foundations, legal and regulatory gaps, and policy imperatives of cognitive privacy. It provides a global overview of legislative and regulatory efforts, assesses the ethical and technical challenges of emerging AI applications, and proposes a framework to align legal safeguards with evolving threats to mental integrity. To ground the discussion, the following section introduces the key terms that shape the landscape of cognitive privacy.
Figure 1 visualizes how seemingly innocuous behavioral signals can lead to high-impact decision outcomes. It clarifies how AI-enabled systems generate and act on cognitive inferences that often bypass user awareness, understanding, and consent. The flow underscores the urgent need for regulatory safeguards and ethical oversight throughout the entire process of cognitive data exploitation.
Figure 1: From Behavioral Signals to Cognitive Inferences and Decision Outcomes

🗝️ Key Terms
To navigate the legal, ethical, and technological challenges posed by cognitive privacy, it is essential to understand the foundational terminology shaping this domain. The following key terms define the emerging tools, concepts, and data types implicated in the intersection of AI, neuroscience, and behavioral analytics. These definitions are supported by peer-reviewed research, regulatory guidance, or authoritative industry analysis. They establish a shared language for discussing the scope and impact of cognitive privacy.
Table 1 provides simplified definitions and real-world examples of key terms used throughout this article. These key terms are explained in greater detail later.
Table 1: Glossary of Core Terms in Cognitive Privacy, AI, and Data Protection
Term | Plain-Language Definition | Example / Use Case |
Brain-Computer Interface (BCI) | A device that allows the brain to send signals directly to a computer or machine. | A neurogaming headset that detects attention via EEG. |
Cognitive Privacy | The right to control access to your thoughts, feelings, and mental processes. | Laws that prevent companies from using brainwave data to profile users. |
Digital Phenotyping | The use of smartphones and sensor data to monitor behavior and infer cognitive or emotional states. | An app that tracks typing speed and social activity to assess mental health. |
Emotion AI | AI that detects or infers emotional states from voice, face, or physiological signals. | A hiring platform that analyzes facial expressions during interviews. |
Inferred Data | Information that AI predicts about you, such as your mood or political views, without you directly providing it. | An ad platform that infers you are anxious and sells you calming supplements. |
Mental Privacy | The right to keep your inner thoughts, emotions, and intentions private from digital systems. | A law banning emotion recognition in classrooms or the workplace. |
Neural Data | Brain-derived data collected from EEGs, implants, or other neurotech. | EEG signals showing focus level during a concentration test. |
Neural Data Privacy | The legal and ethical protection of brain data from unauthorized access or misuse. | Regulations requiring opt-in consent before collecting EEG data. |
Neurotechnology | Tools that interact with or monitor the nervous system to interpret or influence brain activity. | Devices used in therapy, productivity tracking, or consumer research. |
Sentiment Analysis | AI technique used to determine the emotional tone in text or speech. | Analyzing tweets to gauge public mood about a political issue. |
Source Note: Table 1 utilizes terminology drawn from peer-reviewed research, regulatory documents, and interdisciplinary literature on cognitive privacy, AI ethics, neurotechnology, and data protection. Definitions draw from sources including Ienca & Andorno (2017), Malgieri & Custers (2018), Magee et al. (2024), IBM (2023), and regulatory guidance from the European Union and UNESCO. This glossary is intended for educational and explanatory purposes and does not represent legally binding definitions.
While the glossary above provides simplified definitions and examples, the following section offers deeper context and formal explanations of each key term. These extended definitions are supported by academic research and regulatory frameworks to help clarify their significance within the evolving landscape of cognitive privacy.
🗝️Expanded Definitions and Contextual Foundations
The following section expands on the glossary by providing formal definitions of the key terms central to cognitive privacy discourse. Each term is situated within its legal, ethical, and technological context to clarify its contribution to the broader regulatory and policy landscape. These definitions are grounded in peer-reviewed scholarship, official guidance, and real-world applications. They offer a deeper understanding of how emerging technologies interact with neural data, mental autonomy, and AI-driven inference systems.
🌐Brain-Computer Interface (BCI): A brain-computer interface (BCI) is a neurotechnology that enables direct communication between a user's brain and external digital devices, bypassing conventional neural pathways. BCIs are being developed for medical treatment, military applications, and consumer technology (Becher & Glover, 2025; Ienca & Andorno, 2017).
🌐Cognitive Privacy: Cognitive privacy refers to the right to mental self-determination and psychological integrity. It protects individuals from technological intrusions that monitor, infer, or manipulate cognitive processes or internal states. This concept is increasingly discussed in the context of human rights, AI ethics, and neuro-law (Ienca & Andorno, 2017; Schiliro et al., 2020).
🌐Digital Phenotyping: Digital phenotyping involves continuous monitoring of behavioral and cognitive patterns through smartphone sensors and digital interactions. It is gaining traction in clinical psychology and mental health analytics, while also raising concerns about cognitive privacy due to the sensitivity of inferred data (De Boer et al., 2023; Oudin et al., 2023).
🌐Emotion AI: Emotion AI, also known as affective computing, uses artificial intelligence to detect or infer emotional states from biometric cues such as facial expressions, vocal tone, or physiological signals. Its use in employment, law enforcement, and marketing has prompted criticism over accuracy and bias (Crawford, 2021; Somers, 2019).
🌐Inferred Data: Inferred data refers to personal data generated by algorithms that conclude users’ characteristics, intentions, or preferences without explicit input. These inferences are often opaque and not easily contested by users, raising concerns about consent, fairness, and accountability (Malgieri & Custers, 2018).
🌐Mental Privacy: Mental privacy (or cognitive privacy) is the right to maintain the confidentiality and autonomy of one’s internal thoughts, emotions, and cognitive states. It seeks to protect individuals from unauthorized technological access to or manipulation of their minds, particularly in contexts involving AI, emotion recognition, sentiment analysis, or neural interfaces. Mental privacy is increasingly framed as a fundamental human right in the context of neurotechnology and AI governance (European Parliament, 2024; Ienca & Andorno, 2017; Schiliro et al., 2020).
🌐Neural Data: Neural data refers to brain-derived information captured through neurotechnologies, such as electroencephalography (EEG), brain-computer interfaces (BCIs), or neural implants. It may reflect electrical activity, attention levels, emotions, or cognitive patterns. Neural data is often collected involuntarily and can reveal highly sensitive mental states, posing complex ethical and privacy risks (Becher & Glover, 2025; Kelly, 2025; Magee et al., 2024).
🌐Neural Data Privacy: Neural data privacy refers to the protection of brain-derived data, including electrical activity, cognitive states, or neural responses, collected through neurotechnologies like EEG headsets or BCIs. It addresses the need to safeguard such data from unauthorized access, inference, or manipulation, given its deeply sensitive nature and the risk of psychological intrusion. Neural data is often generated passively and is challenging to anonymize, making it a uniquely vulnerable category of information (Kelly, 2025; Magee et al., 2024).
🌐Neurotechnology: Neurotechnology encompasses devices and systems that interface with or monitor the nervous system, such as neural implants, electroencephalography (EEG) headsets, and BCIs. These technologies are utilized in clinical, commercial, and military contexts, raising ethical concerns related to autonomy and cognitive liberty (Muller & Rotter, 2017).
🌐Sentiment Analysis: Sentiment analysis is a natural language processing technique used to determine the emotional tone of digital communications, including emails, social media posts, and product reviews. It is commonly used in marketing and customer service, but it can oversimplify complex emotional states (IBM, 2023).
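To make the mechanics concrete, the sketch below shows a minimal, lexicon-based sentiment scorer of the kind that predates modern model-driven approaches. The word lists, scoring rule, and example posts are illustrative assumptions, not a description of any vendor's system; they simply show how quickly free text becomes an "emotional tone" score.

```python
# Minimal, illustrative lexicon-based sentiment scorer.
# The word lists and scoring rule are hypothetical simplifications;
# production systems typically rely on trained language models.

POSITIVE = {"good", "great", "love", "excellent", "happy", "calm"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "anxious", "angry"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: negative, neutral, or positive tone."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

if __name__ == "__main__":
    for post in ["I love this product, it makes me happy",
                 "This policy is terrible and makes me anxious"]:
        print(f"{sentiment_score(post):+.2f}  {post}")
```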
🚨 The Rise of Technologies That Threaten Cognitive Privacy
The growing integration of AI with neuroscience and behavioral analytics has ushered in a new class of technologies that interact with the human mind in unprecedented ways. These tools go beyond traditional data collection. They attempt to access, infer, or manipulate cognitive and emotional states, often passively and without user awareness. As these innovations advance rapidly in sectors like employment, healthcare, education, law enforcement, and marketing, they raise urgent legal and ethical concerns. Below, we examine key categories of technologies that collectively challenge the boundaries of mental privacy and cognitive liberty.
😐 Affective Computing Systems: Affective computing, also known as emotion AI, refers to systems that detect and respond to human emotional states based on cues like facial expressions, voice intonation, and physiological signals (Dilmegani & Arslan, 2025). These systems are utilized in sectors including hiring, policing, customer service, and education. However, critics have raised alarms over their low accuracy, cultural bias, and susceptibility to misuse (Crawford, 2021). For instance, facial analysis tools often misclassify emotions across racial groups, amplifying discrimination in automated decision-making.
🧩 Brain-Computer Interfaces (BCIs): BCIs are neurotechnologies that enable direct communication between the human brain and external devices. Initially developed for medical use, these devices are now entering consumer markets through wearable headsets designed for gaming, productivity, and wellness. BCIs can detect attention, stress levels, or emotional states through EEG signals. This creates privacy challenges because such signals may be collected without users fully understanding the depth or implications of the data being inferred (Becher & Glover, 2025; Ienca & Andorno, 2017).
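To illustrate how consumer neurotech can turn raw signals into cognitive claims, the sketch below estimates a crude "attention" proxy from a synthetic EEG trace by comparing beta-band to alpha-band power. The synthetic signal, band boundaries, and ratio-based interpretation are assumptions for demonstration only and do not reflect how any particular headset actually works.

```python
# Illustrative sketch: a crude "attention" proxy from one EEG channel,
# comparing beta-band (13-30 Hz) to alpha-band (8-12 Hz) power.
# The synthetic signal and the ratio-based rule are assumptions for
# demonstration; commercial headsets use proprietary methods.

import numpy as np
from scipy.signal import welch

fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 seconds of data

# Synthetic EEG-like signal: alpha + beta components plus noise.
signal = (1.0 * np.sin(2 * np.pi * 10 * t)      # alpha activity
          + 0.6 * np.sin(2 * np.pi * 20 * t)    # beta activity
          + 0.5 * np.random.randn(t.size))      # background noise

def band_power(freqs, psd, lo, hi):
    """Sum power spectral density over a frequency band (relative units)."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
alpha = band_power(freqs, psd, 8, 12)
beta = band_power(freqs, psd, 13, 30)
attention_proxy = beta / alpha   # higher ratio read (loosely) as "focused"

print(f"beta/alpha ratio: {attention_proxy:.2f}")
```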
📊 Digital Surveillance and Mood Monitoring: In corporate settings, AI-driven mood monitoring platforms track worker behavior, including typing patterns and tone of voice, to infer stress, fatigue, or morale. While companies argue these tools optimize performance, they also risk infringing on cognitive privacy and psychological boundaries through the collection and use of neural and behavioral data (Magee et al., 2024). The continuous assessment of emotional states can lead to over-monitoring, manipulation, and workplace discrimination (Cox, 2024; Freedman, 2024).
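The inferential leap these platforms make can be shown with a deliberately simple sketch: keystroke timing is reduced to two statistics and mapped to a mood label. The thresholds, features, and labels below are invented for illustration and have no scientific validity; the point is how easily routine behavior becomes a claim about a worker's mental state.

```python
# Hypothetical sketch of keystroke-based mood inference. The thresholds,
# features, and "stressed" label are invented for illustration and carry
# no scientific validity.

from statistics import mean, pstdev

def infer_state(intervals_ms: list[float]) -> str:
    """Map inter-keystroke intervals (milliseconds) to a mood label."""
    avg = mean(intervals_ms)
    jitter = pstdev(intervals_ms)
    if avg < 120 and jitter > 60:
        return "stressed"        # fast and erratic typing
    if avg > 300:
        return "fatigued"        # slow typing
    return "baseline"

session = [95, 60, 80, 70, 250, 65, 90, 300, 85, 75]
print(infer_state(session))      # prints "stressed" for this hypothetical session
```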
🛍️ Neuro-Marketing Platforms: Neuro-marketing tools combine biometric and neural data (e.g., eye movements, heart rate, and brain activity) to assess consumer engagement. These platforms enable companies to optimize advertising strategies by targeting subconscious preferences. However, the lack of user control over how these insights are generated and deployed raises concerns about cognitive manipulation and autonomy (Neurons, 2025).
🧬 Sentiment and Personality Profiling: Sentiment analysis tools are widely used to assess the tone and emotional valence of online text or speech. When combined with behavioral profiling, these tools are used to create detailed cognitive and emotional profiles of individuals. Inferred personality traits are then used to personalize content, influence political messaging, or automate hiring decisions. The opacity of these systems and the limited recourse for correcting misclassifications exacerbate the risks (Binns, 2017; Malgieri & Custers, 2018).
🌍 Global Legislative Momentum: Mapping the Emerging Governance of Cognitive Privacy
As cognitive-inferential technologies rapidly evolve, countries vary widely in how they recognize and regulate cognitive privacy. Some jurisdictions have adopted robust protections, such as constitutional neuro-rights or AI-specific safeguards. Other jurisdictions rely on fragmented or sector-specific rules, and many have taken no formal action at all.
To provide a comparative overview of global readiness, the heat map below visualizes the status of cognitive privacy regulation across jurisdictions as of mid-2025. It categorizes countries into four tiers: Advanced, Emerging, Fragmented, and Absent. These tiers are based on the presence, scope, and enforceability of laws governing neural data, inferred mental states, and AI systems that affect cognitive autonomy.
Figure 2: Global Status of Cognitive Privacy Regulation (as of 2025)

Table 2 defines the color categories used in Figure 2 to represent each country’s level of cognitive privacy regulation. These classifications reflect the presence, scope, and enforceability of legal protections related to neural data, inferred mental states, and emotion AI. Each tier is illustrated with representative country examples to support interpretation and comparative analysis.
Table 2. Color Legend for Global Cognitive Privacy Regulation Map
Color | Status | Examples |
🟩 Green | Advanced: Clear legal protections or constitutional rights | Chile, EU (AI Act), Colorado, California |
🟨 Yellow | Emerging: Partial or sector-specific protections proposed | Brazil (PL 522/2022), South Korea, Singapore |
🟧 Orange | Fragmented: Subnational or advisory efforts only | U.S. (state-level only), Canada (stalled bills) |
🟥 Red | Absent: No meaningful legal protections or proposals | India, much of Africa, the Middle East |
Source Note: Adapted from publicly available legislation, policy briefings, and legal analyses, including Future of Life Institute (2025), UNESCO (2023), OECD (2024), and national law firm reports.
While the heat map and legend provide a high-level visualization of global regulatory maturity, they represent only a snapshot of an increasingly dynamic legal landscape. To understand how cognitive privacy is gaining traction in practice, it is necessary to examine how individual jurisdictions are beginning to legislate or adjudicate issues related to neural data, emotion recognition, and mental inference. The following section outlines notable efforts by countries and regions to regulate technologies that interact directly or indirectly with the human mind.
🌐Comparative Overview of National and Regional Legal Frameworks
Around the world, lawmakers are increasingly confronted with the need to regulate technologies that interact with the human mind. AI systems capable of inferring emotion, decoding attention, or manipulating behavior pose novel risks that current laws and regulations are often unprepared to address. While no jurisdiction has yet implemented a unified cognitive privacy regime, some are taking the first steps through AI laws, constitutional rights, or targeted data protection reforms.
🇧🇷 Brazil: Neurodata Reform Advancing, AI Law Omits Cognitive Privacy
Brazil is progressing on two parallel legal fronts:
📊Bill 2,338/2023, the country’s pending AI legislation, adopts a risk-based model but does not define or regulate neural data, cognitive inference, or emotion recognition (Covington & Burling LLP, 2024; Zanatta & Rielli, 2024).
📊PL 522/2022, a separate bill, proposes an amendment to the LGPD to define “neurodata,” brain-derived data processed by neurotechnology, as sensitive personal data. It mandates explicit, informed consent for its use. The Health Committee approved the bill in October 2023, and it remains under review as of mid-2025 (Do et al., 2024).
🇨🇦 Canada: AI Legislation Abandoned, Cognitive Privacy Unregulated
Canada’s Bill C‑27, which included the Artificial Intelligence and Data Act and the Consumer Privacy Protection Act, died in January 2025 when Parliament was prorogued. As of mid-2025, neither bill has been reintroduced, leaving Canada without a modernized AI regulatory framework or data protection law (Werner, 2025).
📊PIPEDA, Canada’s current data protection law, remains in force.
📊There is no Canadian federal law that defines or protects neural data, emotion recognition outputs, or inferred cognitive states.
📊While some Canadian provinces (e.g., Alberta, Ontario, and Quebec) provide limited AI legal protections, there are no binding laws protecting cognitive privacy or mental privacy (Wall et al., 2025).
🇨🇱 Chile: First Country with Constitutional Neuro-Rights
Chile leads the world in cognitive privacy by embedding neuro-rights in its constitution (UNESCO, 2022), thereby protecting individuals from unauthorized access to or manipulation of their brain data. In a landmark 2023 case, the Supreme Court enforced these protections, ordering the deletion of brainwave data collected without consent (The Neurorights Foundation, 2023).
🇪🇺 European Union: Dual Safeguards via GDPR and AI Act
The EU now offers a two-layer protection model:
📊The EU GDPR protects health, psychological, and biometric data under Article 9 and protects data subjects from certain forms of automated decision-making under Article 22. However, it does not explicitly cover cognitive inferences or neural data.
📊The EU AI Act (Regulation 2024/1689) Article 5 bans emotion-recognition AI in workplaces and education (Future of Life Institute, 2025). It also prohibits systems that exploit psychological vulnerabilities and classifies emotional profiling tools as high-risk, requiring transparency and oversight (Future of Life Institute, 2025). The ban took effect on February 2, 2025 (Der Belen, 2025).
🇮🇳 India: No Legal Protections Yet, Calls for Neuro-Rights Growing
India’s 2023 Digital Personal Data Protection Act governs the collection, use, and disclosure of personal data, but it contains no provisions for inferred mental states, emotion AI, or neural signals. Scholars and advocacy groups continue to push for a dedicated neuro-rights framework, but no legislative proposals have been introduced (Garg, 2022).
🇰🇷 South Korea: AI Focus Expands
The Second Amendment to the Enforcement Decree of South Korea’s Personal Information Protection Act enhances the rights of data subjects regarding automated decision-making (Lee & Ko, 2024). The amendment, however, does not protect cognitive privacy or neural data. The Artificial Intelligence Basic Act (2024) established a legal framework intended to promote AI competitiveness while upholding ethical standards and fostering public trust (International Trade Administration, 2025).
🇸🇬 Singapore: Ethical AI Leadership, but No Cognitive Protections
Singapore’s AI Verify and Model AI Governance Framework demonstrate the country’s commitment to AI governance. Its eleven governance principles include “transparency, explainability, repeatability/reproducibility, safety, security, robustness, fairness, data governance, accountability, human agency and oversight, and inclusive growth, societal and environmental well-being” (Personal Data Protection Commission, 2025). Singapore’s Personal Data Protection Act also applies to the ethical, responsible, and trustworthy collection of personal data throughout the information lifecycle (Healy, 2024). Neither framework, however, specifically addresses cognitive or neural data.
🌐 OECD & UNESCO: Ethical Vision, Legal Gaps
The OECD AI Principles and UNESCO’s AI Ethics Recommendation advocate for psychological autonomy, mental integrity, and protection from emotional manipulation. While influential, these instruments remain non-binding and rely on national governments for legal adoption (OECD.AI, 2024; UNESCO, 2022).
🧠 Global Insight
Chile’s neuro-rights law and the EU AI Act’s ban (Article 5(1)(f)) on emotion-recognition AI systems in workplaces and education mark the strongest global steps to date toward governing cognitive privacy (Future of Life Institute, 2025). Brazil may join this group if PL 522/2022 is enacted. Conversely, Canada’s abandonment of AI governance legislation demonstrates that some countries have a long way to go. As neurotechnology and cognitive inference tools proliferate, legal and regulatory systems must evolve to treat mental data as a protected class and cognitive privacy as a human right.
🇺🇸 U.S. State-Level Legislative Momentum
As cognitive-inferential technologies evolve rapidly, a growing number of U.S. states have taken initial but critical steps to address the data protection risks associated with cognitive privacy, mental data, and neural data. These efforts mark the early contours of what could become a new layer of state-level digital rights protection in the United States.
Several states have begun to recognize neural and cognitive data as distinct and sensitive, signaling the emergence of an AI legislative trend that acknowledges the ethical complexity and societal impact of these technologies:
🧠California amended the California Consumer Privacy Act (CCPA), as previously expanded by the California Privacy Rights Act, in September 2024 to define a California consumer’s neural data as sensitive personal data (Hunton, 2024). The amendment provides California consumers with expanded control over their brain-generated data, including the right to “request, delete, and restrict the sharing of their neural data” (Khattar, 2024).
🧠Colorado’s H.B. 24-1058 amends the Colorado Privacy Act to protect neural data and classify it as sensitive data. Colorado was the first U.S. state to regulate the use of neurotechnology and to protect data generated from a person’s brain, spinal cord, or nervous system via a device (Tene & Klosek, 2024).
🧠Connecticut has incorporated “inferences about mental health” into the definition of sensitive data under its comprehensive data privacy law, extending protections to cognitive and neural data produced through behavioral analysis and profiling (James, 2025).
🧠Montana’s S.B. 163 amends the state’s Genetic Information Privacy Act by adding protections for the processing of neural data alongside genetic information (Cooley, 2025a). Montana’s law exceeds the neural privacy protections seen in states like California and Colorado (Cooley, 2025a).
While no federal law currently protects cognitive or neural data in the United States, several states have begun enacting their own legislative safeguards. These laws vary in scope, terminology, and enforcement mechanisms, resulting in a patchwork of protection for neural signals, inferred mental states, and emotion-related data.
Table 3 provides a comparative overview of key U.S. state laws as of mid-2025. It highlights how select states define neural and cognitive data, what types of consent models they require, and which unique provisions reflect early efforts to regulate neurotechnology and emotion-inference systems.
Table 3. Comparative Overview of U.S. State Cognitive Privacy Laws (as of 2025)
State | Neural Data Classified as Sensitive | Cognitive or Inferred Mental Data Protected | Consent Model | Notable Provisions |
California | ✅ Yes (via CCPA as amended by SB 1223) | ❌ Not explicit | Opt-out | Consumers may request deletion or restriction of neural data |
Colorado | ✅ Yes (via HB 24-1058) | ⚠️ Partially (e.g., inferred emotional states) | Opt-in | First state to regulate neurotech & brain-device data use |
Connecticut | ⚠️ Implicit via “inferences about mental health” | ⚠️ Partial mental profiling | Opt-in | Includes inferred mental health under sensitive data |
Montana | ✅ Yes (via SB 163) | ✅ Yes | Opt-in | Covers both neural and genetic information with strong safeguards |
Source Note: Table 3 compiled from publicly available legislative summaries and regulatory commentaries, including sources from Cooley LLP (2025), Tene & Klosek (2024), Khattar (2024), and the Future of Privacy Forum (2024). Data reflects laws enacted or in effect as of July 2025.
These statutes reflect a broader shift toward acknowledging that mental autonomy and cognitive integrity are integral to cognitive privacy. However, while these laws represent promising advances, they vary widely in scope, specificity, and enforcement strength. Most notably, few offer comprehensive frameworks to govern the inference, processing, or use of cognitive and neural data in real-world applications, particularly when such data is collected through workplace monitoring, consumer profiling, or predictive analytics.
Moreover, most of these laws stop short of explicitly regulating non-observable or algorithmically generated cognitive inferences, which are increasingly used in sentiment analysis, emotional AI, and psychographic targeting. This gap underscores the urgent need for legal and regulatory models that classify cognitive and neural data as sensitive and that address how such data is generated and the contexts in which it is deployed.
Despite the accelerating convergence of AI, behavioral analytics, and neuroscience, most countries have yet to fully confront the profound legal and ethical implications of cognitive and neural privacy. Global data protection regimes such as the EU GDPR offer robust protection for personal and sensitive data, including health and biometric information. Unfortunately, the EU GDPR and similar laws and regulations fall short in protecting the cognitive and neural data of data subjects (Yang & Jiang, 2025).
The U.S. state-level momentum suggests that cognitive privacy is no longer a theoretical concern. It is a real-life policy issue unfolding jurisdiction by jurisdiction. States are laying the groundwork for stronger mental data rights. However, with no federal law in place and no national harmonization or comprehensive safeguards, individuals are left with an uneven patchwork of state protections.
📉 Why Existing Legal and Regulatory Frameworks Fall Short
Identifiability, notice, and consent are increasingly misaligned with the realities of cognitive and neural data inference. These principles were designed to regulate information that individuals knowingly share or that is directly linked to their identity. However, cognitive and emotional data are often not provided; they are inferred, predicted, or constructed by algorithms analyzing digital behaviors, facial cues, voice tone, and even neural signals (Ienca & Andorno, 2017; Malgieri & Custers, 2018).
This creates a critical mismatch between what data privacy and data protection laws and regulations protect and how cognitive, mental, and neural data are generated and used. Inferred cognitive data may be passively harvested from routine interactions, such as typing rhythms, browsing habits, or biometric feedback, and collected and processed without the user's explicit disclosure, understanding, or control (Magee et al., 2024). Consequently, these inferences evade the core requirement of consent, rendering the traditional opt-in model ineffective and obsolete in many high-risk contexts.
Even more troubling is the unverifiable nature of many cognitive inferences. Emotion recognition systems and psychographic profiling tools often rely on proprietary algorithms with opaque logic (Wright, 2020). These systems may predict attributes such as stress levels, trustworthiness, or attention span, but with questionable accuracy and potential cultural bias (Crawford, 2021). Individuals are rarely aware that these predictions are being made, let alone given the chance to challenge or correct them (Mantello et al., 2023). This opens the door to behavioral manipulation, algorithmic discrimination, and psychological nudging without transparency or due process.
In environments like hiring, advertising, law enforcement, and education, these flawed inferences can shape life opportunities, institutional treatment, or personal outcomes (Bustamante et al., 2022). Nevertheless, current legal and regulatory frameworks do not classify such inferred mental data as sensitive, nor do they impose auditability, explainability, or redress mechanisms. The result is a systemic regulatory blind spot where cognitive profiling can flourish unchecked (Binns & Veale, 2017).
To keep pace with these developments, data privacy and data protection laws and regulations must move beyond the binary logic of consent and identifiability. They must begin treating inferred mental states as legally meaningful and ethically consequential, not merely as data artifacts, but as proxies for our inner life, autonomy, and dignity.
📁 Real-World Cases and Enforcement in Cognitive Privacy
As cognitive privacy technologies transition from theoretical constructs to widely deployed tools, regulators are beginning to respond. From constitutional challenges to algorithmic hiring audits, jurisdictions worldwide are testing the boundaries of emotional AI, neural data surveillance, and psychographic inference. This section highlights a set of real-world enforcement actions and controversies that illustrate how lawmakers, regulators, and courts are beginning to address cognitive privacy risks in practice.
🔲 Enforcement Snapshots: Real-World Case Studies in Cognitive Privacy
🧠 European AI Act Ban (2025)
Jurisdiction: European Union
Policy Action: Emotion-recognition systems used in employment and education were banned under Article 5 of the EU Artificial Intelligence Act.
Impact: Established a legal precedent for treating emotion inference as an “unacceptable risk,” reinforcing mental integrity as a protected interest.
Citation: Future of Life Institute, 2025
🛡️ Oregon AG Guidance on Sentiment Profiling (2024)
Jurisdiction: Oregon, United States
Policy Action: The Oregon Attorney General issued guidance warning that personality and sentiment-based AI tools may violate state privacy and consumer protection laws if deployed without meaningful consent and safeguards.
Impact: Served as a proactive regulatory signal, raising awareness of emotional AI’s legal risk prior to formal legislation.
Citation: Rosenblum, 2024; Canter & Ponder, 2025
🖥️ xAI Hubstaff Controversy (2025)
Entity: xAI (Elon Musk’s artificial intelligence firm)
Issue: Staff were reportedly required to install Hubstaff surveillance software capable of tracking activity, productivity, and emotional tone on personal devices.
Impact: Sparked resignations and public backlash; highlighted ethical and legal concerns around psychological surveillance and employee consent.
Citation: Adolphus, 2025 (The Daily Beast)
⚖️ NYC Local Law 144 Bias Audit Mandate (2023)
Jurisdiction: New York City, NY, United States
Policy Action: Local Law 144 mandates annual bias audits and applicant notification for automated employment decision tools, including those that utilize sentiment or psychographic profiling.
Impact: Set a U.S. precedent for mandatory algorithmic transparency in hiring.
Citation: Hilliard, 2023
As these examples show, legal and regulatory engagement with cognitive privacy is still in its early stages but is gaining traction. Together, these cases highlight a growing willingness by governments to address the ethical, legal, and civil liberties implications of emotion AI, neural surveillance, and behavioral inference, often in the absence of comprehensive national frameworks. The following section proposes a framework for aligning legal and regulatory safeguards with these emerging risks.
🛡️ Toward a Framework for Cognitive Privacy
Safeguarding cognitive privacy requires more than patchwork reforms. It demands a systemic reimagining of how laws, regulations, and technologies interact with the human mind. AI systems will increasingly infer, predict, and manipulate mental and emotional states. Consequently, legal experts, policymakers, and regulators must move beyond legacy data privacy and protection models to construct a forward-facing legal and regulatory architecture grounded in cognitive liberty, mental autonomy, and psychological integrity.
⚖️ A robust framework for cognitive privacy should rest on five interlocking pillars:
📁Auditability and Accuracy Requirements: Ensure Scientific and Ethical Validity
Cognitive-inference tools, especially those used in hiring, education, or mental health, must be subject to independent testing, bias auditing, and scientific validation (Schwartz et al., 2022). The opaque and probabilistic nature of many emotion AI or sentiment analysis systems requires reproducible standards of accuracy and fairness (Schwartz et al., 2022). Such validation is essential to prevent algorithmic harm and institutional misuse (Rigotti & Fosch-Villaronga, 2024). A minimal audit sketch appears after the final pillar below.
🛍️Definition Expansion: Reclassify Cognitive and Inferred Mental Data
Laws must explicitly recognize inferred mental states, neural signals, and psychological profiles as categories of sensitive personal data. This type of information is often generated without user input but can reveal deeply intimate insights. They can range from mood and attention span to ideological leanings and emotional vulnerability. Clear legal and regulatory recognition is foundational to ensuring transparency, accountability, and enforceability (Ienca & Andorno, 2017).
📁Informed Consent Redesign: Make the Invisible Visible
Consent must evolve beyond static checkboxes or privacy notices. In cognitive data contexts, meaningful consent requires real-time transparency, plain-language disclosures, and context-aware opt-out options (Szoszkiewicz & Yuste, 2025). Meaningful consent is especially essential when mental inferences are generated from seemingly innocuous behaviors (Saksena et al., 2021). AI systems that analyze emotional tone, neural feedback, or personality cues must disclose the scope, purpose, and consequences of their analyses before deployment; a sketch of one possible disclosure record appears after the final pillar.
🌐Global Cooperation: Harmonize Protections Across Jurisdictions
Cognitive data flows transcend national borders. Without international alignment, weak laws and regulations in one region can become loopholes in another. Efforts by the OECD, UNESCO, and emerging coalitions, such as the Global Partnership on AI (OECD, 2025), could be leveraged to develop cross-border standards. These efforts should include shared definitions, redress mechanisms, and regulatory oversight for cognitive privacy and neurotechnology (Ramanathan, 2025).
📁Purpose Limitation and Prohibition: Protect Mental Boundaries in High-Stakes Contexts
Specific uses of cognitive inference, such as predicting employee loyalty, assessing criminal risk, or influencing voter behavior, should be classified as high‑risk or outright prohibited. Just as human rights frameworks restrict intrusive surveillance in sensitive domains, mental data should not be commodified or weaponized in contexts of significant power asymmetry. Article 5 of the EU AI Act explicitly prohibits the inference of emotions or intentions in workplaces or educational settings (Future of Life Institute, 2025). Moreover, neuroprivacy experts argue that mental data governance must uphold cognitive liberty and privacy akin to fundamental human rights (Ienca & Andorno, 2017; Mineo, 2023).
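To ground the auditability pillar, the sketch below shows one simple form an independent audit could take: comparing the rate at which a hypothetical emotion-inference tool labels members of different groups as "engaged" and reporting the resulting impact ratio. The records, group names, and 0.8 review threshold are assumptions for illustration, loosely echoing the impact-ratio audits mandated by NYC Local Law 144, not a legal compliance test.

```python
# Illustrative bias-audit sketch for a hypothetical emotion-inference tool:
# compare the rate at which each group is labeled "engaged" and report the
# impact ratio. The records and the 0.8 threshold are assumptions for
# demonstration, not a legal compliance test.

from collections import defaultdict

records = [  # (group, predicted_label): hypothetical audit sample
    ("group_a", "engaged"), ("group_a", "engaged"), ("group_a", "not_engaged"),
    ("group_a", "engaged"), ("group_b", "not_engaged"), ("group_b", "engaged"),
    ("group_b", "not_engaged"), ("group_b", "not_engaged"),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, label in records:
    totals[group] += 1
    positives[group] += (label == "engaged")

rates = {g: positives[g] / totals[g] for g in totals}
impact_ratio = min(rates.values()) / max(rates.values())

for g, r in rates.items():
    print(f"{g}: selection rate {r:.2f}")
verdict = "review for bias" if impact_ratio < 0.8 else "within illustrative threshold"
print(f"impact ratio: {impact_ratio:.2f} ({verdict})")
```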
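For the informed consent pillar, the sketch below illustrates one possible way to represent a plain-language, purpose-bound disclosure and to block inference until an explicit, revocable opt-in is recorded. The field names and flow are hypothetical design choices, not an existing standard or API.

```python
# Hypothetical sketch of a consent gate for cognitive inference: the
# disclosure fields and the opt-in check are illustrative design choices,
# not an existing standard.

from dataclasses import dataclass

@dataclass
class CognitiveDataDisclosure:
    purpose: str             # why the inference is made
    signals: list[str]       # raw signals to be analyzed
    inferences: list[str]    # mental states that may be inferred
    retention_days: int      # how long results are kept
    consequences: str        # how results may affect the person
    opted_in: bool = False   # explicit, revocable consent flag

    def summary(self) -> str:
        """Plain-language disclosure shown before any analysis runs."""
        return (f"Purpose: {self.purpose}\n"
                f"Signals analyzed: {', '.join(self.signals)}\n"
                f"Possible inferences: {', '.join(self.inferences)}\n"
                f"Retention: {self.retention_days} days\n"
                f"Possible consequences: {self.consequences}")

def run_inference(d: CognitiveDataDisclosure) -> None:
    if not d.opted_in:
        raise PermissionError("No valid opt-in: cognitive inference blocked.")
    print("Inference runs here, limited to the disclosed purpose.")

disclosure = CognitiveDataDisclosure(
    purpose="Estimate meeting engagement",
    signals=["webcam facial cues", "voice tone"],
    inferences=["attention level", "emotional valence"],
    retention_days=30,
    consequences="May be included in a team engagement report",
)
print(disclosure.summary())
# run_inference(disclosure)  # would raise until the person explicitly opts in
```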
🌐A Call to Action
Cognitive privacy is not just a technological or regulatory challenge. It is a civilizational one. Protecting the sanctity of thought, the right to emotional opacity, and the freedom to be mentally unobserved must become cornerstones of AI governance. Only by acting now can we ensure that future innovation honors the full dignity of the human mind.
🔐 Conclusion: Securing the Sanctity of the Mind in the Age of AI
Cognitive privacy is no longer a theoretical or futuristic concern. It is the frontline of our digital rights struggle. In a world where algorithms can infer attention, detect emotions, and manipulate perception, the human mind has become the final frontier of surveillance and commodification. What was once intangible is now increasingly accessible to AI systems. Nevertheless, global laws and regulations remain largely silent.
We stand at a historic inflection point. Just as earlier generations codified protections for physical integrity and informational privacy, ours must champion the right to cognitive privacy and mental integrity. Protecting the freedom to think, to feel, and to be unobserved in one's inner life must become a cornerstone of digital dignity.
The path forward is neither easy nor optional. As this article has shown, global frameworks remain fragmented, U.S. laws are uneven, and the ethical implications of neuro-AI systems are only beginning to emerge. Meanwhile, the tools of cognitive surveillance are already being deployed in schools, workplaces, courts, and clinics.
To address this growing power asymmetry between individuals and cognitive technologies, the world must move from passive awareness to proactive governance. This means reclassifying mental data as sensitive, demanding transparency and accuracy in emotional AI, outlawing high-risk cognitive profiling, and building international standards that treat the mind as a protected space, not a marketable asset.
We must resist the normalization of technologies that reduce thought to data and treat emotion as input for optimization. The mind is not code to be rewritten. It is the seat of autonomy, identity, and dignity. Recognizing this is not just a matter of legal reform, but a cultural and ethical awakening.
The question is no longer whether cognitive privacy can be protected; it can be. The question is whether we, as a society, will have the courage to defend it. The decision will not only shape the future of data governance. It will define what it means to be human in the age of artificial intelligence.
🌐 Questions for Key Stakeholders
❓For Civil Society and Advocacy Groups:
How can public awareness of cognitive privacy risks be increased?
What best practices exist for defending mental autonomy in digital environments?
How should civil society engage in regulatory and standards-setting discussions?
❓For Companies and Developers:
Are our AI systems collecting or inferring cognitive data, even indirectly?
What transparency and disclosure mechanisms are in place for end users?
How do we audit and verify the accuracy and fairness of cognitive inferences?
❓For Legal and Ethics Professionals:
How can cognitive privacy be embedded into impact assessments and risk frameworks?
What legal recourse exists for individuals harmed by inaccurate or intrusive cognitive profiling?
Are current consent models sufficient to handle passive or inferred cognitive data?
❓For Policymakers and Regulators:
How should cognitive data be defined and categorized in law?
What thresholds or criteria justify classifying inferred mental states as sensitive personal information?
Should cognitive inference be explicitly restricted in certain sectors (e.g., criminal justice, employment)?
🌐References
1. Adolphus, E.D. (2025, July 3). Elon Musk puts his AI company’s employees under surveillance. The Daily Beast. https://www.thedailybeast.com/elon-musk-puts-his-ai-companys-employees-under-surveillance/
2. Becher, B., & Glover, E. (2025, June 12). Brain-Computer Interfaces (BCI): explained. Builtin. https://builtin.com/hardware/brain-computer-interface-bci
3. Binns, R. (2017, May 4). Algorithmic accountability and public reasoning. Philosophy & Technology, 31, 543–556. https://doi.org/10.1007/s13347-017-0263-5
4. Brey, P., & Dainow, B. (2023, September 21). Ethics by design for artificial intelligence. AI Ethics, 4, 1265-1277. https://doi.org/10.1007/s43681-023-00330-4
5. Bustamante, C.M., Alama-Maruta, K., Ng, C., & Coppersmith, D.D.L. (2022, August 29). Should machines be allowed to “read our minds?” Uses and regulation of biometric techniques that attempt to infer mental states. MIT Science Policy Review. https://sciencepolicyreview.pubpub.org/pub/ogade56r
6. California Department of Justice. (2025). Legal advisory: California attorney general’s legal advisory on the application of existing laws to artificial intelligence. Office of the Attorney General. https://oag.ca.gov/system/files/attachments/press-docs/Legal%20Advisory%20-%20Application%20of%20Existing%20CA%20Laws%20to%20Artificial%20Intelligence.pdf
7. Canter, L., & Ponder, J. (2025, January 3). Inside privacy: State attorneys general issue guidance on privacy & artificial intelligence. Covington. https://www.insideprivacy.com/state-privacy/state-attorneys-general-issue-guidance-on-privacy-artificial-intelligence/
8. Cooley. (2025a, May 13). Montana on the brain: A bold step for neural privacy. https://www.cooley.com/news/insight/2025/2025-04-22-montana-on-the-brain-a-bold-step-for-neural-privacy
9. Cooley. (2025b, March 13). Unlocking neural privacy: The legal and ethical frontiers of neural data. https://www.cooley.com/news/insight/2025/2025-03-13-unlocking-neural-privacy-the-legal-and-ethical-frontiers-of-neural-data
10. Covington & Burling LLP. (2024, December 9). Key votes expected on Brazil’s artificial intelligence legal framework and cybersecurity constitutional amendment. https://www.cov.com/en/news-and-insights/insights/2024/12/key-votes-expected-on-brazils-artificial-intelligence-legal-framework-and-cybersecurity-constitutional-amendment
11. Cox, K. (2024, March 10). The ethics and privacy concerns of employee monitoring: Insights from data privacy expert Ken Cox. Cyber Defense Magazine. https://www.cyberdefensemagazine.com/the-ethics-and-privacy-concerns-of-employee-monitoring-insights-from-data-privacy-expert-ken-cox/
12. Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t
13. De Boer, C., Ghomrawi, H., Zeineddin, S., Linton, S., Kwon, S., & Abdulla, F. (2023, March 14). A call to expand the scope of digital phenotyping. Journal of Medical Internet Research, 25. https://doi.org/10.2196/39546
14. Der Belen, C.V. (2025, March 12). AI & the workplace: Navigating prohibited AI practices in the EU. Bird & Bird. https://www.twobirds.com/en/insights/2025/global/ai-and-the-workplace-navigating-prohibited-ai-practices-in-the-eu
15. De Valle, G. (2025, April 28). Neurotech companies are selling your brain data, senators warn. The Verge. https://www.theverge.com/policy/657202/ftc-letter-senators-neurotech-companies-brain-computer-interface
16. Dilmegani, C., & Arslan, E. (2025, July 3). Affective computing: In-depth guide to emotion AI in 2025. AIMultiple. https://research.aimultiple.com/affective-computing/
17. Do, B., Badillo, M., Cantz, R., & Spivack, J. (2024, March 20). Privacy and the rise of “neurorights” in Latin America. Future of Privacy Forum. https://fpf.org/blog/privacy-and-the-rise-of-neurorights-in-latin-america/
18. European Commission. (2024). EU Artificial Intelligence Act (Regulation 2024/1689). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
19. European Parliament. (2024, July). The protection of mental privacy in the area of neuroscience: Societal, legal, and ethical challenges. European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/STUD/2024/757807/EPRS_STU(2024)757807_EN.pdf
20. European Union. (2016). General Data Protection Regulation (GDPR), Regulation (EU) 2016/679. https://eur-lex.europa.eu/eli/reg/2016/679/oj
21. Francis, J. (2024, November). Anatomy of state comprehensive privacy law: Surveying the state privacy law landscape and recent legislative trends. Future of Privacy Forum. https://fpf.org/wp-content/uploads/2024/11/REPORT-Anatomy-of-State-Comprehensive-Privacy-Law.pdf
22. Freedman, M. (2024, October 3). Spying on your employees: Better understand the law first. Business News Daily. https://www.businessnewsdaily.com/6685-employee-monitoring-privacy.html
23. Future of Life Institute. (2025). EU AI Act: Article 5 – Prohibited AI practices. https://artificialintelligenceact.eu/article/5/
24. Garg, I. (2022, January 7). The time is now for a “Neuro-Rights” law in India. Vidhi Centre for Legal Policy. https://vidhilegalpolicy.in/blog/the-time-is-now-for-a-neuro-rights-law-in-india/
25. Gregg, B. (2024, April 2). Federal Trade Commission expresses concerns about artificial intelligence monitoring employees. Boardman Clark. https://www.boardmanclark.com/publications/labor-employment-update/federal-trade-commission-expresses-concerns-about-artificial-intelligence-monitoring-employees
26. Healy, R. (2024). AI and Data Privacy in Singapore: Navigating PDPA compliance for responsible innovation. Formiti. https://formiti.com/ai-and-data-privacy-in-singapore-navigating-pdpa-compliance-for-responsible-innovation/
27. Hilliard, A. (2023, October 19). What is New York City local law 144? Holistic AI. https://www.holisticai.com/blog/new-nyc-local-law-144
28. Hunton. (2024, October 2). California amends CCPA to cover neural data and clarify scope of personal data. https://www.hunton.com/privacy-and-information-security-law/california-amends-ccpa-to-cover-neural-data-and-clarify-scope-of-personal-information
29. Ienca, M., & Andorno, R. (2017, April 26). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(5). https://doi.org/10.1186/s40504-017-0050-1
30. IBM. (2023, August 24). What is sentiment analysis? https://www.ibm.com/think/topics/sentiment-analysis
31. International Trade Administration. (2025). South Korea artificial intelligence (AI) basic act. https://www.trade.gov/market-intelligence/south-korea-artificial-intelligence-ai-basic-act
32. James, D. (2025, June 27). Connecticut amends privacy law: New rules for sensitive data, profiling, and consumer rights take effect July 1, 2025. Marashlian & Donahue. https://commlawgroup.com/2025/connecticut-amends-privacy-law-new-rules-for-sensitive-data-profiling-and-consumer-rights-take-effect-july-1-2025/
33. Kelly, G. (2025, February 17). Neural data privacy. Delaware Division of Legal Services Issues Brief. https://legis.delaware.gov/docs/default-source/publications/issuebriefs/neuraldataprivacyissuebrief.pdf?sfvrsn=10df8838_1
34. Khattar, P. (2024, November 19). Neural data and consumer privacy: California’s new frontier in data protection and neurorights. TechPolicy.Press. https://www.techpolicy.press/neural-data-and-consumer-privacy-californias-new-frontier-in-data-protection-and-neurorights/
35. Kim & Chang. (2023, March 2). Amendment to the PIPA passed by the National Assembly. https://www.kimchang.com/en/insights/detail.kc?sch_section=4&idx=26837
36. Klosek, J., & Tene, O. (2024, September 25). Colorado’s neural privacy law is a game-changer for tech. Goodwin. https://www.goodwinlaw.com/en/insights/publications/2024/09/insights-technology-dpc-colorados-neural-privacy-law-is-a-game
37. Lee & Ko. (2024, May 21). Concretizing rights of data subjects in the AI era: Implementation of the second amendment to the enforcement decree of PIPA. Asia Law. https://www.asialaw.com/NewsAndAnalysis/concretizing-rights-of-data-subjects-in-the-ai-era-implementation-of-the-second/Index/1982
38. Magee, P., Ienca, M., & Farahany, N. (2024, September 25). Beyond neural data: Cognitive biometrics and mental privacy. Neuron, 112(18), 3017-3028. https://doi.org/10.1016/j.neuron.2024.09.004
39. Malgieri, G., & Custers, B. (2018, April). Pricing privacy: The right to know the value of your personal data. Computer Law & Security Review, 34(2), 289–303. https://doi.org/10.1016/j.clsr.2017.08.006
40. Mantello, P., Tung-Ho, M., Nguyen, M.T., & Vuong, Q.H. (2023, July 19). Machines that feel: Behavioral determinants of attitude towards affect recognition technology – upgrading technology acceptance theory with the mindsponge model. Humanities and Social Sciences Communications. https://doi.org/10.1057/s41599-023-01837-1
41. Mineo, L. (2023, April 26). Fighting for our cognitive liberty. The Harvard Gazette. https://news.harvard.edu/gazette/story/2023/04/we-should-be-fighting-for-our-cognitive-liberty-says-ethics-expert/
42. Muller, O., & Rotter, S. (2017, December 13). Neurotechnology: Current developments and ethical issues. Frontiers in Systems Neuroscience, 11(13). https://doi.org/10.3389/fnsys.2017.00093
43. Neurorights Foundation. (2023). Chile leads on neurorights implementation. https://neurorightsfoundation.org/chile
44. Neurons. (2025). Neuromarketing: Definition, techniques, examples, pros & cons, tools. https://www.neuronsinc.com/neuromarketing#:~:text=What%20Are%20the%20Available%20Neuromarketing,software%20play%20a%20crucial%20role
45. OECD. (2025). Global Partnership on Artificial Intelligence. https://www.oecd.org/en/about/programmes/global-partnership-on-artificial-intelligence.html
46. OECD.AI. (2024, May). OECD AI principles overview. https://oecd.ai/en/ai-principles
47. Oudin, A., Maatoug, R., Bourla, A., Ferreri, F., Bonnot, O., Millet, B., Schoeller, F., Mouchabac, S., & Adrien, V. (2023, October 4). Digital phenotyping: Data-driven psychiatry to redefine mental health. Journal of Medical Internet Research. https://doi.org/10.2196/44502
48. Ramanathan, M. (2025, April 26). The UNESCO draft recommendations on ethics of neurotechnology – A commentary. Indian Journal of Medical Ethics. https://www.doi.org/10.20529/IJME.2025.028
49. Rashid, A., & Peters, R. (2024, December). Report of the generative AI and natural language processing task force. State of Illinois Generative AI and Natural Language Processing Task Force. https://doit.illinois.gov/content/dam/soi/en/web/doit/meetings/ai-taskforce/reports/2024-gen-ai-task-force-report.pdf
50. Reuter, E. (2025, April 30). Senators call for investigation into BCI privacy. MedTechDive. https://www.medtechdive.com/news/senators-bci-brain-computer-privacy-ftc/746733/
51. Rigotti, C., & Fosch-Villaronga, E. (2024, July). Fairness, AI, and recruitment. Computer Law & Security Review, 53, 105966. https://doi.org/10.1016/j.clsr.2024.105966
52. Rosenblum, E. (2024, December 24). What you should know about how Oregon’s laws may affect your company’s use of artificial intelligence. Oregon Department of Justice. https://www.doj.state.or.us/wp-content/uploads/2024/12/AI-Guidance-12-24-24.pdf
53. Schiliro, F., Moustafa, N., & Beheshti, A. (2020, December 14-16). Cognitive privacy: AI-enabled privacy using EEG signals in the internet of things [Conference session] 2020 IEEE 6th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application, Virtual. https://doi.org/10.1109/DependSys51298.2020
54. Singapore PDPC. (2023). Model AI Governance Framework and PDPA guidance. https://www.pdpc.gov.sg
55. Somers, M. (2019, March 8). Emotion AI, explained. MIT Management Sloan School. https://mitsloan.mit.edu/ideas-made-to-matter/emotion-ai-explained
56. Schwartz, R., Vassilev, A., Greene, L., & Burt, A. (2022, March). NIST Special Publication 1270: Towards a standard for identifying and managing bias in artificial intelligence. National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1270
57. Szoszkiewicz, L., & Yuste, R. (2025, June 25). Mental privacy: Navigating risks, rights and regulation. EMBO Reports. https://doi.org/10.1038/s44319-025-00505-6
58. UNESCO. (2023, October 13). Chile: Pioneering the protection of neurorights. https://courier.unesco.org/en/articles/chile-pioneering-protection-neurorights
59. UNESCO. (2022). Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
60. Tene, O., & Klosek, J. (2024, September 24). What’s in Colorado’s 1st-of-its-kind neural privacy law (Law360). Goodwin. https://www.goodwinlaw.com/en/news-and-events/news/2024/09/announcements-technology-dpc-whats-in-colorados-1st-of-its-kind
61. The Neurorights Foundation. (2023). Neurorights in Chile. The UNESCO Courier. https://neurorightsfoundation.org/chile
62. U.S. Senate Committee on Commerce, Science, and Transportation. (2025, April 28). Cantwell, Schumer, Markey call on FTC to protect consumers’ neural data. https://www.commerce.senate.gov/2025/4/cantwell-schumer-markey-call-on-ftc-to-protect-consumers-neural-data
63. Wall, N., Reynolds, M., Jette, R., Himo, J., Nickerson, L., & Choudhry, M. (2025, April). Looking ahead: The Canadian privacy and AI landscape without Bill C-27. Canadian Privacy Law Review, 22(5). https://mcmillan.ca/wp-content/uploads/2025/04/CPLR_V22R5_Final_Secured_x.pdf
64. Werner, J. (2025, January 22). Canadian AI bill stalls as bill C-27 terminates in parliament. Babl. https://babl.ai/canadian-ai-bill-stalls-as-bill-c-27-terminates-in-parliament/
65. Wright, J. (2020, September 1). Suspect AI: Vibraimage, emotion recognition technology, and algorithmic opacity. Computers and Society. https://doi.org/10.48550/arXiv.2009.00502
66. Yang, H. & Jiang, H. (2025, March 28). Regulating neural processing in the age of BCIs: Ethical concerns and legal approaches. Digital Health. https://doi.org/10.1177/20552076251326123
67. Zanatta, A.F., & Rielli, M. (2024, December 10). The artificial intelligence legislation in Brazil: Analysis of the text to be voted on in the federal senate plenary. Data PrivacyBR Research. https://www.dataprivacybr.org/en/the-artificial-intelligence-legislation-in-brazil-technical-analysis-of-the-text-to-be-voted-on-in-the-federal-senate-plenary/