EQTY Lab

Verifiable AI Neuroscience Research with MedARC, Stability AI, and Princeton

Mon 16 Dec 2024

Key Takeaways:

  • MindEye2, a collaboration between MedARC and Princeton University, advances AI brain-computer interfaces and underscores the need for new trust standards as these technologies move into sensitive, large-scale medical use.

  • EQTY Lab's Verifiable Compute framework adds new cryptographic guarantees to MindEye2 that certify the model build, provenance, and integrity of the research while adding new layers of privacy.

  • Key integrations enable compliance with emerging regulatory requirements for AI governance.

MindEye2 (hereafter "MindEye") builds on the original MindEye research.

Bridging Neuroscience and AI

MedARC works at the intersection of neuroscience and artificial intelligence in collaboration with Stability AI and Princeton University. Together with EQTY Lab, the research teams added Verifiable Compute to provide an essential new layer of integrity for neuroAI.

MindEye demonstrates remarkable capabilities in reconstructing visual perception from fMRI brain data. As this research proves the viability of processing real individuals' brain data, establishing rigorous frameworks for trust and verification becomes crucial for transitioning from laboratory research to real-world applications. 

Advancing Brain-Computer Interfaces: The MindEye Project

MedARC's MindEye project is led by founders Dr. Tanishq Mathew Abraham, Research Director at Stability AI, and Dr. Paul Scotti, Head of NeuroAI at Stability AI. The project aims to give researchers the ability to peer into the minds of patients by reconstructing visual perception from brain activity captured through fMRI scans. This achievement opens new possibilities for understanding how our brains process and store visual information, with potential applications ranging from medical research to novel forms of human-computer interaction.

Figure 4 (MindEye2 results, https://medarc-ai.github.io/mindeye2/#results): Reconstructions from different model approaches using 1 hour of training data from NSD (arXiv:2403.11207).

However, the sensitive nature of brain data presents unique challenges. As this technology moves toward real-world applications, it will process some of our most personal information: our thoughts and neural patterns. This raises critical questions about data privacy, security, and ethics. The complexity of this research tests the boundaries of traditional research governance and integrity methods, which may fall short when such sensitive information flows through AI systems.

Building Trust Through Verifiable Compute

EQTY Lab's Verifiable Compute technology, built with Intel and NVIDIA, addresses these challenges by providing end-to-end verification of the latest MindEye2 algorithm's complex pipeline. The system produces comprehensive attestations that document every step of the research workflow, making complex AI processes transparent and accountable.

"The promise of brain-computer interfaces can only be realized if individuals can trust that their brain data will be used under their control in a responsible manner and with proper safeguards. Verifiable Compute provides new cryptographic tools that can help build this essential trust."

Dr. Tanishq Mathew Abraham, Founder of MedARC and Research Director at Stability AI

Starting from the preprocessed voxel data provided by the Natural Scenes Dataset authors, each subsequent stage of MindEye’s workflow through to final image generation is captured by EQTY’s Integrity Fabric, which creates cryptographic attestations documenting the entire research process.

These attestations are automatically generated through Verifiable Compute, which runs the research workloads in secure Confidential Virtual Machines (VMs) on 5th Gen Intel® Xeon® Processors with Intel® Trust Domain Extensions (Intel® TDX), creating an end-to-end trust zone that extends to the high-performance NVIDIA H100 and H200 GPUs.
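To make the attestation concept concrete, here is a minimal, hypothetical sketch of what a per-stage record could contain: digests of the stage's inputs, outputs, and parameters, bound together by a signature. The function names, file names, and parameters are illustrative assumptions, not EQTY Lab's actual Integrity Fabric API, and in practice the signing key would be rooted in the confidential computing environment rather than generated ad hoc.

```python
# Minimal illustrative sketch of a signed per-stage attestation record.
# Function names, file names, and parameters are hypothetical, not EQTY Lab's API.
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_file(path: str) -> str:
    """SHA-256 digest of an artifact (e.g. preprocessed voxels or a checkpoint)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def make_attestation(stage: str, inputs: dict, outputs: dict,
                     params: dict, key: Ed25519PrivateKey) -> dict:
    """Bind one pipeline stage to digests of its inputs, outputs, and parameters."""
    body = {
        "stage": stage,
        "timestamp": time.time(),
        "inputs":  {name: sha256_file(p) for name, p in inputs.items()},
        "outputs": {name: sha256_file(p) for name, p in outputs.items()},
        "params":  params,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "signature": key.sign(payload).hex()}

# Hypothetical usage; in practice the key would be tied to the TDX trust zone.
key = Ed25519PrivateKey.generate()
attestation = make_attestation(
    stage="fmri_to_clip_mapping",
    inputs={"voxels": "nsd_subj01_voxels.npy", "weights": "mindeye2.ckpt"},
    outputs={"latents": "clip_image_latents.npy"},
    params={"epochs": 150, "learning_rate": 3e-4},
    key=key,
)
```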

EQTY’s Lineage Explorer transforms these cryptographic proofs into an interactive visualization that maps MindEye's research pipeline. This allows researchers to track how brain activity data flows through various processing stages, including data preprocessing, model training, and image reconstruction. Every computational step, parameter choice, and data transformation is documented with tamper-proof cryptographic signatures, ensuring the integrity of the research process.

In practice, when MindEye maps fMRI data to CLIP image space, Verifiable Compute infrastructure creates attestations proving that the mapping occurred within a secure enclave, used the specified algorithms and parameters, and produced verifiable outputs. The Lineage Explorer then visualizes these attestations, allowing researchers to trace exactly how brain activity patterns were transformed into the latent representations used for image generation.
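As a rough illustration of how such a trace could be checked, the sketch below verifies a list of attestation records shaped like the hypothetical make_attestation() output above: each record's signature is validated, and every input digest is matched against the outputs of earlier stages to reconstruct the lineage. This is a simplified assumption about how lineage tracing could work, not the Lineage Explorer's actual implementation.

```python
# Illustrative sketch only: checking signatures and reconstructing lineage from
# attestation records shaped like the hypothetical sketch above (not EQTY's API).
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_lineage(attestations: list[dict], public_key: Ed25519PublicKey) -> bool:
    """Return True if every record is authentic; print which stage fed which."""
    produced_by = {}  # artifact digest -> stage that produced it
    for att in attestations:
        payload = json.dumps(att["body"], sort_keys=True).encode()
        try:
            public_key.verify(bytes.fromhex(att["signature"]), payload)
        except InvalidSignature:
            print(f"tampered or forged record: {att['body']['stage']}")
            return False
        for name, digest in att["body"]["inputs"].items():
            if digest in produced_by:  # this stage consumed an earlier stage's output
                print(f"{produced_by[digest]} -> {att['body']['stage']} ({name})")
        for digest in att["body"]["outputs"].values():
            produced_by[digest] = att["body"]["stage"]
    return True
```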

Critically, Verifiable Compute operationalizes trust by embedding governance into the development process. This allows downstream users—such as hospitals relying on models like MindEye2 for diagnostic imaging—to verify the authenticity and integrity of their deployed systems. 

By combining cryptographic provenance with host-level integrity, Verifiable Compute goes beyond securing the model itself; it creates a transparent framework that productionizes trust at every stage, ensuring that AI models can be deployed safely in environments where accuracy and accountability are paramount. 

Meeting Tomorrow's AI Standards Today

As AI research grows more complex and consequential, traditional verification methods no longer suffice. EQTY Lab's comprehensive approach addresses the key requirements for responsible AI development:

  • Research Explainability is achieved through EQTY’s Integrity Fabric and Lineage Explorer, which provide clear, cryptographic attestations documenting every step of the AI workflow. Unlike traditional documentation that can leave gaps in understanding, these tools create complete, verifiable records of the research process.

  • Beyond Basic Logging, the system creates tamper-proof cryptographic proofs that cannot be modified or manipulated. Traditional logs can be altered and often lack crucial details about the research process. Attestations protect evidence of how the research was conducted.

  • Verification Without Reproduction becomes possible through our cryptographic guarantees. As AI systems grow larger and more complex, the exact reproduction of results becomes increasingly difficult or impossible. Verifiable Compute provides trust through an alternative mechanism that is cheaper and faster.

  • Privacy Protection is built into the core architecture, ensuring sensitive data remains encrypted and protected while still enabling valuable research and verification.

  • Regulatory Compliance is assured through integration with hardware guarantees. In Germany, for example, this hardware root of trust already secures sensitive applications such as the national electronic health records system. This proactive approach ensures compliance with emerging standards for AI governance.

Mitigating Research Risks with Verifiable AI Builds

The training and deployment of complex models like MindEye highlight a spectrum of vulnerabilities that extend across the AI development lifecycle. At a foundational level, practical risks include versioning inconsistencies, where outdated or improperly tracked weights might lead to unintended behavior, and weak access controls that expose sensitive weights to unauthorized modification or misuse. This is particularly relevant for Stable Diffusion models, in which the generation process flows sequentially through a CLIP text encoder, a U-Net that performs iterative denoising, and a VAE decoder that converts the final denoised latent representation into the output image.
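One narrow, concrete mitigation for these versioning and weight-integrity risks is to pin every pipeline component to a known digest and refuse to load anything that drifts. The sketch below is a generic illustration under assumed file names and placeholder digests; it does not reflect how MindEye2 or Verifiable Compute is actually packaged.

```python
# Generic illustration: pin each pipeline component to a known digest and refuse
# to load weights that drift. File names and digests are placeholders.
import hashlib

PINNED_WEIGHTS = {
    "clip_text_encoder.safetensors": "<expected sha256 digest>",
    "unet.safetensors":              "<expected sha256 digest>",
    "vae_decoder.safetensors":       "<expected sha256 digest>",
}

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_weights(manifest: dict) -> None:
    """Raise before any component with unexpected weights enters the pipeline."""
    for path, expected in manifest.items():
        actual = sha256_file(path)
        if actual != expected:
            raise RuntimeError(f"integrity check failed for {path}: {actual}")

# Run before constructing the CLIP -> U-Net -> VAE generation pipeline.
check_weights(PINNED_WEIGHTS)
```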

These risks are exacerbated by the complex supply chains typical of research collaborations, which often involve multiple pre-trained models, proprietary datasets, distributed infrastructure, and multiple stakeholders from researchers around the world. The lack of integrated governance across these elements makes it difficult to ensure accountability or to verify that models have not been altered, whether maliciously or accidentally.

Real-world usage of the models with human brain data exponentially magnifies the risks as AI supply chains expand to include external or third-party components. For instance, tampering with pre-trained weights, datasets, or checkpoints can subtly embed backdoors or biased behaviors that evade standard validation. These modifications are particularly insidious, as they can be surgically targeted to affect only specific scenarios or outputs while maintaining normal behavior otherwise. Beyond tampering, emerging threats such as side-channel attacks—exploiting hardware vulnerabilities to extract sensitive data or model parameters during training or inference—represent the frontier of adversarial capabilities. These risks make the assurance of provenance not just a best practice but an operational necessity for high-stakes deployments, such as in healthcare or critical infrastructure.

To address these challenges, Verifiable Compute offers a comprehensive security solution built on the SLSA framework at Level 3. By cryptographically notarizing every component and action in the model lifecycle, it creates an immutable record of provenance while enforcing strict access controls and leveraging confidential computing technologies. 

This approach ensures that every action, from the inclusion of a pre-trained checkpoint to the fine-tuning of weights, is recorded and verifiable, enabling organizations to detect and address anomalies such as tampering or supply chain compromises while reducing the attack surface for both internal and external threats.
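For readers unfamiliar with SLSA, the following is a rough, hypothetical example of the kind of provenance statement such a build-level attestation can carry, loosely following the public in-toto/SLSA provenance layout. The field values and identifiers are placeholders and do not reflect EQTY Lab's actual schema.

```python
# Hypothetical provenance statement, loosely following the public in-toto/SLSA
# provenance layout. All identifiers and values below are placeholders.
provenance_statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {"name": "mindeye2_finetuned.ckpt", "digest": {"sha256": "<checkpoint digest>"}}
    ],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            "buildType": "https://example.org/verifiable-compute/training",  # placeholder
            "externalParameters": {"epochs": 150, "dataset": "NSD (preprocessed voxels)"},
            "resolvedDependencies": [
                {"uri": "<training code repository>", "digest": {"gitCommit": "<commit>"}},
                {"uri": "<pre-trained checkpoint>", "digest": {"sha256": "<digest>"}},
            ],
        },
        "runDetails": {
            "builder": {"id": "https://example.org/tdx-confidential-vm"},  # placeholder
            "metadata": {"invocationId": "<run id>"},
        },
    },
}
```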

Through these capabilities, Verifiable Compute not only mitigates today’s practical risks but also anticipates the demands of a future where AI systems must be robust against increasingly sophisticated adversarial threats.

The Road Ahead: Democratizing Verifiable AI Research

In Q1 2025, qualified researchers will be able to access this verification infrastructure by applying to the Foundry Institute as we work toward broader open-source availability. By making research verifiable and trustworthy while protecting individual privacy, we're building the foundation for a future where brain-computer interfaces can safely and ethically enhance human capabilities.

Join the New Era of Trusted AI

Schedule a demo
