BRIDGE

Call for Submissions

There is a surge of new AI models and methods, yet many are adopted in healthcare without a full assessment of their impact on patient safety and outcomes. Without alignment between innovation, evaluation, and regulation, these technologies risk remaining confined to research and never reaching clinical practice. We therefore need rigorous evaluation frameworks and effective regulatory processes to ensure AI in healthcare is safe, reliable, and deployment-ready.

If you are working on research related to AI safety, clinical deployment, or evaluation and regulatory frameworks, then:

BRIDGE is the right place for your work!

We invite the submission of papers on topics related to (but not limited to) the following:

  • Robust evaluation methods and regulatory frameworks for AI-enabled medical devices
  • Studies revealing disconnects between AI development, evaluation metrics, and regulatory requirements
  • Theoretical or empirical analyses of gaps in current medical-AI evaluation practices
  • Algorithmic approaches designed for regulatory alignment from the outset
  • Position or perspective papers on open problems, negative results, or flawed practices affecting patient safety
  • Evaluation and regulation of generative AI, LLMs, autonomous systems, and emerging technologies
  • Post-market monitoring strategies to ensure ongoing safety and effectiveness
  • Empirical assessments of AI readiness for real-world clinical deployment
  • Under-explored regulatory questions with direct patient-safety implications
  • Comparative studies of regulatory pathways for AI-based medical devices across regions (e.g., EU, US, Asia)
  • Perspectives on collaborative frameworks that facilitate global regulatory alignment and innovation-friendly evaluation
  • Frameworks for monitoring AI performance in the real world
  • Best practices for deploying continual learning systems under regulatory constraints
  • Studies addressing model drift, safety updates, and long-term monitoring strategies
  • Development of novel benchmarking tools, simulation platforms, or digital twins to support pre-market evaluation
  • Methods for aligning explainable AI (XAI) outputs with clinical interpretability and regulatory expectations
  • Studies on integrating clinical validation and end-user feedback in algorithm evaluation
  • Translational and lifecycle challenges, including case studies of research-to-clinic success or bottlenecks
  • Perspectives on early regulatory engagement strategies for startups, academics, and consortia
  • Comparative regulatory science across countries: lessons for global alignment

Proceedings

Accepted papers will be published in the MICCAI Workshops volume of Springer’s Lecture Notes in Computer Science (LNCS) series.

Paper Format & Submission

Submissions must be anonymized for double-blind review, use the official Springer LNCS format, and contain no more than 8 pages of main content plus up to 2 pages of references.
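For reference, a minimal anonymized skeleton using Springer's llncs LaTeX class might look like the sketch below; the title, section, and file names are placeholders, and the official templates from Springer remain the authoritative starting point.

    % Minimal anonymized LNCS skeleton (illustrative placeholders only).
    % Use the official Springer templates as the authoritative starting point.
    \documentclass{llncs}

    \begin{document}

    \title{Paper Title}
    % Omit real names and affiliations for double-blind review:
    \author{Anonymous Author(s)}
    \institute{Anonymous Institution}
    \maketitle

    \begin{abstract}
    Abstract text.
    \end{abstract}

    \section{Introduction}
    Main content: at most 8 pages, plus up to 2 pages of references.

    % splncs04 is the bibliography style shipped with the LNCS templates.
    \bibliographystyle{splncs04}
    \bibliography{references}

    \end{document}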

Submit your manuscript via the OpenReview Submission Site.

Submission Evaluation Criteria

  1. Relevance: Alignment with AI evaluation, regulatory science, or deployment challenges in healthcare.
  2. Clarity & Structure: Logical organization, clear writing, and accessibility to a multidisciplinary audience.
  3. Empirical Rigor: Robust measurement or validation of new or existing concepts (for empirical work).

All submissions undergo double-blind review: please omit author names, affiliations, and self-identifying references. Word and LaTeX templates for the official Springer LNCS format are provided.

Important Dates (Anywhere on Earth)

  • Full paper deadline: July 7, 2025 (extended from June 25, 2025)
  • Notification of acceptance: July 25, 2025 (extended from July 16, 2025)
  • Camera-ready deadline: July 30, 2025
  • Workshop date: TBA

Questions?

Reach us at BRIDGERegSci@gmail.com