High-Risk AI in Healthcare: Conformity Assessment in Practice

Why Healthcare AI Is High-Risk
Artificial intelligence systems used in healthcare are among the most heavily regulated under the EU AI Act. An AI system that is itself a medical device, or a safety component of one, and is subject to third-party conformity assessment under the Medical Device Regulation (MDR), is classified as high-risk under Article 6(1) in conjunction with Annex I; certain other healthcare uses, such as the triage of patients in emergency care, are listed directly in Annex III. AI intended to influence clinical decisions about individual patients will generally fall into one of these categories.
This includes:
- Clinical decision support tools
- AI-assisted diagnostics (imaging analysis, pathology)
- Risk stratification and early warning systems
- Medication dosing recommendation engines
The Conformity Assessment Process
For high-risk healthcare AI, providers must demonstrate conformity before placing the system on the market or putting it into service. The process involves:
1. Technical Documentation
Prepare a comprehensive technical file covering:
- System architecture and training methodology
- Dataset characteristics (including bias analysis)
- Performance metrics and validation results
- Intended purpose and use conditions
2. Risk Management
Implement a continuous risk management system following ISO 14971 principles adapted for AI. Risks must be evaluated and mitigated throughout the system lifecycle, not just at launch.
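To make the lifecycle idea concrete, here is a minimal sketch of an ISO 14971-style risk register entry, where risk level is estimated as severity times probability and each hazard is re-evaluated after mitigation. The scoring scales, the acceptance threshold, and the example hazard are illustrative assumptions, not values prescribed by the standard or the AI Act.

```python
# Hypothetical risk scoring: 1-5 severity x 1-5 probability, with an
# assumed acceptance threshold. Real programmes define their own scales
# and acceptance criteria in the risk management plan.

ACCEPTABLE = 6  # assumed threshold on the resulting 1-25 scale

def risk_level(severity: int, probability: int) -> int:
    """Combine severity and probability scores into a single risk level."""
    return severity * probability

# Illustrative hazard: a clinical alert the model fails to raise.
hazard = {"id": "HZ-01", "description": "missed sepsis alert",
          "severity": 5, "probability": 3}

before = risk_level(hazard["severity"], hazard["probability"])  # 15: above threshold
hazard["probability"] = 1  # after adding a clinician review step as mitigation
after = risk_level(hazard["severity"], hazard["probability"])   # 5: now acceptable
```

The point of re-computing the level after mitigation is that the register stays live: each deployment change or new monitoring finding feeds back into the same evaluation.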
3. Data Governance
Training, validation, and test datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. Bias monitoring must be ongoing, not a one-off exercise.
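One way to operationalise ongoing bias monitoring is to compare a performance metric across patient subgroups and flag any group that trails the overall figure by more than a chosen tolerance. The sketch below assumes made-up subgroup names, sensitivity scores, and a tolerance; none of these values come from the Act.

```python
# Hypothetical subgroup bias check: flag groups whose metric falls more
# than `tolerance` below the mean across groups. Numbers are illustrative.

def flag_underperforming_groups(scores_by_group, tolerance=0.05):
    """Return subgroups whose metric trails the across-group mean by > tolerance."""
    overall = sum(scores_by_group.values()) / len(scores_by_group)
    return {
        group: score
        for group, score in scores_by_group.items()
        if overall - score > tolerance
    }

# Example: sensitivity of a diagnostic model per age band (made-up numbers).
sensitivity = {"18-40": 0.91, "41-65": 0.89, "66+": 0.78}
flagged = flag_underperforming_groups(sensitivity, tolerance=0.05)
# flagged -> {"66+": 0.78}
```

A flagged subgroup would then feed back into the risk management process: investigate whether the gap reflects dataset under-representation, and document the finding and any corrective action.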
4. Post-Market Monitoring
Establish a post-market monitoring plan to detect issues after deployment, including an incident reporting process for serious incidents.
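A simple building block for such a plan is a rolling error-rate monitor that raises an alert once production performance breaches a pre-agreed threshold, triggering the provider's incident review process. The window size, threshold, and minimum sample count below are illustrative assumptions a real plan would set deliberately.

```python
from collections import deque

# Hypothetical post-market monitor: track errors over a rolling window
# and alert when the error rate exceeds an agreed threshold.

class ErrorRateMonitor:
    def __init__(self, window=500, threshold=0.02, min_samples=50):
        self.outcomes = deque(maxlen=window)  # True marks an erroneous prediction
        self.threshold = threshold
        self.min_samples = min_samples

    def record(self, is_error) -> bool:
        """Record one prediction outcome; return True when the alert fires."""
        self.outcomes.append(bool(is_error))
        if len(self.outcomes) < self.min_samples:
            return False  # too little data for a stable estimate
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.threshold

# Simulated stream: every tenth prediction is wrong (10% error rate),
# comfortably above the assumed 5% alert threshold.
monitor = ErrorRateMonitor(window=100, threshold=0.05, min_samples=50)
alerts = [monitor.record(i % 10 == 0) for i in range(100)]
```

An alert here would not itself be a "serious incident" in the Act's sense; it is the internal signal that starts the investigation which may lead to regulatory reporting.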
Working with Notified Bodies
Certain high-risk healthcare AI systems — in particular those also regulated as Class IIa or higher medical devices under the MDR — require assessment by an EU notified body. Early engagement with your notified body is strongly recommended, as assessment capacity is limited and lead times can be long.
AICI's healthcare compliance programme guides organisations through every stage of this process.