MIT's AI risk database: What it is and how to use it

If you're a quality and regulatory professional in MedTech, managing AI-related risks is essential for safe and compliant device development.

MIT’s new AI Risk Repository organizes 777 documented AI risks into a simple framework built on two taxonomies: a Causal Taxonomy (how, when, and why a risk arises) and a Domain Taxonomy (what kind of risk it is).

It provides a structured way to identify, classify, and address these risks in AI-driven MedTech products.

Here’s how I leverage it for risk assessments…

Imagine you’re part of a quality team at a company developing an AI-enabled diagnostic tool to help radiologists identify early signs of lung cancer.

Here’s how the AI Risk Repository can support your risk assessment:

1. Mapping AI Risks: Start with the Domain Taxonomy and focus on the AI System Safety & Failures and Human-Computer Interaction domains. Identify risks such as misinterpretation of images, or the AI's output carrying so much weight that it overshadows the radiologist's own judgment (a filtering sketch follows this step).

Use sheet "Domain Taxonomy of AI Risks v1"

2. Lifecycle Risk Assessment: Use the Causal Taxonomy to assess risks by lifecycle stage. For pre-deployment risks, examine training data integrity: does the AI-enabled device perform equally well across diverse patient demographics? For post-deployment risks, focus on data security to protect patient privacy (see the sketch after this step).

Use sheet "Causal Taxonomy of AI Risks v1"

3. Audit and Compliance: Benchmark these findings against FDA expectations and the requirements of ISO 13485 and ISO/IEC 42001, then refine your strategy for model validation and monitoring. A simple traceability structure for this step is sketched below.

See ISO 13485 (medical device quality management) and ISO/IEC 42001 (AI management systems)
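To keep this step audit-ready, I like to capture each finding in a small traceability record that links the risk to its taxonomy entries, planned controls, and the requirement it will be checked against. This is a hypothetical structure of my own, not part of MIT's repository, and the requirement references are illustrative placeholders:

```python
from dataclasses import dataclass

@dataclass
class RiskTraceEntry:
    risk: str          # short risk statement
    domain: str        # Domain Taxonomy classification
    timing: str        # Causal Taxonomy timing (pre-/post-deployment)
    controls: str      # planned mitigation (validation, monitoring, ...)
    standard_ref: str  # illustrative pointer to the governing requirement

register = [
    RiskTraceEntry(
        risk="Model misses early-stage nodules in under-represented demographics",
        domain="AI system safety & failures",
        timing="Pre-deployment",
        controls="Stratified validation across demographic subgroups",
        standard_ref="ISO 13485 design verification / ISO/IEC 42001 AI risk assessment",
    ),
    RiskTraceEntry(
        risk="Radiologists over-rely on the AI's output",
        domain="Human-computer interaction",
        timing="Post-deployment",
        controls="Usability evaluation and post-market performance monitoring",
        standard_ref="FDA human factors guidance / post-market surveillance procedure",
    ),
]
```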

By leveraging the AI Risk Repository, your team can enhance risk management and regulatory readiness for safer, more reliable AI solutions.


Looking for help completing a risk assessment? Reach out to me at greg@stacksafe.ai.

Figure: Risk matrix based on RPN (Risk Priority Number)
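If your risk matrix scores risks with an RPN, here is a minimal sketch of the calculation. The 1-10 scales and the acceptance thresholds are illustrative assumptions, not values from MIT's repository or any standard; use the scales defined in your own risk management procedure (e.g. your FMEA SOP under ISO 14971).

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number = severity x occurrence x detection (each scored 1-10)."""
    return severity * occurrence * detection

def risk_level(score: int) -> str:
    """Illustrative thresholds only; adjust to your own risk management SOP."""
    if score >= 200:
        return "unacceptable: redesign or add controls"
    if score >= 80:
        return "mitigation required: reduce as far as possible"
    return "acceptable: document rationale and monitor"

# Example: a missed early-stage nodule (severe, occasional, hard to detect in use).
score = rpn(severity=9, occurrence=3, detection=6)
print(score, "->", risk_level(score))  # 162 -> mitigation required: reduce as far as possible
```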
