1. What is ComplyAI and why does it exist?

ComplyAI is a compliance screening tool for product teams. Every time you build or update an AI feature, there are EU legal obligations you must meet — mainly around data privacy and AI regulation.

The four frameworks it checks are:

GDPR
How personal data is collected, processed and stored. Applies to any feature that touches user data.
EU AI Act
The EU's AI law. Classifies AI systems by risk and sets obligations — especially for high-risk AI or prohibited practices.
NIST AI RMF
A framework for managing AI risk across four areas: Govern, Map, Measure, Manage. Used as a best-practice benchmark.
ISO 42001
International standard for AI management. Covers risk assessment, data governance, human oversight and continual improvement.

Without ComplyAI, checking these rules required lawyers and took days. ComplyAI does the initial screening in minutes — so your team can catch problems before a feature goes live.

⚠️ Important: ComplyAI is a screening tool, not legal advice. If a report flags HIGH risk or triggers an escalation flag, a qualified lawyer must review before release.
2. How to run a screening — step by step

Click "← Screening" in the top right to go to the screening page. Then:

1. Prepare your documentation. Write up what your feature does in plain text or as a document. Include: what data it collects, where it goes, how long it's kept, who can see it, and what the AI does. The more detail, the better the report.
2. Upload or paste your documentation. Drag and drop a file (.md, .txt, .docx, .pdf) into the upload zone, or paste text into the input box at the bottom of the page.
3. Press Enter or click the send button. The pipeline starts immediately. You will see a live progress indicator showing which AI agent is working. A full screening typically takes 2–4 minutes.
4. Review the results. When complete, a summary card appears in the chat and the full report opens in the right panel. You can open the full report in a new tab, save it as a PDF, or export it as a Word document.
5. Act on the findings. If overall risk is HIGH or the escalation flag is set — stop and get legal review before proceeding. See Section 3 for what each result means.
💡 Tip: To screen another feature, click "New screening" in the top right. This clears the current session so you start fresh.
3. How to read the report

Risk levels — every framework gets its own rating, plus one overall rating:

HIGH: Serious compliance issues. The feature must not be released without legal review and likely significant changes.

MEDIUM: Issues exist but are manageable. Address all recommendations before release.

LOW: No major issues. Minor improvements may be suggested. Feature can likely proceed with standard review.
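The release gate implied by these levels can be sketched as a small lookup. This is purely illustrative; the function name and wording are not part of ComplyAI:

```python
# Map each overall risk rating to the action described above.
RISK_ACTIONS = {
    "HIGH": "Stop: legal review required before release; expect significant changes.",
    "MEDIUM": "Address all recommendations before release.",
    "LOW": "Proceed with standard review; apply minor improvements if suggested.",
}

def release_action(overall_risk: str) -> str:
    """Return the recommended next step for an overall risk rating."""
    return RISK_ACTIONS[overall_risk.upper()]
```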

The four framework sections — each answers a different question:

GDPR
Did we handle personal data correctly? Is there a legal basis? Do users have rights? Is there a cross-border transfer issue?
EU AI Act
Is this AI prohibited or high-risk? Do users know they're interacting with AI? Is there meaningful human oversight?
NIST AI RMF
Is there accountability, bias evaluation, performance monitoring, and an incident response plan?
ISO 42001
Is there a documented risk assessment? Are data governance and human oversight controls in place?

Escalation flag — the most important signal in the report:

🚨 Escalation flag set means the feature has triggered a mandatory human review. This happens when: the feature processes special category data (e.g. health, biometric), the AI may be prohibited or high-risk under the EU AI Act, or a DPIA is required. Do not release without a qualified lawyer reviewing the report first.

Confidence score — shown as a percentage. Below 70% means the agents had missing information and could not make a fully confident assessment. Provide more detail and re-screen.
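Taken together, the escalation flag and the confidence score define a simple decision rule. A sketch in illustrative Python follows; the field names (`escalated`, `overall_risk`, `confidence`) are assumptions for the example, not the report's actual schema:

```python
def needs_human_review(report: dict) -> bool:
    """True when the report must go to a qualified lawyer before release."""
    # Escalation flag or an overall HIGH rating both block release.
    return report.get("escalated", False) or report.get("overall_risk") == "HIGH"

def needs_rescreen(report: dict, threshold: float = 0.70) -> bool:
    """True when confidence is below 70%: add detail and screen again."""
    return report.get("confidence", 1.0) < threshold
```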

Recommendations are labelled by urgency:

Immediate — must be fixed before anything else. Blocks release.
Before release — must be completed before the feature goes live.
Post release — good practice improvements to complete after launch within a reasonable timeframe.
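The three labels form a strict order of urgency. Sorting a report's recommendations accordingly could look like this (the list-of-dicts structure is assumed for illustration):

```python
# Lower number = more urgent; "Immediate" blocks everything else.
URGENCY_ORDER = {"Immediate": 0, "Before release": 1, "Post release": 2}

def sort_recommendations(recs: list[dict]) -> list[dict]:
    """Return recommendations ordered most urgent first."""
    return sorted(recs, key=lambda r: URGENCY_ORDER[r["urgency"]])
```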
4. Admin — managing reports, knowledge base and features

Reports — shows every screening ever run. Click any row to open the full report. Hover a row to see the Remove button. Use checkboxes to select and bulk delete multiple reports.

Escalated — filters only reports where the escalation flag was set. Use this as your legal review queue — these features need sign-off before release.

Knowledge base — upload files that the AI agents read during every screening. Categories:

Regulatory base (GDPR, EU AI Act, NIST, ISO 42001) — upload the regulation texts or summaries as .md files. Agents search these for relevant excerpts during screening. The more precise your files, the more accurate the legal analysis.
Agent instructions — these override the default behaviour of each AI agent. Only change these if you know what you are doing: they are the system prompts that control how each agent analyses your documentation.
Business knowledge base (BKB) — upload a file describing your company's own red lines, scope limits, and intent statements. Example: "We do not process biometric data" or "Our product is B2B only, never consumer-facing." Agents will flag any feature that contradicts these.
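A minimal hypothetical BKB file, using the two red-line examples above, could read:

```
Business knowledge base: red lines and scope
- We do not process biometric data.
- Our product is B2B only, never consumer-facing.
```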

Features database — upload .md files describing features that have already been screened and approved. Agents use these to understand cumulative risk — how a new feature fits into what already exists in the product. One file per approved feature.

💡 Best practice: After each approved feature, add it to the Features database. This gives agents a full picture of your product over time, making future screenings more accurate.
⚠️ All knowledge base files must be .md format. Create them in any text editor and save as filename.md. No special formatting needed — plain text works fine.