A GDPR-aligned AI project requires far more than “the AI works.” This article maps the seven concrete stages from PoC to production, aligned with KVKK Board decisions and the EU AI Act.
Why This Is Not An Ordinary Software Project
AI projects diverge from classical software projects in three ways: (1) training and context data constitute a personal-data processing activity, (2) model decisions can affect individuals (GDPR Article 22 — automated decision-making), (3) explainability and an audit trail are legal obligations. Together these three axes require the project to be designed from the start with privacy in mind.
7-Step Checklist
Step 1: Data Inventory and DPIA
Categorize all data sources to be used for the AI project: personal data / special category / anonymous. Check whether a Data Protection Impact Assessment (DPIA) is required — per GDPR Article 35 it is mandatory for high-risk activities. Output: a 5-page DPIA report and data flow diagram.
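A data inventory like the one above can be kept machine-readable, so the DPIA trigger check is repeatable. The sketch below is illustrative only: the `DataSource` fields and the high-risk heuristic are assumptions, not an exhaustive GDPR Article 35 test.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """One entry in the project's data inventory (fields are illustrative)."""
    name: str
    category: str   # "personal", "special_category", or "anonymous"
    high_risk: bool # e.g. large-scale profiling or systematic monitoring

def dpia_required(sources: list[DataSource]) -> bool:
    """Simplified GDPR Art. 35 heuristic: any special-category or
    high-risk source triggers a Data Protection Impact Assessment."""
    return any(s.category == "special_category" or s.high_risk
               for s in sources)

inventory = [
    DataSource("crm_contacts", "personal", high_risk=False),
    DataSource("health_survey", "special_category", high_risk=True),
]
print(dpia_required(inventory))  # True: the health survey triggers a DPIA
```

A real assessment weighs more criteria (scale, vulnerability of data subjects, new technologies), but keeping the inventory as data makes the quarterly DPIA update in Step 7 a re-run rather than a rewrite.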
Step 2: Legal Basis and Notice
The legal basis for data processing (consent / contractual necessity / legitimate interest) must be clearly established. Whether AI training can rest on legitimate interest must be assessed with a documented balancing test. The privacy notice should explicitly mention "decision support via artificial intelligence"; where explicit consent is required, it is collected in a separate flow.
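The "separate flow" above implies a gate between consent records and the training pipeline. A minimal sketch, assuming a hypothetical `ConsentRegistry` (all names are invented for illustration):

```python
# Valid legal bases for this illustrative gate; a legitimate-interest
# basis would additionally require a documented balancing test.
VALID_BASES = {"consent", "contract", "legitimate_interest"}

class ConsentRegistry:
    """Maps data subjects to the legal basis on file for AI training."""

    def __init__(self) -> None:
        self._records: dict[str, str] = {}  # subject_id -> legal basis

    def record(self, subject_id: str, basis: str) -> None:
        if basis not in VALID_BASES:
            raise ValueError(f"unknown legal basis: {basis}")
        self._records[subject_id] = basis

    def admissible_for_training(self, subject_id: str) -> bool:
        """Only records with a registered basis may enter the pipeline."""
        return subject_id in self._records

registry = ConsentRegistry()
registry.record("user-42", "consent")
print(registry.admissible_for_training("user-42"))  # True
print(registry.admissible_for_training("user-99"))  # False
```

The point of the gate is that admissibility is checked at ingestion time, not reconstructed after the model is trained.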
Step 3: Architecture (Privacy by Design)
Data minimization, end-to-end encryption, role-based access, audit trail, and federation/edge-AI options are decided at the design stage. A dedicated access policy is defined for embeddings and vector stores.
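The "dedicated access policy for embeddings and vector stores" can be enforced as a thin authorization layer in front of every query. The sketch below is a minimal role-to-collection check; the role names, collection names, and `authorized_query` helper are all hypothetical.

```python
# Role-based access policy: which roles may query which embedding
# collections. In production this would live in a policy store, not code.
ACCESS_POLICY: dict[str, set[str]] = {
    "support_agent": {"faq_embeddings"},
    "data_scientist": {"faq_embeddings", "ticket_embeddings"},
}

def authorized_query(role: str, collection: str,
                     query_vector: list[float]) -> dict:
    """Reject the query before it ever reaches the vector store."""
    if collection not in ACCESS_POLICY.get(role, set()):
        raise PermissionError(f"role '{role}' may not query '{collection}'")
    # ... forward query_vector to the actual vector store here ...
    return {"collection": collection, "hits": []}  # placeholder result
```

Placing the check in front of the store (rather than filtering results afterwards) keeps the audit trail simple: every denied query is a single logged event.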
Step 4: PoC Build + First Ethical Review
During the 4-8 week PoC, fairness, explainability, and robustness tests are conducted. Bias detection uses cross-demographic test sets. Output: model card, ethical impact report.
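One concrete fairness check behind the cross-demographic test sets is the demographic parity gap: the difference in positive-outcome rates between groups. A self-contained sketch (the threshold and group names are illustrative):

```python
def positive_rate(outcomes: list[int]) -> float:
    """Share of positive decisions (1 = approved) in one group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(by_group: dict[str, list[int]]) -> float:
    """Max difference in positive-outcome rate across demographic groups."""
    rates = [positive_rate(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Synthetic model decisions per demographic group in the PoC test set:
decisions = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
gap = demographic_parity_gap(decisions)
print(f"{gap:.2f}")  # 0.50 — a gap this size should fail the ethical review
```

Demographic parity is only one of several fairness definitions (equalized odds, calibration); the model card should state which ones were tested and why.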
Step 5: Security Hardening
All risk categories are checked under the OWASP LLM Top 10 framework. Prompt injection, sensitive data leakage, and model abuse scenarios are simulated with red-team testing. PII redaction, output validation, and rate limiting are deployed.
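A PII redaction layer, in its simplest form, substitutes typed placeholders for matched patterns before text reaches the model or its logs. The two regexes below are deliberately crude examples; production systems combine patterns with NER models and locale-specific rules.

```python
import re

# Illustrative patterns only; real deployments need far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +90 555 123 4567"))
# Contact [EMAIL] or [PHONE]
```

Typed placeholders (rather than plain deletion) preserve sentence structure, which matters when the redacted text is still fed to the model as context.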
Step 6: Production Launch and Governance
At deployment, model inventory, version history, audit trail, and alert system (drift, performance, cost anomaly) are stood up. The Data Protection Contact is brought into the process. The AI Governance Framework document is published.
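For the drift alert mentioned above, a common lightweight metric is the Population Stability Index (PSI) between the training-time and production score distributions. The thresholds below (0.10 / 0.25) are widely used rules of thumb, not normative values.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over binned proportions (each sums to 1).
    Bins where either proportion is zero are skipped in this sketch."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time score distribution
current = [0.10, 0.20, 0.30, 0.40]   # current production distribution
score = psi(baseline, current)
if score > 0.25:
    print(f"ALERT: significant drift (PSI={score:.3f})")
elif score > 0.10:
    print(f"WARN: moderate drift (PSI={score:.3f})")
```

Wiring this into the alert system means drift fires on the same channel as performance and cost anomalies, so the governance team sees one unified signal.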
Step 7: Continuous Monitoring and Re-evaluation
In production: monthly model performance + bias report, quarterly DPIA update, annual full GDPR audit. When a new data source is added, the privacy notice is updated.
The 12-Point Verification Card
- DPIA completed and approved by the data controller
- Privacy notice clearly states AI activity
- Consent flow (where required) kept separate
- Data minimization principle applied across the pipeline
- PII detection and redaction layer active
- Bias and fairness tests documented
- Model card published
- Explainability (SHAP/LIME) report available
- Audit trail (who, what, when) retained
- AI Governance Framework document signed
- Data Protection Contact engaged in the process
- Re-evaluation calendar (at least annual) defined
Protecting the ROI
The cost of GDPR non-compliance can exceed the cost of the project itself. Considering that the average administrative fine in 2025 KVKK Board decisions approached 1.8 million TL, this spend should be positioned not as a "compliance cost" but as an "investment in avoiding non-compliance."