EU AI Act compliance
Regulation: (EU) 2024/1689 — Artificial Intelligence Act
Last updated: 2026-04-28 · Version: 1.0
This page documents how BestCoder is designed to comply with the EU Artificial Intelligence Act ("EU AI Act"). It is a living document that we review at least quarterly and after each material change to the Service.
1. Our role under the AI Act
BestCoder operates as a provider of an AI-enabled developer tool. We do not operate or train any general-purpose AI model ("GPAI") — the models are operated by third parties (Anthropic, OpenAI, Google, Mistral, or self-hosted runtimes you control).
When you use BestCoder, you typically act as a deployer of those upstream models, with BestCoder serving as a distributor of the access path.
2. Risk classification
The Service does not, on its own, perform any of the practices prohibited under Article 5 and is not a high-risk AI system within the meaning of Annex III. Specifically:
- BestCoder does not assess natural persons (no biometric classification, no emotion recognition, no social scoring).
- BestCoder does not itself make decisions that affect access to essential services (no credit scoring, no insurance underwriting, no employment decisioning).
- The Service is a developer tool used to write, audit, and deploy software.
Customers who use BestCoder to build high-risk AI systems remain fully responsible for their own compliance under the AI Act.
3. Transparency obligations (Article 50)
When you interact with the in-app AI assistant, the BestCoder UI labels every AI-generated message and every audit summary with a "Generated by AI · model · session" footer. Output is never presented as human authorship.
Generated artefacts (code, commit messages, audit reports) are versioned in your Git history; provenance is verifiable.
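Because generated artefacts live in ordinary Git history, provenance checks reduce to standard Git tooling. As a minimal sketch (the `Generated-by: BestCoder-AI` commit trailer is an assumption for illustration, not a documented BestCoder convention), AI-authored commits could be filtered like this:

```shell
# Hypothetical illustration: assume AI-authored commits carry a
# "Generated-by: BestCoder-AI" trailer in the commit message.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q

# One AI-generated commit (with the assumed trailer) and one human commit.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feat: add parser" -m "Generated-by: BestCoder-AI"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "docs: human-written note"

# List only commits whose messages carry the AI-provenance trailer.
result=$(git log --format='%s' --grep='Generated-by: BestCoder-AI')
echo "$result"
```

Because trailers are part of the commit message, they survive cherry-picks and rebases, so the audit trail travels with the code.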
4. Governance and documentation
We maintain:
- An AI use register listing every AI model exposed by the Service, with provider, intended use, and known limitations.
- A risk log documenting incidents, near-misses, and mitigations.
- A model card index covering every default model surfaced in the UI, available in the cockpit settings.
5. Human oversight
The Service is opt-in: no AI action mutates your repository, your infrastructure, or your billing without an explicit human click. The Agent operates under your local user permissions; it cannot escalate privileges.
6. Training data
BestCoder does not train, fine-tune, or evaluate any model on your code, your secrets, or your prompts. Our contractual clauses with upstream providers (and their no-train commitments) are documented in the DPA and the Subprocessors list.
7. EU AI Act conformity assessment (when applicable)
For customers deploying BestCoder as part of a high-risk system, we provide on request:
- A technical documentation pack describing inputs, outputs, and decision points exposed by the Service.
- Conformity assessment evidence covering the BestCoder code paths in scope.
- Logs sufficient to enable post-market monitoring (90 days hot and 12 months cold storage on Team plans; 7 years on Enterprise).
Contact legal@bestcoder.app to request the pack.
8. Reporting concerns
If you believe a BestCoder feature breaches the AI Act, write to dpo@bestcoder.app. We acknowledge within 72 hours and respond substantively within 14 days.