Article 12 of the EU AI Act requires high-risk AI systems to automatically record events (logging) throughout their lifetime. Simply answer a few questions about your AI system and we'll give you a first indication of where it falls under the Act.
import { AILogger } from '@vita-tak/ai-logger'
import OpenAI from 'openai'

const aiLogger = new AILogger({
  systemId: 'your-system-name',
});

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Wrap the OpenAI client so every call is logged automatically
const client = aiLogger.autoLog(openai);

const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Add to your project
npm install @vita-tak/ai-logger

Simple integration
aiLogger.autoLog(openai)

Automatic logging
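The library's internals aren't shown here, but the idea behind an auto-logging wrapper like autoLog can be sketched with a JavaScript Proxy that intercepts method calls, records them, and delegates to the original client. Everything below (wrapWithLogging, LogEntry, the toy client) is an illustrative assumption, not the @vita-tak/ai-logger API.

```typescript
// Illustrative sketch only: intercept method calls via Proxy and log them.
// These names are hypothetical, not the library's actual implementation.
interface LogEntry {
  timestamp: string;
  method: string;
  args: unknown[];
}

const logs: LogEntry[] = [];

function wrapWithLogging<T extends object>(target: T, path = ''): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      const fullPath = path ? `${path}.${String(prop)}` : String(prop);
      if (typeof value === 'function') {
        // Record the call, then delegate to the original function
        return (...args: unknown[]) => {
          logs.push({ timestamp: new Date().toISOString(), method: fullPath, args });
          return value.apply(obj, args);
        };
      }
      if (typeof value === 'object' && value !== null) {
        // Recurse so nested namespaces (e.g. chat.completions) are wrapped too
        return wrapWithLogging(value as object, fullPath);
      }
      return value;
    },
  }) as T;
}

// Toy client standing in for an SDK client
const client = wrapWithLogging({
  chat: {
    completions: {
      create: (opts: { model: string }) => ({ model: opts.model, ok: true }),
    },
  },
});

client.chat.completions.create({ model: 'gpt-4o-mini' });
console.log(logs[0].method); // chat.completions.create
```

The advantage of a Proxy-based approach is that the wrapped client keeps the SDK's original surface, so existing call sites don't change.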
Forbidden uses: social scoring, manipulative AI, and real-time remote biometric identification in public spaces (with narrow legal exceptions).
High-impact uses: biometric systems, CV-screening in hiring, education scoring, and critical infrastructure. Required: risk management, documentation, data governance, and human oversight.
Transparency duties: chatbots, AI-generated content, and synthetic media. Users must be clearly informed when AI is involved.
Low-impact uses: spam filters, recommendation features, and AI in games. No specific AI Act obligations beyond general law; voluntary codes are encouraged.
Find out whether your AI system may be subject to EU AI Act requirements.
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI law. It applies a risk-based model, where legal obligations depend on the risk level of the AI system.
Two groups are treated as high-risk: AI used in products covered by EU product-safety law (for example medical devices), and AI used in listed areas such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, and migration or border management. These systems must meet strict requirements.
Penalties are tiered and can be significant: up to EUR 35 million or 7% of global annual turnover for prohibited practices, up to EUR 15 million or 3% for other major obligations, and up to EUR 7.5 million or 1% for incorrect information to authorities. For SMEs and startups, the lower of the two amounts generally applies.
Article 12 requires high-risk AI systems to include logging capabilities that automatically record events. These logs support traceability, post-market monitoring, incident investigation, and regulatory checks.
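Article 12 prescribes goals (traceability, post-market monitoring, incident investigation) rather than a concrete log schema. As a rough sketch, the kind of event record such logging might capture could look like the following; all field names here are illustrative assumptions, not fields mandated by the Act or used by any particular library.

```typescript
// Hypothetical shape of an Article 12-style event record (illustrative only)
interface AIEventRecord {
  systemId: string;        // identifies the AI system that produced the event
  startedAt: string;       // ISO 8601 start of the period of use
  endedAt: string;         // ISO 8601 end of the period of use
  inputSummary: string;    // the input data, or a reference/hash to it
  outputSummary: string;   // the system's output, or a reference to it
  operator?: string;       // human involved in verifying the result, if any
}

function recordEvent(
  partial: Omit<AIEventRecord, 'startedAt' | 'endedAt'>,
  start: Date,
  end: Date,
): AIEventRecord {
  return { ...partial, startedAt: start.toISOString(), endedAt: end.toISOString() };
}

const entry = recordEvent(
  { systemId: 'your-system-name', inputSummary: 'Hello!', outputSummary: 'Hi there!' },
  new Date('2025-01-01T10:00:00Z'),
  new Date('2025-01-01T10:00:02Z'),
);
console.log(entry.startedAt); // 2025-01-01T10:00:00.000Z
```

Storing summaries or hashes rather than raw payloads is one way to keep logs useful for traceability while limiting the personal data they retain.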
It gives a fast first indication of your likely AI Act risk category and highlights key compliance gaps to review. It is a practical screening step, not a legal determination.
Chatbots are subject to transparency obligations. Users should be clearly informed that they are interacting with AI. AI-generated or AI-manipulated content must also be labelled where the law requires it.
Yes. If an AI system is placed on the EU market or its output is used in the EU, the Act can apply even when the provider is based outside the EU.