Get ready for the
EU AI Act

Article 12 of the EU AI Act requires high-risk AI systems to log events automatically throughout their lifetime. Answer a few questions about your AI system and we'll tell you which obligations may apply.

log-sdk v1.0.4

import { AILogger } from '@vita-tak/ai-logger';
import OpenAI from 'openai';

const aiLogger = new AILogger({
  systemId: 'your-system-name',
});

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const client = aiLogger.autoLog(openai);

const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }]
});
1. Install SDK

Add to your project: npm install @vita-tak/ai-logger
2. Wrap Your Client

Simple integration: aiLogger.autoLog(openai)
3. Stay Compliant

Automatic logging

EU AI Act Ready

Four risk levels, different obligations

Unacceptable risk

Prohibited

Forbidden uses: social scoring, manipulative AI, and real-time remote biometric identification in public spaces (with narrow legal exceptions).

High risk

Strict requirements

High-impact uses: biometric systems, CV-screening in hiring, education scoring, and critical infrastructure. Required: risk management, documentation, data governance, and human oversight.

Limited risk

Transparency

Transparency duties: chatbots, AI-generated content, and synthetic media. Users must be clearly informed when AI is involved.

Minimal risk

Voluntary codes

Low-impact uses: spam filters, recommendation features, and AI in games. No specific AI Act obligations beyond general law; voluntary codes are encouraged.

EU AI Act

Risk Assessment Quick Check

Find out whether your AI system may be subject to EU AI Act requirements.

Fast self-assessment
No registration
Clear outcome and gap report
EU AI Act

Frequently asked questions

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive AI law. It applies a risk-based model, where legal obligations depend on the risk level of the AI system.

What counts as high-risk AI systems?

Two groups are treated as high-risk: AI used in products covered by EU product-safety law (for example medical devices), and AI used in listed areas such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, and migration or border management. These systems must meet strict requirements.

What are the penalties for non-compliance?

Penalties are tiered and can be significant: up to EUR 35 million or 7% of global annual turnover for prohibited practices, up to EUR 15 million or 3% for other major obligations, and up to EUR 7.5 million or 1% for incorrect information to authorities. For SMEs and startups, the lower of the two amounts generally applies.

What does Article 12 require?

Article 12 requires high-risk AI systems to include logging capabilities that automatically record events. These logs support traceability, post-market monitoring, incident investigation, and regulatory checks.
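A minimal sketch of what such an event record could contain; the field names below are our assumption, not the Act's wording or the SDK's actual schema:

```typescript
// Hypothetical event-log record for a high-risk AI system.
// Field names are illustrative; Article 12 itself only requires that
// events be recorded automatically over the system's lifetime.
interface AIEventRecord {
  systemId: string;   // which AI system produced the event
  timestamp: string;  // ISO 8601 time of the event
  model: string;      // model version in use
  inputRef: string;   // reference to the input data
  outputRef: string;  // reference to the output produced
}

function recordEvent(
  systemId: string,
  model: string,
  inputRef: string,
  outputRef: string
): AIEventRecord {
  return { systemId, model, inputRef, outputRef, timestamp: new Date().toISOString() };
}
```

Records like this are what make traceability and incident investigation possible after the fact.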

How does our Risk Assessment Quick Check help your team?

It gives a fast first indication of your likely AI Act risk category and highlights key compliance gaps to review. It is a practical screening step, not a legal determination.

What are the rules for chatbots?

Chatbots are subject to transparency obligations. Users should be clearly informed that they are interacting with AI. AI-generated or AI-manipulated content must also be labelled where the law requires it.
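One simple way to surface that disclosure in a chat UI, as a hedged illustration (the helper below is ours, not part of any SDK):

```typescript
// Illustrative helper: prefix assistant replies with an AI disclosure
// so users are clearly informed they are interacting with AI.
function withAIDisclosure(reply: string): string {
  return `[AI assistant] ${reply}`;
}

withAIDisclosure('How can I help?'); // "[AI assistant] How can I help?"
```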

Does the EU AI Act also apply to companies outside the EU?

Yes. If an AI system is placed on the EU market or its output is used in the EU, the Act can apply even when the provider is based outside the EU.