TL;DR
The EU AI Act becomes fully enforceable on August 2, 2026, with fines of up to 35 million euros or 7% of global revenue. It applies to any company — including Mexican, Colombian, Argentinian, Brazilian — that offers AI systems used by people inside the EU. If your SaaS has EU customers, if your fintech scores users living there, if your health app processes EU patients, it applies to you. This article covers 20 questions to determine your exposure, the 6 key technical obligations, the self-hosted stack that simplifies compliance (Langfuse, on-prem Qdrant, NeMo Guardrails) and the remediation calendar to meet the deadline.
Who is at risk and doesn't know it
The first misconception is believing the AI Act applies only to European companies. It does not. The regulation is extraterritorial in three concrete scenarios:
- The provider places an AI system on the EU market (even if established in LATAM).
- The system's output is used inside the EU.
- Subjects affected by the system are in the EU.
In practice this covers:
- Fintechs that score EU end users for credit.
- Healthtechs that process EU patient data.
- B2B SaaS with EU corporate customers using the AI features.
- HR platforms with European candidates.
- E-commerce with personalized recommendations consumed from the EU.
The second misconception is believing "the models are OpenAI / Anthropic, not mine". The AI Act defines the deployer (the operator of the system in production) as a regulated party independent of the model provider. You are responsible for how you integrate, what data you pass and how you use the response.
The 20 questions: does it apply to you?
Answer yes/no. Three or more "yes" answers in section B (nature of the system) mean you must initiate compliance now.
A. Territorial scope
- Does your product have end users in any EU/EEA country or the UK under reciprocal terms?
- Do you bill or sign contracts with companies based in the EU?
- Is your site/app available in EU languages, and does it accept payments in euros?
- Do you process data of people residing in the EU (even without a signed contract)?
B. Nature of the system
- Does your product use AI models (LLMs, classical ML, vision, voice) in decisions that affect end users?
- Do those decisions influence access to credit, employment, education, healthcare, housing, justice, migration or contract execution?
- Do you use biometrics (facial, voice, fingerprint) for identification or authentication?
- Do you generate synthetic content (text, image, voice) that could be confused with human-made?
- Does your system perform scoring, classification or personalized recommendation of users?
C. Current governance
- Do you have documented which AI models each feature uses?
- Do you store traces with inputs, outputs and decisions for every inference (with auditable retention)?
- Do you have a human review/appeal process for automated decisions?
- Does your team know what a "high-risk system" is per the AI Act Annex III?
- Do you have a formal AI governance owner?
D. Data and training
- Did you train or fine-tune models with user data?
- Did you document the provenance and license of that data?
- Did you check for bias (gender, age, ethnicity) in model behavior?
- Do you have a process for users to request access/deletion of data used in inference?
E. Transparency
- Do you tell the user when they are interacting with AI (not only in T&C, but in the UI)?
- Do you clearly disclose when content was synthetically generated?
Risk classification: the 4 categories
The AI Act organizes systems into a pyramid:
| Level | Examples | Obligations |
|---|---|---|
| Unacceptable risk (prohibited) | Government-style social scoring, subliminal manipulation, emotion recognition in workplace/education | You may not operate it in the EU |
| High risk | Credit scoring, CV filtering, medical devices, biometrics | The full package: DPIA, registration, human oversight, traces |
| Limited risk | Chatbots, deepfakes, synthetic content | Transparency (inform the user) |
| Minimal risk | Spam filters, AI in video games | Voluntary |
Most LATAM SMB exporters fall into high risk or limited risk. The burdens are very different.
The 6 technical obligations that matter
For high-risk systems (art. 8-15 of the AI Act):
1. Quality management system (QMS)
Living documentation covering: architecture, models used, training datasets, performance metrics, bias tests, update procedures. Not a PDF — a versioned set of artifacts in Git.
2. Data and data governance
Datasets must be "relevant, representative, free of errors and complete." Requires documented provenance, statistical analysis and bias mitigation. For fine-tuning: clear license per source.
3. Technical documentation
A technical sheet that any authority can audit: architecture, design decisions, known limitations, metrics. The minimum contents are listed in Annex IV of the AI Act.
4. Automatic event logging
Each inference must leave an auditable trace for at least 6 months: input, output, model version, user, timestamp. This is where self-hosted Langfuse pays off; you cannot depend on US-hosted LangSmith if the data must stay in the EU.
5. Transparency to the user
The end user must know they are interacting with an AI system, what decisions are made automatically and how they can appeal. In UIs: a visible component ("This assistant is AI") and a clear escalation path to a human.
6. Human oversight
Design that allows human intervention in material decisions. Not "a human looks at the log later"; rather "a human can change the decision before it takes effect."
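Obligation 4 (event logging) can be prototyped in a few lines before adopting a full tool like Langfuse. A minimal sketch of one auditable record per inference; the field and function names here are illustrative, not a Langfuse API:

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InferenceTrace:
    """One auditable record per inference: input, output, model version, user, timestamp."""
    trace_id: str
    user_id: str
    model: str
    model_version: str
    input_text: str
    output_text: str
    timestamp: str

def record_trace(user_id: str, model: str, model_version: str,
                 input_text: str, output_text: str) -> InferenceTrace:
    # UTC timestamps make cross-region audits unambiguous.
    return InferenceTrace(
        trace_id=str(uuid.uuid4()),
        user_id=user_id,
        model=model,
        model_version=model_version,
        input_text=input_text,
        output_text=output_text,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Append-only JSONL is trivial to export when an auditor asks for evidence.
trace = record_trace("u-42", "gpt-4o", "2026-01", "score this applicant", "approved")
line = json.dumps(asdict(trace))
```

Whatever tool you adopt, the key design property is the same: the record is written at inference time, is immutable, and can be exported in bulk for the retention window.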
Self-hosted stack for compliance
European firms sell "AI Act as a Service" at €300-500/hour. A LATAM SMB can build 80% of what it needs with free software and its own server.
| Requirement | Recommended OSS stack | Why |
|---|---|---|
| Inference logging | Langfuse self-hosted | Apache 2.0; 6+ months retention; full export for audit |
| EU data residency | Droplet/VPS in Frankfurt, Paris or Amsterdam | All major clouds offer EU regions |
| On-prem vector DB | Qdrant in the same EU region | Sensitive data stays in the EU |
| Guardrails (content, PII, topic) | NeMo Guardrails + Guardrails AI | Automatic block of policy-violating responses |
| Bias detection | Fairlearn + Aequitas | Battery of pre-deploy metrics |
| Living docs | MkDocs + Git repo | Auditable versioning |
| Consent / access | /user/{id}/ai-data GET and DELETE endpoints | Covers GDPR art. 15 (access) and art. 17 (erasure) alongside the AI Act |
| Infra observability | Grafana + Prometheus | Evidence of uptime and performance |
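The guardrails row above is normally configured in NeMo Guardrails' own rule files; the underlying idea can be sketched in plain Python. The regexes below are illustrative only and nowhere near production-grade:

```python
import re

# Illustrative PII patterns; a real deployment would rely on
# NeMo Guardrails / Guardrails AI with far more robust detectors.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Return the text with PII masked plus the list of categories found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

clean, hits = redact_pii("Contact ana@example.com, IBAN DE89370400440532013000")
```

Running a filter like this on both input and output, and logging what it blocked, doubles as evidence for the audit trail.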
The cost of a complete EU-resident stack is €120-180/month per environment. Compared with €30,000-80,000 for a last-minute audit + remediation, it's an order of magnitude better.
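The consent/access row in the stack table can also be sketched. A minimal version of the GET and DELETE handlers, using an in-memory store instead of a real database; the store and function names are illustrative:

```python
from typing import Optional

# Stand-in for the database of per-user AI inference data.
AI_DATA_STORE: dict[str, list[dict]] = {
    "u-42": [{"feature": "credit_scoring", "input": "...", "output": "approved"}],
}

def get_user_ai_data(user_id: str) -> Optional[list[dict]]:
    """GET /user/{id}/ai-data: GDPR art. 15 access request."""
    return AI_DATA_STORE.get(user_id)

def delete_user_ai_data(user_id: str) -> bool:
    """DELETE /user/{id}/ai-data: GDPR art. 17 erasure request."""
    return AI_DATA_STORE.pop(user_id, None) is not None
```

In production these would sit behind authenticated routes and also trigger deletion in the vector DB and trace store, so that one request exercises every place the user's data lives.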
Remediation calendar
If today is April 2026 and you have high-risk exposure:
Weeks 1-4 (April-May)
- AI system inventory: which models, which data, which features depend on them.
- Formal risk classification.
- Gap analysis vs the 6 obligations.
- Appoint a responsible owner (internal or fractional).
Weeks 5-8 (May-June)
- Deploy logging stack in EU region.
- Migrate sensitive data to providers with EU data residency.
- Implement transparency in the UI (banners, notices, appeal paths).
- Write the first version of technical documentation.
Weeks 9-12 (June-July)
- Bias and robustness evaluations.
- Internal audit dry run.
- Close identified gaps.
- Train the team (legal + product + engineering).
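The bias evaluation in weeks 9-12 can start with a single metric. Fairlearn ships this as `demographic_parity_difference`; the computation it performs can be sketched in pure Python on a toy example:

```python
def selection_rate(y_pred: list[int], groups: list[str], group: str) -> float:
    """Fraction of positive decisions within one demographic group."""
    decisions = [y for y, g in zip(y_pred, groups) if g == group]
    return sum(decisions) / len(decisions)

def demographic_parity_diff(y_pred: list[int], groups: list[str]) -> float:
    """Max minus min selection rate across groups; 0.0 means parity."""
    rates = [selection_rate(y_pred, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: 1 = credit approved. Group "a" gets 3/4 approvals,
# group "b" gets 1/4, so the parity gap is 0.5.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_diff(y_pred, groups)
```

The number alone proves nothing; the audit artifact is the script, the dataset snapshot it ran on, and the threshold your QMS declares acceptable.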
Weeks 13-14 (late July)
- Change freeze.
- Confirm with legal advisor.
- Prepare authority-response packet.
Deadline: August 2, 2026. From that day on, any inspection or complaint can trigger sanctioning proceedings.
Common mistakes we see
- "But I don't sell to Europe directly." Your enterprise client who resells does. And that client will demand contractual compliance, just like they demand GDPR.
- "My model is OpenAI's, they are responsible." No. OpenAI complies as a foundation model provider; you comply as the system deployer.
- "Let's wait and see what happens after August." Fines apply from day one. There is no grace period for already-deployed systems.
- "We'll hire a European lawyer when the time comes." Audit and remediation timelines are 3-6 months. Hiring on July 15 is risky.
- "We have ISO 27001, that's enough." ISO 27001 covers information security. The AI Act adds AI-specific management and documentation obligations; ISO 42001 (AI Management System) or equivalent, plus the specific documentation, is the closest fit.
Costs: what we see in real clients
| Company size | Scope | Remediation cost |
|---|---|---|
| Startup (<20 people), one product, limited risk | Transparency + basic logging | $5,000-15,000 USD |
| SMB (20-200), several products, at least one high-risk | QMS, docs, guardrails, oversight | $15,000-60,000 USD |
| Scale-up (>200), multiple high-risk systems | Full program + ISO 42001 | $60,000-250,000 USD |
Expected fine for non-compliance and inspection: minimum 2 million EUR + cessation of EU operations.
[Chart: remediation cost (USD) vs realistic fine exposure (USD). Remediation ranges measured across Numoru AI Act Diagnosis engagements; post-inspection exposure calibrated to published GDPR fines and expected AI Act enforcement ratios. Source: Numoru consulting data and the public GDPR fine register (CMS GDPR Enforcement Tracker, 2025).]
Business & commercial impact
Why the buying window is now, not after August
Unlike soft regulations that drift, the AI Act has a hard date baked into the statute. The only variables a LATAM exporter can control are scope, remediation quality and timing. Every week past April 2026 compresses delivery and pushes cost up. Companies that start in June are routinely quoted 2-3× the April rate because consultants, DPOs and auditors go on retainer with bigger clients first.
[Chart: public quotes from Numoru and 4 partner boutique consultancies for a LATAM B2B SaaS with one high-risk use case; prices rise as capacity dries up. Source: Numoru sales data plus public quotes from 4 partner firms, 2025 to Q1 2026 sample.]
Industries and ticket ranges
[Chart: AI Act service pricing by buyer profile (Numoru, 2026).]
Public benchmarks and enforcement references
- European Commission — AI Act enforcement architecture
- CMS — GDPR Enforcement Tracker
- ICO — case studies of AI regulatory action
Illustrative case — LATAM B2B SaaS with EU customers
Colombian SaaS (scheduling + HR features) serving 14 EU enterprise customers
ROI calculator — AI Act remediation (SMB high-risk)
Mid-market LATAM exporter: remediation vs status quo (18 months)
| Item | 18-month impact |
|---|---|
| Diagnosis + remediation (one-time) | −$58,000 |
| Ongoing retainer (18 mo × $2,400) | −$43,200 |
| Internal eng time (~320 h × $95) | −$30,400 |
| Infra (EU droplet + Langfuse + Qdrant, 18 mo) | −$3,240 |
| EU ARR retained | +$1,900,000 |
| Gross margin on retained ARR | +$1,292,000 |
| Expected fine avoided | +$680,000 |
| Incremental EU deals (velocity) | +$420,000 |
| Net 18-mo contribution | +$2,257,160 |
Pricing tiers Numoru sells
Diagnosis (the "AI Act Diagnosis in 10 days"):
- 20-question exposure checklist
- Risk classification per AI system
- Gap analysis vs 6 obligations
- Prioritized remediation roadmap
- 2-hour executive workshop
- Deliverable: 30-page PDF + Miro board

Implementation:
- EU-region OSS stack deployment
- QMS + technical docs in Git
- Bias + robustness testing
- Transparency UI + appeal workflow
- Team training (legal + product + eng)
- Mock internal audit
- Authority-response packet

Ongoing retainer:
- Quarterly documentation review
- New-feature AI-impact assessment
- Langfuse retention & audit readiness
- Regulatory change monitoring
- Annual mock-audit dry run
- Answer EU customer DPIA asks
Scale-ups with multiple high-risk systems or ISO 42001 scope: master contract from $180,000. Ticket grows with number of AI systems and EU revenue exposure.
What about other regulations
- ISO 42001 (AI Management System) — voluntary but useful certification; many EU tenders already ask for it as evidence of AI Act compliance.
- GDPR — still applies and intersects: personal data used in training requires a GDPR legal basis AND AI Act evidence.
- NIS2 — if you operate critical infrastructure, mandatory cybersecurity adds on top.
- Mexico / Brazil / Colombia — local frameworks are aligning; implementing the AI Act puts you ahead for when your country regulates.
FAQ
If I'm an independent consultant and use AI to write reports for EU clients, does it apply?
As a personal tool, no. If you deliver automated outputs that influence the European end client's decisions, yes.

Does running models locally (Ollama, Llama) reduce my exposure?
On data residency, yes: nothing leaves your infrastructure. On compliance, no: you are still the deployer and must produce the same documentation.

Can I keep using Anthropic / OpenAI if I have EU customers?
Yes, as long as you sign the corresponding DPAs and document that the provider meets its model-provider obligations. Verify that they offer an EU region for the endpoint.

Does the AI Act affect only generative AI?
No. Classical ML (scoring, classification) is also covered when it falls into the high-risk categories.

What documentation should I have ready on August 2?
1) System inventory, 2) risk classification, 3) DPIA and AI impact assessment, 4) per-system technical documentation, 5) active logs, 6) UI-level transparency, 7) oversight and appeal procedures, 8) incident response plan.

How do I start with zero governance today?
With the 20-question checklist plus a gap analysis. Within a week you have clarity on scope and cost.
Next steps
If your company answers yes to three or more questions in section B and has EU exposure, the cost of waiting grows every week. Numoru's "AI Act Diagnosis in 10 days" service includes a complete gap analysis and a prioritized remediation plan with a concrete OSS stack. The next article in this series details the technical implementation of auditable logging with Langfuse in an EU region.