Industry Analysis · ai-act · compliance · regulation

EU AI Act: compliance guide for LATAM exporters before August 2, 2026

Fixed regulatory deadline: August 2, 2026. Fines up to €35M or 7% of global revenue. 20-point checklist, self-hosted stack for data residency, and remediation calendar for Latin American SaaS and fintech handling European data.

Numoru Strategy · Published on April 19, 2026 · 14 min read

TL;DR

The EU AI Act becomes fully enforceable on August 2, 2026. Fines up to 35 million euros or 7% of global revenue. It applies to any company — including Mexican, Colombian, Argentinian, Brazilian — that offers AI systems used by people inside the EU. If your SaaS has EU customers, if your fintech scores users living there, if your health app processes EU patients, it applies to you. This article covers 20 questions to determine your exposure, the 6 key technical obligations, the self-hosted stack that simplifies compliance (Langfuse, on-prem Qdrant, NeMo Guardrails) and the remediation calendar to make the deadline.

  • Aug 2, 2026: full enforceability. Fixed deadline, no grace period.
  • €35M / 7%: maximum fine, of global revenue, whichever is higher.
  • 3-6 months: typical remediation window. Hiring on July 15 is reckless.
  • €600M+: EU GDPR fines in 2023, a reference for enforcement intensity.

Who is at risk and doesn't know it

The first misconception is believing the AI Act applies only to European companies. It does not. The regulation is extraterritorial in three concrete scenarios:

  1. The provider places an AI system on the EU market (even if established in LATAM).
  2. The system's output is used inside the EU.
  3. Subjects affected by the system are in the EU.

In practice this covers:

  • Fintechs that score EU end users for credit.
  • Healthtechs that process EU patient data.
  • B2B SaaS with EU corporate customers using the AI features.
  • HR platforms with European candidates.
  • E-commerce with personalized recommendations consumed from the EU.

The second misconception is believing "the models are OpenAI / Anthropic, not mine". The AI Act defines the deployer (the operator of the system in production) as a regulated party independent of the model provider. You are responsible for how you integrate, what data you pass and how you use the response.

The 20 questions: does it apply to you?

Answer yes or no. Three or more "yes" answers in section B (nature of the system) mean you must initiate compliance now.

A. Territorial scope

  1. Does your product have end users in any EU/EEA country or the UK under reciprocal terms?
  2. Do you bill or sign contracts with companies based in the EU?
  3. Is your site/app available in EU languages, and does it accept payments in euros?
  4. Do you process data of people residing in the EU (even without a signed contract)?

B. Nature of the system

  1. Does your product use AI models (LLMs, classical ML, vision, voice) in decisions that affect end users?
  2. Do those decisions influence access to credit, employment, education, healthcare, housing, justice, migration or contract execution?
  3. Do you use biometrics (facial, voice, fingerprint) for identification or authentication?
  4. Do you generate synthetic content (text, image, voice) that could be confused with human-made?
  5. Does your system perform scoring, classification or personalized recommendation of users?

C. Current governance

  1. Do you have documented which AI models each feature uses?
  2. Do you store traces with inputs, outputs and decisions for every inference (with auditable retention)?
  3. Do you have a human review/appeal process for automated decisions?
  4. Does your team know what a "high-risk system" is under Annex III of the AI Act?
  5. Do you have a formal AI governance owner?

D. Data and training

  1. Did you train or fine-tune models with user data?
  2. Did you document the provenance and license of that data?
  3. Did you check for bias (gender, age, ethnicity) in model behavior?
  4. Do you have a process for users to request access/deletion of data used in inference?

E. Transparency

  1. Do you tell the user when they are interacting with AI (not only in T&C, but in the UI)?
  2. Do you clearly disclose when content was synthetically generated?
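The checklist above can be tallied in a few lines of Python. The section letters and the three-yes threshold for section B come from this article; the function name, data layout and the sample answers are illustrative assumptions.

```python
def assess_exposure(answers: dict[str, list[bool]]) -> dict:
    """Count 'yes' answers per section and flag whether compliance must start."""
    yes_per_section = {section: sum(a) for section, a in answers.items()}
    # The article's rule of thumb: 3+ "yes" in section B (nature of the system).
    must_act = yes_per_section.get("B", 0) >= 3
    return {"yes_per_section": yes_per_section, "must_act_now": must_act}

# Illustrative answers for a fictional fintech with EU end users.
answers = {
    "A": [True, True, False, True],         # territorial scope
    "B": [True, True, False, False, True],  # nature of the system
    "C": [False] * 5,                       # current governance
    "D": [False] * 4,                       # data and training
    "E": [False, False],                    # transparency
}
result = assess_exposure(answers)  # must_act_now -> True
```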

Risk classification: the 4 categories

The AI Act organizes systems into a pyramid:

Level | Examples | Obligations
Unacceptable risk (prohibited) | Government-style social scoring, subliminal manipulation, emotion recognition in the workplace or education | You may not operate it in the EU
High risk | Credit scoring, CV filtering, medical devices, biometrics | The full package: DPIA, registration, human oversight, traces
Limited risk | Chatbots, deepfakes, synthetic content | Transparency (inform the user)
Minimal risk | Spam filters, AI in video games | Voluntary

Most LATAM SMB exporters fall into high risk or limited risk. The burdens are very different.

The 6 technical obligations that matter

For high-risk systems (Articles 8-15 of the AI Act):

1. Quality management system (QMS)

Living documentation covering: architecture, models used, training datasets, performance metrics, bias tests, update procedures. Not a PDF — a versioned set of artifacts in Git.

2. Data and data governance

Datasets must be "relevant, representative, free of errors and complete." Requires documented provenance, statistical analysis and bias mitigation. For fine-tuning: clear license per source.

3. Technical documentation

A technical file that any authority can audit: architecture, design decisions, known limitations, metrics. Annex IV of the AI Act lists the required contents.

4. Automatic event logging

Each inference must leave an auditable trace, retained for at least 6 months: input, output, model version, user, timestamp. This is where self-hosted Langfuse earns its place — you can't depend on a US-hosted LangSmith if the data must stay in the EU.
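To make the obligation concrete, here is a framework-free Python sketch that appends one auditable trace per inference to a JSONL file. It illustrates the fields that must be captured; it is not the Langfuse API — in production you would point the Langfuse SDK at your self-hosted EU instance instead, and the path, function name and model names here are illustrative assumptions.

```python
import json
import time
import uuid
from pathlib import Path

LOG_DIR = Path("audit_logs")  # in production: an EU-region, append-only volume

def log_inference(user_id: str, model: str, model_version: str,
                  prompt: str, output: str) -> dict:
    """Append one auditable trace per inference: input, output, model version,
    user and timestamp -- the fields the obligation names."""
    record = {
        "trace_id": str(uuid.uuid4()),
        "ts": time.time(),  # retention policy must keep this >= 6 months
        "user_id": user_id,
        "model": model,
        "model_version": model_version,
        "input": prompt,
        "output": output,
    }
    LOG_DIR.mkdir(exist_ok=True)
    with (LOG_DIR / "traces.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```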

5. Transparency to the user

The end user must know they are interacting with an AI system, what decisions are made automatically and how they can appeal. In UIs: a visible component ("This assistant is AI") and a clear escalation path to a human.

6. Human oversight

Design that allows human intervention in material decisions. Not "a human looks at the log later"; rather "a human can change the decision before it takes effect."

Self-hosted stack for compliance

European firms sell "AI Act as a Service" at €300-500/hour. A LATAM SMB can build 80% of what it needs with free software and its own server.

Requirement | Recommended OSS stack | Why
Inference logging | Langfuse self-hosted | Apache 2.0; 6+ months retention; full export for audit
EU data residency | Droplet/VPS in Frankfurt, Paris or Amsterdam | All major clouds offer EU regions
On-prem vector DB | Qdrant in the same EU region | Sensitive data stays in the EU
Guardrails (content, PII, topic) | NeMo Guardrails + Guardrails AI | Automatic block of policy-violating responses
Bias detection | Fairlearn + Aequitas | Battery of pre-deploy metrics
Living docs | MkDocs + Git repo | Auditable versioning
Consent / access | GET and DELETE endpoints at /user/{id}/ai-data | Combined GDPR art. 15 + AI Act
Infra observability | Grafana + Prometheus | Evidence of uptime and performance
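The consent/access row in the table can be sketched as two plain handlers over an in-memory store. In a real service these would sit behind HTTP routes (e.g. FastAPI) in an EU region and persist to a database; the store layout and function names here are assumptions for illustration.

```python
# user_id -> list of inference records the system holds about that user
ai_data_store: dict[str, list[dict]] = {}

def record_inference(user_id: str, record: dict) -> None:
    """Called by the inference path to keep per-user records addressable."""
    ai_data_store.setdefault(user_id, []).append(record)

def get_user_ai_data(user_id: str) -> list[dict]:
    """GET /user/{id}/ai-data: GDPR art. 15-style access to everything held."""
    return list(ai_data_store.get(user_id, []))

def delete_user_ai_data(user_id: str) -> int:
    """DELETE /user/{id}/ai-data: erase the records, report how many were removed."""
    return len(ai_data_store.pop(user_id, []))
```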

The cost of a complete EU-resident stack is €120-180/month per environment. Compared with €30,000-80,000 for a last-minute audit + remediation, it's an order of magnitude better.
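As a taste of what the Fairlearn battery in the stack table reports, here is demographic parity difference computed by hand in pure Python (Fairlearn exposes the same metric as `fairlearn.metrics.demographic_parity_difference`). The decision data and group labels are illustrative.

```python
def selection_rate(y_pred: list[int], groups: list[str], group: str) -> float:
    """Share of positive (1) outcomes within one group."""
    outcomes = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(y_pred: list[int], groups: list[str]) -> float:
    """Max minus min selection rate across groups; 0.0 means parity."""
    rates = [selection_rate(y_pred, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Illustrative credit decisions: group A approved 3/4, group B approved 1/4.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(y_pred, groups)  # 0.75 - 0.25 = 0.5
```

A pre-deploy gate can then fail the release when the gap exceeds a documented threshold.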

Remediation calendar

If today is April 2026 and you have high-risk exposure:

Weeks 1-4 (April-May)

  • AI system inventory: which models, which data, which features depend on them.
  • Formal risk classification.
  • Gap analysis vs the 6 obligations.
  • Appoint a responsible owner (internal or fractional).

Weeks 5-8 (May-June)

  • Deploy logging stack in EU region.
  • Migrate sensitive data to providers with EU data residency.
  • Implement transparency in the UI (banners, notices, appeal paths).
  • Write the first version of technical documentation.

Weeks 9-12 (June-July)

  • Bias and robustness evaluations.
  • Internal audit dry run.
  • Close identified gaps.
  • Train the team (legal + product + engineering).

Weeks 13-14 (late July)

  • Change freeze.
  • Confirm with legal advisor.
  • Prepare authority-response packet.

Deadline: August 2, 2026. From that day on, any inspection or complaint can trigger sanctioning proceedings.

Common mistakes we see

  1. "But I don't sell to Europe directly." Your enterprise client who resells does. And that client will demand contractual compliance, just like they demand GDPR.
  2. "My model is OpenAI's, they are responsible." No. OpenAI complies as a foundation model provider; you comply as the system deployer.
  3. "Let's wait and see what happens after August." Fines apply from day one. There is no grace period for already-deployed systems.
  4. "We'll hire a European lawyer when the time comes." Audit and remediation timelines are 3-6 months. Hiring on July 15 is risky.
  5. "We have ISO 27001, that's enough." ISO 27001 covers information security. The AI Act requires ISO 42001 (AI Management System) or equivalent plus specific documentation.

Costs: what we see in real clients

Company size | Scope | Remediation cost
Startup (<20 people), one product, limited risk | Transparency + basic logging | $5,000-15,000 USD
SMB (20-200), several products, at least one high-risk | QMS, docs, guardrails, oversight | $15,000-60,000 USD
Scale-up (>200), multiple high-risk systems | Full program + ISO 42001 | $60,000-250,000 USD

Expected fine for non-compliance and inspection: minimum 2 million EUR + cessation of EU operations.

Remediation cost vs post-inspection fine exposure (USD)

[Chart: remediation cost vs realistic fine exposure for three profiles — Startup (limited risk), SMB (one high-risk), Scale-up (multiple high-risk). Remediation ranges measured across Numoru AI Act Diagnosis engagements; post-inspection exposure calibrated to published GDPR and expected AI Act enforcement ratios.]

Source: Numoru consulting data and the public GDPR fine register (CMS GDPR Enforcement Tracker, 2025).

Business & commercial impact

Why the buying window is now, not after August

Unlike soft regulations that drift, the AI Act has a hard date baked into the statute. The only variables a LATAM exporter can control are scope, remediation quality and timing. Every week past April 2026 compresses delivery and pushes cost up. Companies that start in June are routinely quoted 2-3× the April rate because consultants, DPOs and auditors go on retainer with bigger clients first.

Price curve for 'AI Act Diagnosis + Remediation Plan' (USD, 14-day engagement)

[Chart: monthly quotes from January 2026 through August 2026, on a $0-60k scale, given to a LATAM B2B SaaS with one high-risk use case by Numoru and 4 partner boutique consultancies. Prices rise as capacity dries up.]

Source: Numoru sales data + public quotes from 4 partner firms (2025-2026 Q1 sample).

Industries and ticket ranges

AI Act service pricing by buyer profile (Numoru, 2026):

  • Fintech (credit scoring): high-risk Annex III use case; requires QMS + bias testing + human oversight. $45,000-120,000 one-time remediation + $3-5k/mo.
  • HealthTech (medical device AI): full technical docs, DPIA, oversight, logging; notified-body path. $80,000-250,000 one-time + 18-month certification.
  • HR-tech (CV screening): classifier used in recruiting, an explicit high-risk category. $35,000-90,000 one-time + quarterly review.
  • B2B SaaS with AI features: usually limited risk; transparency UI + logging + DPA updates. $12,000-35,000 one-time + $1k/mo retainer.
  • E-commerce personalization: recommender systems, mostly limited risk; transparency + appeal path. $8,000-22,000 one-time.
  • Enterprise reseller of LLMs: deployer obligations for Claude / GPT / Gemini in client workflows. $20,000-60,000 one-time + templates.

Public benchmarks and enforcement references

Public case study: EU regulation · EU-27 · 2024

European Commission — AI Act enforcement architecture

Challenge: define how national authorities and the AI Office will split enforcement of the AI Act.
Solution: Regulation (EU) 2024/1689 and the AI Office set up the notified-body and national market-surveillance structure. Enforcement rolls out in phases: prohibitions Feb 2025, general-purpose model rules Aug 2025, full enforcement Aug 2026.
Results:
  • Max fine (prohibited practices): €35M / 7% of annual global revenue
  • Max fine (high-risk obligations): €15M / 3% (Art. 99(4))
  • Max fine (incorrect information): €7.5M / 1% (Art. 99(5))
Public case study: Global law firm · EU · 2024

CMS — GDPR Enforcement Tracker

Challenge: benchmark how aggressively EU DPAs enforce comparable digital legislation (GDPR).
Solution: a tracker of every public GDPR fine since 2018, used as a leading indicator for AI Act enforcement intensity.
Results:
  • Total GDPR fines to date: €5.65B (since 2018)
  • Largest single fine: €1.2B, Meta (Ireland DPA, 2023)
  • Fines above €10M: 100+ across all sectors
Public case study: UK data regulator · UK / EU reciprocal · 2024

ICO — case studies of AI regulatory action

Challenge: test how regulators prosecute AI systems even pre-AI-Act, and what evidence they demand.
Solution: published investigations (Clearview AI, Snap MyAI, various HR-tech tools) foreshadow how EU market surveillance will behave.
Results:
  • Clearview AI fine: £7.5M (UK ICO, 2022)
  • Enforcement actions on AI: 25+ (ICO + EU DPAs, 2020-2024)
  • Evidence demanded: docs, logs, DPIA, consistent across cases

Illustrative case — LATAM B2B SaaS with EU customers

Illustrative case: HR-tech · 85 employees · $12M ARR · Colombia → EU

Colombian SaaS (scheduling + HR features) serving 14 EU enterprise customers

Baseline
Two AI features: smart scheduling (limited risk) and CV ranking (high risk, Annex III). No inventory, no logging outside SaaS vendors. Legal unaware. Three EU customers already asked for "AI Act compliance statement" in renewal RFPs.
Intervention
Numoru 14-day diagnosis → remediation plan. 4-month sprint: EU-region Langfuse + Qdrant deploy, QMS in Git, Fairlearn bias suite, transparency UI banners, oversight review workflow. ISO 42001-aligned documentation.
Projected outcome (12 mo):
  • EU ARR at risk: $3.1M (before compliance)
  • EU ARR secured: $3.1M + $780k (renewals + 2 new enterprise deals)
  • Remediation cost: $58,000 one-time + $2.1k/mo steady state
  • Fine exposure avoided: ~€1.5M (realistic expected value, per enforcement history)
  • Time to close new EU RFPs: -42% (compliance statement unblocks procurement)
  • ROI: 13× (year-1 contribution vs cost)
Cost anchored to Numoru engagements; EU ARR and RFP velocity numbers calibrated to Chief Legal Officer surveys (Gartner 2024, IAPP 2024). Synthetic case — not a specific Numoru client.

ROI calculator — AI Act remediation (SMB high-risk)

Mid-market LATAM exporter: remediation vs status quo (18 months)

Payback: < 2

Assumptions:
  • EU ARR today: $2,800,000
  • EU ARR at risk if no compliance: $1,900,000
  • Blended EU gross margin: 68%
  • Probability of inspection (24 mo): 25%
  • Expected fine, weighted: $680,000
  • Deal velocity impact (new RFPs): +28%
  • Remediation scope: QMS + logging + bias + UI
  • Retainer after remediation: $2,400 / mo

Costs over 18 months:
  • Diagnosis + remediation (one-time): −$58,000
  • Ongoing retainer (18 mo × $2,400): −$43,200
  • Internal eng time (~320 h × $95): −$30,400
  • Infra (EU droplet + Langfuse + Qdrant, 18 mo): −$3,240

Benefits over 18 months:
  • EU ARR retained: +$1,900,000
  • Gross margin on retained ARR: +$1,292,000
  • Expected fine avoided: +$680,000
  • Incremental EU deals (velocity): +$420,000

Net 18-mo contribution: +$2,257,160

Pricing tiers Numoru sells

Diagnosis: $6,500 one-time
14 days. Scope + risk + plan.
  • 20-question exposure checklist
  • Risk classification per AI system
  • Gap analysis vs 6 obligations
  • Prioritized remediation roadmap
  • 2-hour executive workshop
  • Deliverable: 30-page PDF + Miro board

Remediation sprint: $35,000-120,000 one-time
8-16 weeks. Make the deadline.
  • EU-region OSS stack deployment
  • QMS + technical docs in Git
  • Bias + robustness testing
  • Transparency UI + appeal workflow
  • Team training (legal + product + eng)
  • Mock internal audit
  • Authority-response packet

Compliance retainer: $1,800-4,500 / month
After remediation. Keep it current.
  • Quarterly documentation review
  • New-feature AI-impact assessment
  • Langfuse retention & audit readiness
  • Regulatory change monitoring
  • Annual mock-audit dry run
  • Answer EU customer DPIA asks

Scale-ups with multiple high-risk systems or ISO 42001 scope: master contract from $180,000. Ticket grows with number of AI systems and EU revenue exposure.

What about other regulations?

  • ISO 42001 (AI Management System) — voluntary but useful certification; many EU tenders already ask for it as evidence of AI Act compliance.
  • GDPR — still applies and intersects: personal data used in training requires a GDPR legal basis AND AI Act evidence.
  • NIS2 — if you operate critical infrastructure, mandatory cybersecurity adds on top.
  • Mexico / Brazil / Colombia — local frameworks are aligning; implementing the AI Act puts you ahead for when your country regulates.

FAQ

If I'm an independent consultant and use AI to write reports for EU clients, does it apply?
As a personal tool, no. If you deliver automated outputs that influence the European end client's decisions, yes.

Does running models locally (Ollama, Llama) reduce my exposure?
On data, yes — you don't leave your infrastructure. On compliance, no — you're still the deployer and must document the same.

Can I keep using Anthropic / OpenAI if I have EU customers?
Yes, as long as you sign the corresponding DPAs and document that the provider meets model-provider obligations. Verify they offer an EU region for the endpoint.

Does the AI Act affect only generative AI?
No. Classical ML (scoring, classification) is also covered when it falls into high-risk categories.

What documentation should I have ready on August 2?
  1. System inventory
  2. Risk classification
  3. DPIA and AI Impact Assessment
  4. Per-system technical documentation
  5. Active logs
  6. UI with transparency
  7. Oversight and appeal procedures
  8. Incident response plan

How do I start with zero governance today?
With the 20-question checklist + gap analysis. Within a week you have clarity on scope and cost.

Next steps

If your company answers yes to three or more questions in section B and has EU exposure, the cost of waiting grows every week. Numoru's 14-day "AI Act Diagnosis" service includes a complete gap analysis and a prioritized remediation plan with a concrete OSS stack. The next article in this series details the technical implementation of auditable logging with Langfuse in an EU region.

Want results like these for your company?

Start a conversation