This article provides general information about the EU AI Act and its application to biometric processing at events. It does not constitute legal advice. For compliance advice specific to your organisation, consult a qualified legal specialist in AI regulation.
Since the EU AI Act entered into force in August 2024, procurement teams at European enterprises have begun including AI Act compliance questions alongside the GDPR documentation they routinely request from event tech vendors. For event organisers who use AI face recognition for photo delivery or check-in, the Act introduces a compliance layer that operates alongside GDPR rather than replacing it.
This guide covers what the EU AI Act is, why face recognition at events falls into the high-risk category, what that classification means for you as an event organiser and what to require from your photo platform vendor before August 2026.
What the EU AI Act is and when it applies
The EU AI Act is the world's first comprehensive legal framework specifically governing AI systems - not just the data those systems process, but the systems themselves. Where GDPR regulates the processing of personal data, the AI Act regulates the development and deployment of AI technology based on the risk it poses to people's rights and safety.
The Act applies to two categories of organisation: providers (vendors who build AI systems) and deployers (organisations that use AI systems in their operations). It applies when an AI system is placed on the EU market or when its use affects people located in the EU - which means it covers events held in Europe and can extend to events elsewhere whose AI system outputs are used in the EU.
Implementation is phased. Prohibitions on the highest-risk AI uses applied from February 2025. Obligations for high-risk systems under Chapter III - the category that covers face recognition - apply from August 2026. That deadline is close enough that organisations should be building compliant workflows now rather than waiting.
Why face recognition at events is classified as high-risk
The EU AI Act divides AI systems into four risk tiers: unacceptable risk (prohibited), high-risk, limited risk and minimal risk. Face recognition used to identify individuals sits in the high-risk tier by virtue of Annex III, Point 1(a), which lists remote biometric identification systems - covering both real-time and after-the-fact ("post") identification - as high-risk.
This classification applies regardless of consent or beneficial purpose. An AI photo distribution platform using face recognition to match guests with their event photos is, as a matter of law, a high-risk AI system under the Act. The classification follows the capability of the technology, not the intent behind its use. This is the key distinction between the EU AI Act and GDPR: GDPR's restriction on biometric processing can be lifted by valid explicit consent; the AI Act imposes structural obligations on the system itself that consent alone cannot discharge.
The practical effect is that any event organiser deploying face recognition for photo delivery or check-in is operating a high-risk AI system and must meet the obligations that follow from that classification.
What high-risk classification means for deployers
As an event organiser, you are the deployer. The photo platform is the provider. The Act assigns distinct obligations to each role, and understanding the boundary between them is important for knowing what you are responsible for and what you can require from your vendor.
Your obligations as a deployer include:
- Use the system according to the provider's instructions: You cannot use a high-risk AI system in ways the provider has not sanctioned. If the platform is built and documented for event photo matching, using it for unrelated identification purposes would put you outside the scope of permitted use.
- Monitor for unusual outputs: Deployers must have a process for identifying when the AI system produces results that appear anomalous - for example, a high rate of incorrect photo matches reported by guests. You are not expected to audit the model, but you are expected to act on signals that it is not performing as documented.
- Maintain basic logs of use: The Act requires deployers to retain the logs the system automatically generates, to the extent those logs are under their control, and to keep records sufficient to demonstrate that the system was used appropriately. For events, this means retaining the system logs alongside records of consent collected and any human review actions taken.
- Inform attendees they are interacting with a high-risk AI system: The consent screen shown to guests before selfie capture should explicitly state that an AI identification system is in use. A well-designed consent flow handles this alongside the GDPR explicit consent requirement - the two obligations can be satisfied in a single screen.
- Implement the human oversight mechanisms the system provides: High-risk AI systems must include a mechanism for human review of AI outputs. As a deployer, you need to have this mechanism in place and be able to demonstrate that guests can dispute incorrect matches.
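The logging and oversight duties above lend themselves to simple record-keeping structures. As a minimal sketch - the field names and record shapes are illustrative assumptions, not schemas prescribed by the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    # Illustrative fields only; the Act does not prescribe a schema.
    guest_id: str              # pseudonymous reference, not a name
    consented_at: datetime     # when explicit biometric consent was given
    consent_text_version: str  # which consent screen wording was shown

@dataclass
class DisputeRecord:
    # A human-review action taken on a reported incorrect AI match.
    guest_id: str
    disputed_photo_id: str
    reviewed_by: str           # the human reviewer, per the oversight duty
    outcome: str               # e.g. "match removed", "match confirmed"
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: logging a consent and a subsequent human review action.
consent = ConsentRecord("guest-7f3a", datetime.now(timezone.utc), "v2.1")
dispute = DisputeRecord("guest-7f3a", "photo-0042",
                        "ops@organiser", "match removed")
```

Records like these give a deployer something concrete to produce when asked to demonstrate appropriate use and a working dispute mechanism.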
You do not need to conduct your own conformity assessment - that is the provider's obligation. But you do need to be able to show that you are using the system as directed and have implemented the oversight mechanisms it provides.
What to require from your photo platform vendor
EU AI Act vendor checklist for event organisers
- Conformity assessment documentation for the face recognition system (required under Chapter III from August 2026)
- Technical documentation covering model architecture, training dataset provenance and demographic accuracy benchmarks
- Evidence of accuracy testing across age, gender and skin tone groups (Annex IV requirement)
- Clear statement of system limitations and known failure modes
- Data Processing Agreement covering GDPR special category data
- Consent screen that explicitly states an AI identification system is in use
- Human oversight mechanism: a process for guests to dispute incorrect matches
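One way to make the checklist auditable across vendors is to track each item as evidenced or missing. A hypothetical sketch - the item keys are paraphrases of the checklist above, not terms from the Act:

```python
# Checklist items a vendor must evidence (keys are illustrative paraphrases).
REQUIRED_ITEMS = {
    "conformity_assessment",
    "technical_documentation",
    "demographic_accuracy_testing",
    "limitations_statement",
    "gdpr_dpa",
    "ai_transparency_consent_screen",
    "human_oversight_mechanism",
}

def missing_items(vendor_docs: set[str]) -> set[str]:
    """Return the checklist items the vendor has not yet evidenced."""
    return REQUIRED_ITEMS - vendor_docs

# Example: a vendor with everything except conformity assessment
# documentation - a non-empty result is the material-risk flag.
gaps = missing_items(REQUIRED_ITEMS - {"conformity_assessment"})
```

A procurement team could run this per vendor and treat any gap in `conformity_assessment` as the disqualifying item the section above describes.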
Vendors who cannot provide conformity assessment documentation by August 2026 are operating a high-risk AI system without completing the required compliance steps. This is not a minor administrative gap - it is the central compliance obligation the Act places on providers of high-risk systems. Procurement teams should treat the absence of this documentation as a material risk.
The overlap with GDPR: two frameworks, one consent flow
The EU AI Act does not replace GDPR. Both frameworks apply in parallel and each addresses a distinct set of obligations.
GDPR requires explicit consent for biometric processing under Article 9(2)(a). The consent must be active, specific and informed - covering the purpose, the data controller, the retention period and the withdrawal mechanism. The EU AI Act requires transparency that an AI system is being used, so that individuals interacting with high-risk AI are not doing so without awareness.
A well-designed consent screen satisfies both frameworks in a single interaction. It captures explicit biometric consent under GDPR and informs the guest that an AI face recognition system is being used under the AI Act. These are complementary requirements, not duplicative ones. The practical design implication is straightforward: the consent screen must name the AI system and state its purpose alongside the standard GDPR consent elements it already needs to include.
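The combined content requirement can be checked mechanically. A sketch under the assumption that the consent screen's content is modelled as a set of named elements (the element names are illustrative):

```python
# Elements GDPR explicit consent requires (per the text above), plus the
# AI Act transparency element. Element names are illustrative assumptions.
GDPR_ELEMENTS = {
    "purpose",
    "data_controller",
    "retention_period",
    "withdrawal_mechanism",
}
AI_ACT_ELEMENTS = {"ai_system_disclosure"}  # names the AI system and its use

def screen_is_complete(screen_elements: set[str]) -> bool:
    """True when one screen covers both frameworks' content requirements."""
    return (GDPR_ELEMENTS | AI_ACT_ELEMENTS) <= screen_elements

# A GDPR-only screen fails; adding the AI disclosure makes it pass.
assert not screen_is_complete(GDPR_ELEMENTS)
assert screen_is_complete(GDPR_ELEMENTS | AI_ACT_ELEMENTS)
```

The design point mirrors the text: the AI Act adds one disclosure element to a screen that GDPR already requires, rather than a second screen.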
Event organisers who have already implemented GDPR-compliant consent flows for biometric processing are well-positioned to meet the AI Act transparency requirement with a targeted update to their consent screen wording.
Timeline and what to do now
The August 2026 deadline for high-risk system obligations is the hard compliance date, but the work to reach it needs to start well in advance. The conformity assessment process for a face recognition system involves technical documentation, accuracy testing and risk analysis - it is not something a vendor can produce in a matter of days.
The actions to take now are:
- Request compliance documentation from your photo platform. Ask specifically for their EU AI Act conformity assessment status and their timeline for completing documentation ahead of August 2026. A credible vendor will have a clear answer.
- Update your event privacy notices. Privacy notices for events using face recognition should reference both the GDPR biometric processing basis and the AI Act classification of the system in use. Generic privacy notices that do not mention AI systems will need updating.
- Confirm consent screens are AI Act-compliant. Review the wording on the selfie consent screen with your legal team to confirm it satisfies both the GDPR explicit consent standard and the AI Act transparency requirement.
From 2027 onward, expect national market surveillance authorities to begin enforcement actions against non-compliant high-risk AI deployments. The Act's enforcement mechanisms include fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher, for failures to comply with high-risk system obligations - making this a material financial risk for vendors as well as a compliance risk for deployers.
AI Act-ready photo delivery for your European events
Eventiere provides full conformity documentation, explicit consent flows covering both GDPR and EU AI Act requirements and human oversight mechanisms on every deployment.
Book a free demo