Most event photographers think of their job as ending when the last card is handed over. Most event organisers think of photo distribution as something that happens a few weeks later, when the edited images finally arrive in a shared folder. The gap between those two assumptions is where event photo ROI quietly disappears.
The processing pipeline that powers AI photo distribution is built to collapse that gap entirely, from a photographer uploading their first batch of images mid-event to every guest having a personalised gallery link in their inbox before the night is out. Here is how that pipeline actually works, step by step, and why each stage matters.
Stage 1: The upload queue
The pipeline begins the moment a photographer tethers their camera, opens their laptop, or uses the Eventiere uploader app on-site. Photos do not need to be edited, culled, or renamed. The system accepts raw JPEG exports directly from Lightroom, direct card imports, or wireless tethering output.
Uploads are queued and prioritised in order of arrival: the first batch a photographer uploads at 7 PM triggers the first round of processing while they are still shooting at 8 PM. This parallel operation is deliberate: the system does not wait for the full card to be uploaded before it starts working. As soon as a batch of images clears the upload buffer, it enters the processing queue.
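The batch-wise pipelining described above can be sketched with a simple producer-consumer queue. This is an illustrative model only, not Eventiere's implementation: a worker thread processes each batch as it clears the upload buffer while further uploads continue to arrive.

```python
import queue
import threading

# Batches of filenames are enqueued as they arrive; a worker drains the
# queue in arrival order while uploads continue in parallel.
upload_queue: "queue.Queue[list[str] | None]" = queue.Queue()
processed_batches: list[list[str]] = []

def process_worker() -> None:
    while True:
        batch = upload_queue.get()
        if batch is None:  # sentinel: no more uploads
            break
        # Stand-in for the real per-batch work (detection, embedding).
        processed_batches.append([name.upper() for name in batch])

worker = threading.Thread(target=process_worker)
worker.start()

# Uploads land in batches while the worker is already running.
upload_queue.put(["img_0001.jpg", "img_0002.jpg"])
upload_queue.put(["img_0003.jpg"])
upload_queue.put(None)
worker.join()
```

The key property is that the first batch is fully processed regardless of whether the last batch has even been shot yet.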
For large events with multiple photographers, uploads from different cameras are merged into a single event pool. Duplicate detection runs automatically: if two photographers shoot the same moment from different angles, the system identifies the near-duplicates and retains both, flagging them for later curation if needed.
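Near-duplicate flagging can be sketched as a pairwise distance check over image feature vectors. This is a toy illustration, not the production algorithm: the 2-D vectors and the threshold value are assumptions standing in for real image features.

```python
import math

def flag_near_duplicates(features: list[tuple[float, ...]],
                         threshold: float = 0.15) -> list[tuple[int, int]]:
    """Return index pairs of frames whose feature vectors are very close.

    Both frames in a flagged pair are retained, as described above;
    the pair is merely marked for later curation.
    """
    flagged = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if math.dist(features[i], features[j]) <= threshold:
                flagged.append((i, j))
    return flagged

# Frames 0 and 1 are near-identical shots of the same moment.
pairs = flag_near_duplicates([(0.0, 0.0), (0.05, 0.0), (1.0, 1.0)])
```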
Stage 2: Face detection pass
Every uploaded image enters face detection immediately. This stage does not identify who anyone is; it simply locates faces within each frame, extracts bounding-box coordinates, and assesses quality scores for each detected face region: sharpness, lighting, frontal angle and occlusion level.
Images with no detected faces (venue shots, food photography, stage wide-angles) are flagged and routed to the event's general album rather than any personalised gallery. Images with multiple faces are split into individual face crops, each entering the next stage of the pipeline independently.
Quality thresholds matter here. A face that is significantly blurred, turned more than 45 degrees, or smaller than a minimum pixel size is excluded from identity matching, not from the photo itself. The image remains available; it simply will not be attributed to an individual guest unless a higher-confidence match exists elsewhere in the event.
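A quality gate of this kind reduces to a few threshold checks per detected face. The field names and threshold values below are illustrative assumptions, not Eventiere's actual parameters:

```python
def eligible_for_matching(face: dict,
                          min_sharpness: float = 0.4,
                          max_yaw_deg: float = 45.0,
                          min_px: int = 80) -> bool:
    """Decide whether a detected face may enter identity matching.

    Faces that fail remain in the photo; they are simply never
    attributed to a guest. All thresholds here are assumptions.
    """
    return (face["sharpness"] >= min_sharpness
            and abs(face["yaw_deg"]) <= max_yaw_deg
            and min(face["width_px"], face["height_px"]) >= min_px)
```

A sharp, frontal, reasonably sized face passes; a face turned 60 degrees from the camera is excluded even if otherwise perfect.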
Stage 3: Embedding generation
For each face that passes the quality threshold, the system generates a mathematical embedding, a 512-dimensional vector that encodes the unique geometry of that face. These embeddings are produced by an ArcFace model, a deep learning architecture trained specifically for face recognition accuracy. The embedding is not a photo; it is a compact numerical signature that can be compared against other embeddings at high speed.
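Comparing two such embeddings is typically done with cosine similarity: values near 1.0 suggest the same face, values near 0 suggest different people. A minimal sketch (using 4-dimensional toy vectors in place of the real 512-dimensional ones):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors.

    Real face embeddings are 512-dimensional; short vectors are used
    here purely for brevity.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Because the comparison is a handful of arithmetic operations on compact vectors rather than any pixel-level analysis, millions of comparisons per second are feasible on ordinary hardware.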
Embedding generation is computationally the most intensive stage in the pipeline. At scale, processing 10,000 images with an average of 2.3 detected faces per image, this means generating embeddings for approximately 23,000 face instances per event. Modern GPU-accelerated infrastructure processes this at a rate that keeps the pipeline moving in near real time as new uploads arrive.
Pipeline benchmark: In a recent 10,000-photo event upload, face detection and embedding generation for the first 1,000 images completed in under 4 minutes. The full batch of 10,000 images, including detection, embedding, clustering and gallery assembly, finished within 47 minutes of the final photo upload. First personalised gallery notifications were sent at the 52-minute mark.
Stage 4: Cluster matching
Embeddings are grouped using a clustering algorithm, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), that finds which face instances across thousands of photos belong to the same person without requiring any prior identity information. Two face embeddings that are close enough in vector space (within a configurable distance threshold) are assigned to the same cluster, which becomes a candidate identity within the event.
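To make the mechanics concrete, here is a minimal, self-contained DBSCAN over 2-D points (real face embeddings are 512-dimensional, and production systems use optimised library implementations; this sketch only illustrates the density-based grouping and the "noise" label for faces that match nothing):

```python
import math

def dbscan(points: list[tuple[float, float]],
           eps: float, min_pts: int) -> list[int]:
    """Label each point with a cluster id, or -1 for noise."""
    labels: list[int | None] = [None] * len(points)

    def neighbours(i: int) -> list[int]:
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # too sparse: noise (unmatched face)
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                # expand the cluster outwards
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point absorbed into cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbours(j)
            if len(jn) >= min_pts:  # core point: keep expanding
                seeds.extend(jn)
        cluster += 1
    return [lbl if lbl is not None else -1 for lbl in labels]

# Two tight groups of "faces" plus one isolated outlier.
labels = dbscan([(0.0, 0.0), (0.1, 0.0), (0.2, 0.0),
                 (5.0, 5.0), (5.1, 5.0), (5.0, 5.1),
                 (10.0, 10.0)],
                eps=0.5, min_pts=2)
```

The important property for this pipeline is that DBSCAN needs no predeclared number of people: clusters emerge from the data, and stray low-quality detections fall out as noise rather than polluting a guest's gallery.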
When a guest registers for an event and submits a selfie, their own face embedding is computed and stored against their registration record. At match time, their registration embedding is compared against every cluster in the event's face pool. The cluster whose centroid is closest to the registration embedding, and within the matching threshold, is assigned to that guest. All photos in that cluster become part of their personalised gallery.
Cluster matching is highly tolerant of lighting variation, minor pose changes and the natural differences between a registration selfie taken in a controlled setting and candid event photography. The threshold is tunable per event type: tighter for smaller, intimate events where false positives are more disruptive; slightly more permissive for large outdoor events where lighting conditions vary significantly.
Stage 5: Gallery assembly
Once a guest's cluster is identified, their personalised gallery is assembled dynamically. The gallery includes every photo where that guest appears, sorted by timestamp. For events with multiple albums (keynote stage, networking reception, gala dinner), photos are automatically organised into the appropriate album context, so the guest's gallery reflects the event structure rather than a flat chronological dump.
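Assembly itself is a grouping-and-sorting step. A minimal sketch, with illustrative field names (`filename`, `album`, `timestamp` are assumptions):

```python
from collections import defaultdict

def assemble_gallery(photos: list[dict]) -> dict[str, list[str]]:
    """Group a guest's matched photos by album, with each album's
    photos ordered by capture time."""
    albums: dict[str, list[str]] = defaultdict(list)
    for photo in sorted(photos, key=lambda p: p["timestamp"]):
        albums[photo["album"]].append(photo["filename"])
    return dict(albums)

gallery = assemble_gallery([
    {"filename": "b.jpg", "album": "Gala dinner", "timestamp": 3},
    {"filename": "a.jpg", "album": "Keynote", "timestamp": 1},
    {"filename": "c.jpg", "album": "Keynote", "timestamp": 2},
])
```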
Gallery assembly also applies event branding: the organiser's logo, accent colour and any sponsor co-branding set up in the event configuration are applied to the gallery header, download watermarks (if enabled) and the delivery email template. Guests receive a branded experience, not a generic photo folder.
Stage 6: Delivery trigger
The organiser controls when delivery is triggered. Most events use one of three configurations: automatic delivery (gallery links sent as soon as a guest's cluster is confirmed), batch delivery at a fixed time (e.g., 10 PM on the night of the event), or manual trigger (the organiser clicks Send when they are satisfied with coverage). All three modes are available per event.
The delivery email is personalised: it addresses the guest by name, previews two or three of their photos and includes a direct link to their gallery. No account creation is required. The link is secure and time-limited by default, though organisers can configure extended access periods for events where guests may want to return to their gallery weeks later.
Tip for organisers: For black-tie or gala events, set delivery to trigger at 10 PM, after dinner but while guests are still at the venue. Gallery open rates for same-evening delivery average 74% within the first two hours, compared to 31% for next-morning delivery. Guests share photos while the emotional memory of the evening is still live.
Why speed matters: the emotional window
The technical efficiency of the pipeline matters because of a psychological reality: event memories fade, and the social impulse to share them fades even faster. Research on social media sharing behaviour consistently shows that content shared within 24 hours of an experience generates significantly higher engagement than the same content shared a week later. For events, the window is even tighter: the peak sharing impulse is in the hours immediately following the event, while guests are still in transit home and their networks are still primed by the event hashtag.
A photo delivered at 11 PM on the evening of an event will be shared, liked and commented on by an audience that was also at the event or has seen it discussed on social media that day. The same photo delivered four days later lands in a cold feed with no contextual momentum behind it.
This is why the pipeline is engineered for speed rather than merely accuracy. Accuracy at 99% delivered in three days is less valuable than accuracy at 97% delivered in 47 minutes, because the three-day version misses the window entirely for most guests.
SLA benchmarks
Eventiere operates to the following processing benchmarks for standard events:
- First photos processed: within 5 minutes of the first upload batch
- First personalised galleries available: within 45 minutes of the first upload for events up to 2,000 guests
- Full event processing complete: within 90 minutes of the final photo upload for events up to 15,000 photos
- Delivery notification sent: within 2 minutes of the organiser triggering delivery or the automatic trigger condition being met
For events where the photographer uploads in batches throughout the day (common for conferences and multi-session corporate events), guests who attended morning sessions can receive their gallery before the afternoon programme begins. The pipeline processes each batch as it arrives, so early arrivals to a gallery are not waiting for the last card to be processed.
See the pipeline in action for your next event
Book a demo and watch Eventiere process a live upload, from photographer card to guest inbox, in real time.
Book a free demo