A three-day technology expo, eight photographers, an attendance of 24,000. Across the run, the photography team generates somewhere between 45,000 and 60,000 frames. The organiser has committed to a same-week gallery deadline. This is roughly the volume at which an unplanned workflow collapses. Frames go missing, photographers overwrite each other's cards, the editor has not seen the colour space brief, and on Tuesday the organiser is told the gallery will be late.

The events that do not collapse at this scale are not running heroic workflows. They are running planned ones. The plan covers five specific areas: on-site storage, photographer hand-off across shifts, real-time processing thresholds, quality control sampling and progressive delivery. None is glamorous. All have to be agreed before the event begins.

On-site storage and live backup

At 50,000 frames over three days, raw storage runs to roughly 1.5 to 2 TB depending on capture format. Each photographer should be running dual-card capture (SD plus CFexpress, or twin SD on mirrorless bodies) so a card failure does not lose frames. That covers the body. It does not cover loss in transit between photographer and editor.
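The 1.5 to 2 TB figure is simple arithmetic, sketched below. The 30 and 40 MB per-frame sizes are assumptions for typical compressed raw formats, not figures from any specific body:

```python
# Back-of-envelope raw storage estimate for a multi-day shoot.
# Assumes decimal units (1 TB = 1,000,000 MB).

def storage_tb(frames: int, avg_mb_per_frame: float) -> float:
    """Total raw storage in TB for a given frame count and average file size."""
    return frames * avg_mb_per_frame / 1_000_000

low = storage_tb(50_000, 30)   # ~30 MB raws (assumed: ~24 MP compressed)
high = storage_tb(50_000, 40)  # ~40 MB raws (assumed: higher-resolution bodies)
print(f"{low:.1f} to {high:.1f} TB")  # prints "1.5 to 2.0 TB"
```

Re-run the arithmetic with your own fleet's average file size before sizing the NAS; uncompressed raw roughly doubles these numbers.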

The pattern that works is a 30-minute mirror cycle: every half hour, a runner collects the secondary card from each photographer, mirrors it to a venue NAS, and returns the card. The primary card stays with the photographer. The NAS replicates to cloud storage in the background as bandwidth allows. The net effect is that no frame is more than 30 minutes from existing on at least three independent media: in-camera card, NAS, cloud. A card failure during the half-hour window costs at most one cycle's worth of frames, not a day's.
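The mirror step itself can be an idempotent copy: anything on the card that is not yet on the NAS gets copied, anything already mirrored on a previous cycle is skipped. A minimal sketch, assuming the card and the NAS share are both visible as local mounted paths (the paths and the size-comparison shortcut are assumptions; a production version would verify checksums):

```python
import shutil
from pathlib import Path

def mirror_card(card_root: Path, nas_root: Path) -> int:
    """Copy every file on the card that is not yet on the NAS.

    Idempotent: re-running after a partial or interrupted cycle copies
    only files that are new or incomplete. Returns the number copied.
    """
    copied = 0
    for src in card_root.rglob("*"):
        if not src.is_file():
            continue
        dest = nas_root / src.relative_to(card_root)
        # Skip files already mirrored on a previous cycle.
        # (Size match is a cheap heuristic; use checksums if paranoid.)
        if dest.exists() and dest.stat().st_size == src.stat().st_size:
            continue
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)  # copy2 preserves capture timestamps
        copied += 1
    return copied
```

Because the function only ever adds files, the runner can re-run it safely if a cycle is interrupted mid-copy.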

The NAS does not need to be expensive. A Synology or QNAP unit with two 4 TB drives in RAID 1 and a 10 GbE connection to the local switch will handle the throughput. The dedicated runner is the cheaper part of the bill.

Photographer hand-off and shared metadata

Multi-day expos involve shift changes. The photographer covering the morning keynote is not the photographer covering the evening networking session. Without a hand-off briefing, the new shift has no idea what has already been shot, what the sponsor logo placement is, which speaker has not been captured yet, or what the white-balance baseline is.

A useful brief document is one page per shift, covering: shot list status (what is done, what is outstanding), sponsor obligations (which logos need a clean frame at the next session), known issues from the previous shift, and the metadata schema. The schema matters disproportionately. If photographer A is tagging speaker portraits with "session-keynote-mon" and photographer B is tagging them with "mon_keynote_speaker", the editor cannot batch the gallery. A two-line agreed schema, set out before the event, removes hours of downstream cleanup.
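Agreeing a schema means agreeing something machine-checkable, so drift shows up at ingest rather than at batch time. A sketch of a validator for a hypothetical day-session-subject format (the actual schema is whatever the team agrees before the event; the point is that two lines of rule become one regex everyone's tags must pass):

```python
import re

# Hypothetical agreed schema: <day>-<session>-<subject>,
# lowercase alphanumerics, e.g. "mon-keynote-speaker".
TAG_RE = re.compile(r"(mon|tue|wed)-[a-z0-9]+-[a-z0-9]+")

def valid_tag(tag: str) -> bool:
    """True only if the tag matches the agreed schema exactly."""
    return TAG_RE.fullmatch(tag) is not None
```

Run it against each cohort at mirror time and the editor finds out about "mon_keynote_speaker" at the 30-minute mark, not on delivery night.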

Real-time clustering versus end-of-day batches

The choice between processing frames in real time and processing them in nightly batches is not arbitrary. Real-time clustering is appropriate when the event has time-sensitive delivery commitments (gallery up by midnight day one, day-two photos by 8 AM, etc.). Batch processing is appropriate when the event delivers a single combined gallery at the end.

For multi-day events, the real-time approach is almost always the right answer. The reason is recovery: if something goes wrong with day-one processing, you discover it on the morning of day two when you can still correct, not on Friday morning when the gallery is supposed to be live. Catching a colour-profile mismatch or a face-matching threshold error during the event is cheap. Catching it afterwards is not.

Quality control at scale

You cannot human-review 50,000 frames. Quality control at this scale is necessarily a sampling problem augmented by automated triage. The triage layer should reject obviously unusable frames before human review: out-of-focus frames (Laplacian variance below a configured threshold), severely under-exposed frames, frames with zero detected faces in a faces-required album, and near-duplicates of frames already accepted. A reasonable triage pass will remove 15 to 25% of raw captures before human review, leaving a much smaller pile.
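The sharpness test mentioned above is a variance-of-Laplacian check: an in-focus frame carries high-frequency detail, so the Laplacian response varies strongly; a defocused frame does not. A minimal sketch of that check plus the exposure floor, with assumed threshold values that would need tuning against the venue's own frames:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the 4-neighbour discrete Laplacian of a greyscale frame.

    Low values mean little high-frequency detail, i.e. likely out of focus.
    """
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def passes_triage(gray: np.ndarray,
                  mean_floor: float = 20.0,
                  sharpness_floor: float = 100.0) -> bool:
    """True if the frame goes to human review, False if auto-rejected.

    Both floors are illustrative defaults, not recommended values.
    """
    if gray.mean() < mean_floor:        # severely under-exposed
        return False
    if laplacian_variance(gray) < sharpness_floor:  # out of focus
        return False
    return True
```

Face-count and near-duplicate checks sit on top of this, but they need a detector and an embedding index rather than six lines of NumPy.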

Human sampling should target 2 to 5% of remaining frames stratified by photographer and by session. A reviewer looks at sampled frames per photographer per session and flags systemic issues: a photographer whose white balance has drifted, a session where the strobe failed and frames are uniformly dark. The reviewer is not trying to approve every frame. They are trying to identify cohorts that need a second pass.
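Stratifying by photographer and session is a grouping exercise rather than anything statistical: bucket the surviving frames, then sample a fixed rate within each bucket so no photographer-session pair escapes review. A sketch, with hypothetical field names:

```python
import random
from collections import defaultdict

def stratified_sample(frames: list[dict], rate: float = 0.03,
                      seed: int = 0) -> list[dict]:
    """Sample `rate` of frames within each (photographer, session) stratum.

    `frames` is a list of dicts with "photographer" and "session" keys
    (hypothetical field names). Every non-empty stratum contributes at
    least one frame, so a low-volume session is never skipped entirely.
    """
    strata: dict[tuple, list] = defaultdict(list)
    for f in frames:
        strata[(f["photographer"], f["session"])].append(f)
    rng = random.Random(seed)  # seeded so the sample is reproducible
    picked = []
    for group in strata.values():
        k = max(1, round(len(group) * rate))
        picked.extend(rng.sample(group, k))
    return picked
```

At a 3% rate over a 40,000-frame post-triage pile this yields roughly 1,200 frames to review, which one person can cover in an evening.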

Case Study - Tech Expo, India

Bengaluru Tech Expo - 3 days, 24,000 attendees, 52,000 frames

A three-day technology expo with a mixed delegate and trade-public audience. The previous edition had used a single end-of-event gallery release four days after the show. Engagement metrics were poor: opens by day five had dropped to 11%, social shares were negligible, and a follow-up survey identified delayed photos as the most-cited complaint after venue catering.

The 2025 edition used 30-minute card mirroring to a venue NAS, per-shift photographer briefings, real-time clustering across all three days, automated triage that removed 19% of frames before human review, and progressive delivery: day one photos went live by 11 PM on day one, day two by 9 AM on day three, day three by midnight on day three.

By the morning after the closing day, 87% of registered attendees had opened their personal gallery. Aggregate share count was 4.2x higher than the previous year. The organiser cited the photo experience as the single biggest positive shift in attendee NPS year-on-year.

87% gallery open rate
4.2x YoY share increase
19% frames auto-triaged

Progressive delivery beats waiting for completeness

The strongest instinct of organisers running their first high-volume event is to wait until everything is processed before publishing. It is the wrong instinct. Day-one attendees have travelled home by the morning of day two. They want to see their photos on the train. A partial gallery of day-one frames is dramatically more valuable to them than a complete gallery of all three days delivered on Friday.

Progressive delivery requires the gallery infrastructure to support incremental publishing: photos added to the live album as they finish processing, attendees notified of newly-available photos through a single end-of-day digest rather than per-photo. The notification frequency matters. A guest who receives one email per day with "47 new photos of you have been added" engages with each one. A guest who receives twelve emails over three days stops opening.
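The digest itself is a straightforward collapse of per-photo events into one notification per guest per day. A sketch, with hypothetical data shapes, to show where the batching sits:

```python
from collections import defaultdict

def daily_digest(events: list[tuple]) -> dict:
    """Collapse per-photo events into one digest per (guest, day).

    `events` is a list of (guest_id, day, photo_id) tuples, a
    hypothetical shape for whatever the matching pipeline emits.
    Returns one message body per guest per day.
    """
    buckets: dict[tuple, list] = defaultdict(list)
    for guest, day, photo in events:
        buckets[(guest, day)].append(photo)
    return {key: f"{len(photos)} new photos of you have been added"
            for key, photos in buckets.items()}
```

The pipeline can append events all day; the digest job runs once per evening, so the guest receives one email however many times their face was matched.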

The no-perfect-set rule: a partial set delivered on schedule outperforms a perfect set delivered late, by a large margin. At the volumes discussed here, you will never have a perfectly culled, perfectly tagged, perfectly white-balanced complete set. You will have a 95%-finished set on schedule or a 99%-finished set a week late. The 95% set on schedule wins on every measurable engagement metric.

What to standardise before the next event

If you run high-volume events repeatedly, the operational gains come from standardising the parts that do not need to be reinvented. A reusable photographer brief template, a standing card-mirror procedure, a default triage threshold set, an agreed metadata schema. Each of these turns the next event from a project into an execution. The first event after standardisation costs more than usual. Every subsequent event costs less.

Running a multi-day event in the next 90 days?

We can run through your shoot list, hand-off plan and delivery schedule and tell you what we would change. No commitment.

Book a free demo