Nighttime Pedestrian Visibility Reconstruction
A pedestrian strike was reconstructed in full CG to demonstrate nighttime visibility using lab-tested headlights, aerial LiDAR, and expert-calibrated camera settings. The resulting animation, admitted into federal court, provided a scientifically accurate visualization of what the driver could have seen at the time of the crash.
The incident occurred on a dark, cold, wet night as a man crossed the street away from a designated crosswalk. Although he was not in a crosswalk, the nearest one was impractically far away, making a mid-block crossing reasonable. The driver claimed he did not see the pedestrian because he was checking his side mirror to merge, but we demonstrated that had he been looking forward, he would have seen the pedestrian in time to avoid the crash. Our strategy centered on establishing what was visible within the driver's available field of view, without assuming where he was looking, using accurate and reliable data to recreate what could have been seen.
We visited the scene during the day to scan existing conditions and capture photographic documentation, then returned at night with traffic control in place to eliminate outside light sources. Working with an accident reconstructionist, we positioned an exemplar truck at one-second intervals along the path of travel leading to impact, with a surrogate pedestrian placed at the corresponding points along his path. A DSLR camera was mounted at the driver's eye position inside the cab, behind the windshield, to replicate the driver's vantage point. To ensure the captured data matched human visual perception, exposure was calibrated against a contrast sensitivity chart: a large calibrated monitor displayed the live camera feed while a human factors expert viewed both the monitor and the real scene simultaneously, comparing the chart placed at the surrogate pedestrian's location at each station. I adjusted the camera settings until the number of visible grayscale targets on the screen matched what the expert could perceive with his own eyes through the windshield. Only after confirming this match did we replace the chart with the surrogate and record the image, which became the reference for the animated visualization of the vehicle in motion.
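The matching criterion above, the number of chart targets visible on screen versus in the real scene, can be sketched in code. This is a minimal illustration, not the actual calibration software: the `weber_contrast` metric, the luminance values, and the visibility threshold are all assumptions chosen for demonstration, and in practice the judgment was made by the human factors expert, not a formula.

```python
def weber_contrast(target_luminance, background_luminance):
    """Weber contrast of a target against its background (assumed metric)."""
    return (target_luminance - background_luminance) / background_luminance

def visible_target_count(target_luminances, background_luminance, threshold):
    """Count grayscale targets whose contrast exceeds a visibility threshold.

    The exposure would be adjusted until this count, computed from on-screen
    luminances, equals the count the expert perceives through the windshield.
    """
    return sum(
        1 for lum in target_luminances
        if abs(weber_contrast(lum, background_luminance)) >= threshold
    )

# Illustrative chart: four patches over a 10 cd/m^2 background, 5% threshold.
print(visible_target_count([12.0, 11.0, 10.5, 10.2], 10.0, 0.05))  # → 3
```

The key design point is that both sides of the comparison reduce to a single integer (targets visible), which is what lets the camera settings be tuned objectively against a human observer.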
We also tested the truck's headlights in a certified laboratory so that their beam spread and intensity could be replicated in the animation. Using aerial LiDAR and dash-cam video from the time of the crash, we reconstructed details that had changed between the incident and our inspection, including buildings that had since been demolished or newly constructed, as well as roadside features. The end result was a visual reconstruction grounded in physical data and expert review, offering a scientifically accurate representation of what was visible to the driver that night. This presentation will emphasize the importance of pairing high-fidelity visuals with credible methodology, giving jurors an unbiased and reliable account of the events.
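Replicating a lab-tested beam pattern in an animation typically means mapping measured luminous intensity (candela) at each beam angle to illuminance (lux) on surfaces in the scene via the inverse-square law. The sketch below shows that conversion with linear interpolation over an angle-indexed table; the table values are illustrative placeholders, not the actual photometric test data, and real goniophotometer output would be two-dimensional (horizontal and vertical angles).

```python
# Illustrative beam table: angle off beam axis (degrees) -> intensity (candela).
# These numbers are made up for demonstration, not lab results.
beam_table = {0.0: 20000.0, 2.5: 12000.0, 5.0: 4000.0, 10.0: 500.0}

def intensity_at(angle_deg):
    """Luminous intensity at an angle, linearly interpolated from the table."""
    angles = sorted(beam_table)
    if angle_deg <= angles[0]:
        return beam_table[angles[0]]
    if angle_deg >= angles[-1]:
        return beam_table[angles[-1]]
    for lo, hi in zip(angles, angles[1:]):
        if lo <= angle_deg <= hi:
            frac = (angle_deg - lo) / (hi - lo)
            return beam_table[lo] + frac * (beam_table[hi] - beam_table[lo])

def illuminance(angle_deg, distance_m):
    """Illuminance (lux) at a point: inverse-square falloff of intensity."""
    return intensity_at(angle_deg) / distance_m ** 2

print(illuminance(0.0, 10.0))  # → 200.0 lux on-axis at 10 m
```

Driving the rendered light source from measured data in this way, rather than from an artist's eyeballed settings, is what lets the animation's brightness claims survive scrutiny.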
PHOTO
RENDERING