In computer vision, algorithmic searches often aim to identify an object within large sets of images or videos. To reduce the often astronomical complexity of such a search, one can use pruning to filter out sufficiently distinct objects, yielding a pruning gain in the form of a reduced overall search space.
Motivated by practical scenarios such as time-constrained human identification in biometric-based video surveillance systems, we analyze the stochastic behavior of time-restricted search pruning over large, unstructured data sets that are random and varying, and in which pruning itself is not fully reliable but is instead prone to errors. In this stochastic setting we explore the natural tradeoff between pruning gain and reliability. We first provide an average-case analysis of the problem and then, using large-deviations and informational-divergence techniques, study the atypical gain-reliability behavior, giving insight into how often pruning might fail to substantially reduce the search space. The simplicity of the obtained expressions allows for a rigorous and insightful assessment of the pruning gain-reliability behavior, as well as for intuition into designing general object recognition systems.
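To make the gain-reliability tradeoff concrete, the following minimal Monte Carlo sketch (not taken from the paper) assumes a simple Bernoulli model of error-prone pruning: each non-target candidate survives the filter with probability q, and the true target is erroneously discarded with probability eps. All names and parameter values (n, q, eps, threshold) are hypothetical illustration choices.

```python
# Illustrative sketch: error-prone pruning over a database of n candidates
# containing one true target. We estimate (i) the average pruning gain
# n / E[retained set size], (ii) the reliability P(target survives), and
# (iii) the atypical-event probability that pruning fails to shrink the
# search space below a fraction `threshold` of its original size.
import random

def simulate(n=10_000, q=0.05, eps=0.02, threshold=0.055, trials=2_000, seed=0):
    rng = random.Random(seed)
    retained_sizes = []
    target_survived = 0
    atypical = 0  # trials where the retained fraction exceeds `threshold`
    for _ in range(trials):
        # non-targets kept by the imperfect filter
        survivors = sum(1 for _ in range(n - 1) if rng.random() < q)
        keeps_target = rng.random() >= eps  # target erroneously pruned w.p. eps
        size = survivors + int(keeps_target)
        retained_sizes.append(size)
        target_survived += keeps_target
        atypical += (size / n) > threshold
    avg_size = sum(retained_sizes) / trials
    print(f"average pruning gain          : {n / avg_size:.1f}x")
    print(f"reliability (target survives) : {target_survived / trials:.3f}")
    print(f"P(retained > {threshold:.1%} of set)  : {atypical / trials:.4f}")

if __name__ == "__main__":
    simulate()
```

In this toy model the atypical event of a poorly reduced search space becomes rapidly rarer as the threshold moves away from the mean retained fraction q, which is the flavor of behavior the large-deviations analysis in the paper quantifies exactly.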