In this project we use the mechanics of explosively driven cloud rise, governed by buoyancy and viscosity, to analyze benchmark data created by sampling historic digitized scientific nuclear weapons test films. The resulting research database consists of dimensional data carefully obtained with analysis tools developed by the author. The collection methodology allows useful conclusions to be drawn from historically neglected photographic evidence.
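For orientation, the buoyant acceleration of a hot cloud parcel of density \(\rho_c\) rising through ambient air of density \(\rho_a\) follows directly from Archimedes' principle; this is the generic form of the buoyancy term, not the specific cloud rise model applied later in this work:

\[
a_b \;=\; g\,\frac{\rho_a - \rho_c}{\rho_c},
\]

where \(g\) is the gravitational acceleration. Viscous effects oppose this acceleration and limit the observed rise rate.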
A data-sampling graphical user interface is developed, and 18 films from 6 nuclear detonations of Operation Teapot are sampled and analyzed. The data are compared with historical theodolite measurements and film examinations. Comparisons are also made to cloud rise codes using a figure-of-merit system. Errors from film aging, film scanning, data sampling, and analysis are propagated in quadrature, and their combined effect is quantified.
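Here "in quadrature" means the root-sum-square combination of independent error sources; with \(\sigma\) denoting the one-sigma uncertainty contributed by each source (the subscript names are illustrative), the combined uncertainty is

\[
\sigma_{\text{total}} \;=\; \sqrt{\sigma_{\text{aging}}^{2} + \sigma_{\text{scanning}}^{2} + \sigma_{\text{sampling}}^{2} + \sigma_{\text{analysis}}^{2}}.
\]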
From 1945 to 1962 the US government conducted 210 atmospheric nuclear weapon tests. Each test was designed to meet specific objectives, and hundreds of experiments were conducted to gather invaluable data that shaped our understanding of nuclear weapon effects. Each test shot was filmed by a variety of cameras at multiple locations, viewing angles, and distances from the detonation. The resulting films can be divided into categories such as fireball, early cloud, late cloud, and military effects testing.
The magnitude of this endeavor was greater than that of the space program, and repeating it today would cost more than the national debt. The surviving films from this project are priceless. Given the decomposition of organic film as it ages, the data associated with these tests will eventually become so degraded that they are no longer of use.
Since 2011 Lawrence Livermore National Laboratory has been digitizing these films in an effort to preserve and analyze nuclear weapon effects data. This presents the unique opportunity of a modern evaluation of these films using digital image processing techniques developed over the last 30 years, in an environment of mass data storage and multi-core computing.
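As a minimal sketch of what such a digital evaluation can look like, the snippet below estimates the vertical extent of a bright cloud region in a single digitized grayscale frame by intensity thresholding; the function name, threshold value, and calibration note are illustrative assumptions, not the tools developed in this work.

```python
import numpy as np

def cloud_height_pixels(frame: np.ndarray, threshold: int = 200) -> int:
    """Return the vertical pixel extent of bright (cloud) pixels in a frame.

    frame: 2-D grayscale array (rows x cols), e.g. one digitized film frame.
    threshold: intensity above which a pixel is treated as part of the cloud
               (an assumed, illustrative cutoff).
    """
    mask = frame >= threshold                # bright pixels -> candidate cloud
    rows = np.flatnonzero(mask.any(axis=1))  # indices of rows with any cloud pixel
    if rows.size == 0:
        return 0
    return int(rows[-1] - rows[0] + 1)       # top-to-bottom extent in pixels

# Example with a synthetic 100x100 frame containing a bright blob:
frame = np.zeros((100, 100), dtype=np.uint8)
frame[20:65, 40:60] = 255
print(cloud_height_pixels(frame))  # -> 45

# A pixel extent becomes a physical dimension through a scale factor
# (meters per pixel) fixed by camera geometry and range to the detonation;
# in practice that factor comes from the film's calibration data.
```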