A neural-network-based 'virtual radar' retrieval has been trained and internally validated, using multifrequency, multipolarization passive microwave (TMI) brightness temperatures, texture parameters, and lightning (LIS) observations as inputs, and PR volumetric reflectivity as targets (outputs). By training the algorithms (essentially highly multivariate, nonlinear regressions) on a very large sample of high-quality co-located data from the center of the TRMM swath, 3D radar reflectivity and derived parameters (VIL, IWC, echo tops, etc.) can be retrieved across the entire TMI swath, accurate to 8-9% over the dynamic range of these parameters. As a step in the retrieval (and as an output of the process), each TMI multifrequency pixel (at 85 GHz resolution) is classified into one of 25 archetypal radar profile vertical structure "types", previously identified using cluster analysis (Boccippio et al., J. Climate, 2005). The dynamic range of retrieved vertical structure appears to have higher fidelity than the current (Version 6) experimental GPROF hydrometeor vertical structure retrievals. This is attributable to correct representation of the prior probabilities of vertical structure variability in the neural network training data, in contrast to the cloud-resolving-model training dataset used in the V6 GPROF algorithms. The LIS lightning observations are supplementary inputs, and a separate offline neural network has been trained to impute (predict) LIS lightning from passive-microwave-only data; the virtual radar retrieval is thus, in principle, extensible to the Aqua/AMSR-E and NPOESS/CMIS passive microwave instruments.
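For concreteness, a minimal sketch of the retrieval structure described above: one network classifies each pixel into one of the 25 archetypal profile types, and a second performs the highly multivariate, nonlinear regression from passive-microwave inputs to a reflectivity profile. The use of scikit-learn, the layer sizes, the feature count, and the synthetic placeholder arrays are all illustrative assumptions, not the operational implementation; the real training sample is the co-located TMI/LIS/PR dataset described above.

import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)

# Placeholder stand-ins for the co-located training sample:
# inputs = TMI brightness temperatures/polarizations, texture parameters, LIS lightning;
# targets = PR volumetric reflectivity profile (dBZ) and archetypal profile type.
n_pixels, n_features, n_levels, n_types = 5000, 12, 20, 25
X = rng.normal(size=(n_pixels, n_features))              # passive-microwave + lightning features
Z = rng.normal(size=(n_pixels, n_levels))                # PR reflectivity profile per pixel
profile_type = rng.integers(0, n_types, size=n_pixels)   # archetypal vertical-structure class

# Step 1: classify each TMI pixel (85 GHz resolution) into one of 25 profile "types".
classifier = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
classifier.fit(X, profile_type)

# Step 2: nonlinear multivariate regression to the full reflectivity profile,
# from which derived parameters (VIL, IWC, echo tops) can be computed.
regressor = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=300)
regressor.fit(X, Z)

retrieved_types = classifier.predict(X)      # archetypal type per pixel
retrieved_profiles = regressor.predict(X)    # "virtual radar" reflectivity profiles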
The virtual radar approach yields a threefold increase in effective sampling from the mission, albeit of lower-quality "retrieved" data, reducing the variance of local estimates to one third of the radar-only value (the standard deviation to a factor of ~0.58, i.e., 1/sqrt(3)). In this talk, the variance reduction is leveraged to more finely resolve global diurnal variability in both space and time (local hour).
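The sampling arithmetic behind this claim, stated in idealized form (assuming N independent, identically distributed local estimates with variance \sigma^2; the abstract does not spell out these assumptions):

\mathrm{Var}(\bar{x}_{3N}) = \frac{\sigma^2}{3N} = \frac{1}{3}\,\mathrm{Var}(\bar{x}_{N}),
\qquad
\sigma(\bar{x}_{3N}) = \frac{\sigma(\bar{x}_{N})}{\sqrt{3}} \approx 0.58\,\sigma(\bar{x}_{N}).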