The radiances are bias corrected before being ingested into the 1DVAR algorithm, in order to remove systematic differences between the measurements and the CRTM simulations that would otherwise lead to retrieval biases. The current radiometric bias correction uses a Histogram Adjustment Method, which computes bias statistics over clear oceanic scenes and specifies the bias as a function of channel and scan position for each instrument. This method characterizes the average global differences between measurements and simulations well. However, because the same scan-dependent biases are applied regardless of location or air-mass characteristics, it does not account for local variations in the forward model's systematic errors, which arise from inaccurate assumptions about absorber effects and other parameterizations.
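The scan- and channel-dependent bias table described above can be sketched as follows. This is a minimal illustrative example, not the operational implementation: the array shapes, the synthetic bias, and the clear-ocean mask are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative dimensions; 96 fields of view and 22 channels as for ATMS.
n_scans, n_fovs, n_channels = 500, 96, 22

# Synthetic "truth": a smooth scan-position-dependent bias per channel.
scan_bias = (0.5 * np.sin(np.linspace(0, np.pi, n_fovs))[:, None]
             * np.arange(1, n_channels + 1) / n_channels)

tb_sim = 250.0 + 5.0 * rng.standard_normal((n_scans, n_fovs, n_channels))
tb_obs = tb_sim + scan_bias + 0.3 * rng.standard_normal(tb_sim.shape)

# Mask selecting clear oceanic scenes (here: a random subset as a stand-in
# for a real cloud/surface screening step).
clear_ocean = rng.random((n_scans, n_fovs)) < 0.4

# Tabulate the mean (observation - simulation) difference as a function of
# scan position and channel over the clear oceanic scenes.
diff = tb_obs - tb_sim
bias_table = np.zeros((n_fovs, n_channels))
for fov in range(n_fovs):
    sel = clear_ocean[:, fov]
    bias_table[fov] = diff[:, fov][sel].mean(axis=0)

# Apply the same scan-dependent correction everywhere, regardless of
# location or air mass -- the limitation noted in the text.
tb_corrected = tb_obs - bias_table[None, :, :]
residual = (tb_corrected - tb_sim)[clear_ocean].mean()
```

After correction, the mean residual over the training scenes is essentially zero, but any air-mass-dependent component of the bias would remain.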
In this presentation, we discuss a new approach to radiometric bias correction based on a Neural Network (NN). The basic idea is to train the NN to learn the bias structure from historical collocations of simulated and observed brightness temperatures, along with the corresponding estimated atmospheric and surface state. Once trained, the NN model can be used in near real time to correct biases during the retrieval process. The inputs to the NN include the measured brightness temperatures, satellite viewing angle, and latitude, along with other estimated geophysical parameters and features. An NN has been developed for ATMS measurements, and preliminary results show that it can represent the observed bias structure, leading to positive impacts on some retrieval variables. Further algorithm calibration and tuning using this approach is underway, aimed at additional improvements in retrieval performance.
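The NN bias model described above can be sketched with a small multilayer perceptron. Everything here is an illustrative assumption, not the operational configuration: the feature set (brightness temperature, viewing angle, latitude, and total precipitable water as an example geophysical parameter), the synthetic air-mass-dependent bias, and the network size are chosen only to demonstrate the idea of learning bias as a function of state.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features = 4000, 4

# Hypothetical predictor set: [Tb (K), viewing angle (deg), latitude (deg),
# total precipitable water (mm)] drawn from plausible ranges.
X = np.column_stack([
    250 + 10 * rng.standard_normal(n_samples),
    rng.uniform(-52, 52, n_samples),
    rng.uniform(-90, 90, n_samples),
    rng.uniform(0, 60, n_samples),
])
# Synthetic air-mass-dependent bias the NN should learn from collocations.
y = 0.02 * np.abs(X[:, 1]) + 0.01 * X[:, 3] - 0.005 * np.abs(X[:, 2])

# Standardize features, then train a one-hidden-layer MLP by full-batch
# gradient descent on the mean-squared error.
Xs = (X - X.mean(0)) / X.std(0)
W1 = 0.1 * rng.standard_normal((n_features, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal(16); b2 = 0.0

lr = 0.1
for _ in range(3000):
    h = np.tanh(Xs @ W1 + b1)          # hidden layer activations
    pred = h @ W2 + b2                 # predicted bias
    err = pred - y
    # Backpropagated MSE gradients.
    gW2 = h.T @ err / n_samples; gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = Xs.T @ gh / n_samples; gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Once trained, the model can predict a state-dependent correction in
# near real time for new measurements.
rmse = np.sqrt(((np.tanh(Xs @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

Unlike the static scan-dependent table, a model of this form can represent bias variations tied to the air mass, since the geophysical state enters as a predictor.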