The Unet3+ model, a deep learning approach originally introduced for medical imaging, is particularly well suited to image segmentation tasks. By combining full-scale skip connections with an encoder-decoder architecture, the model captures fine-grained detail and coarse-grained semantics simultaneously, allowing for more accurate image segmentation. By pairing the Unet3+ model with a neighborhood-based loss function, the Fractions Skill Score (FSS), we quantify model success by crediting predictions made both at and around the location of the original fire occurrence label.
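As a concrete illustration, the sketch below shows one common way an FSS-based loss can be written in PyTorch, using average pooling to convert predicted probabilities and binary labels into neighborhood fractions before comparing them. The function name, neighborhood size, and tensor shapes are illustrative assumptions, not the exact formulation used in this work.

```python
import torch
import torch.nn.functional as F


def fss_loss(pred, target, neighborhood=5, eps=1e-7):
    """Sketch of a Fractions Skill Score (FSS) loss.

    pred, target: (batch, 1, H, W) tensors; pred holds fire probabilities
    in [0, 1] and target holds binary fire occurrence labels.
    neighborhood: odd window size (in grid cells) over which each field
    is pooled into neighborhood fractions.
    """
    pad = neighborhood // 2
    # Average pooling turns each field into the fraction of "fire" cells
    # within the surrounding neighborhood of every grid cell.
    pred_frac = F.avg_pool2d(pred, neighborhood, stride=1, padding=pad)
    target_frac = F.avg_pool2d(target, neighborhood, stride=1, padding=pad)

    # FSS = 1 - MSE(fractions) / reference MSE, so returning the ratio
    # itself (i.e., 1 - FSS) gives a quantity to minimize during training.
    mse = torch.mean((pred_frac - target_frac) ** 2)
    mse_ref = torch.mean(pred_frac ** 2) + torch.mean(target_frac ** 2)
    return mse / (mse_ref + eps)
```

Because the pooled fractions reward near misses within the neighborhood, minimizing this quantity (equivalently, maximizing FSS) avoids penalizing predictions that fall close to, but not exactly on, a labeled fire cell.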
The model is trained on observational fuel, weather, and topography inputs and on labels representing fire occurrence. We source the fuel, weather, and topography data from the gridMET dataset, a daily, CONUS-wide, high-spatial-resolution dataset of surface meteorological variables, including weather-derived fuel variables. Our fire occurrence labels come from the U.S. Department of Agriculture's Fire Program Analysis fire-occurrence database (FPA-FOD), which contains spatial wildfire occurrence records for CONUS, combining data from the reporting systems of federal, state, and local organizations.
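For pairing point-based FPA-FOD records with gridded inputs, one plausible preprocessing step is to rasterize each day's fire locations onto the gridMET grid so that labels and features share the same raster. The helper below is a hypothetical sketch of that step; the grid parameters and function name are assumptions, not the exact pipeline used here.

```python
import numpy as np

GRID_RES = 1.0 / 24.0  # gridMET uses ~4 km (1/24 degree) cells


def rasterize_fires(fire_lats, fire_lons, lat_min, lon_min, n_rows, n_cols):
    """Return a binary (n_rows, n_cols) label grid for one day of fires."""
    labels = np.zeros((n_rows, n_cols), dtype=np.float32)
    # Convert each fire's latitude/longitude to a row/column index.
    rows = np.floor((np.asarray(fire_lats) - lat_min) / GRID_RES).astype(int)
    cols = np.floor((np.asarray(fire_lons) - lon_min) / GRID_RES).astype(int)
    # Keep only fires that fall inside the grid, then mark their cells.
    valid = (rows >= 0) & (rows < n_rows) & (cols >= 0) & (cols < n_cols)
    labels[rows[valid], cols[valid]] = 1.0  # cell contains >= 1 ignition
    return labels
```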
By exploring the many aspects of the modeling process with the added context of model performance, we hope to build understanding of how deep learning can be used to predict fire occurrence in the CONUS. We aim to present a roadmap for comparing different machine learning modeling techniques in this space, with the ultimate goal of drawing meaningful conclusions about the maturity of such applications and informing future research within and adjacent to deep learning.

