Speaker: Zachary Labe, Princeton University and NOAA/GFDL
Abstract: The popularity of machine learning methods, such as neural networks, continues to grow rapidly. Interest in these tools coincides with a growing influx of big data, high-performance computing capabilities, and the need for greater efficiency in solving a range of tasks. However, there is also some hesitancy about using machine learning algorithms due to concerns about their reliability, reproducibility, and interpretability. In this seminar, I will show examples of how relatively simple classification problems can be combined with explainable artificial intelligence to improve our understanding of climate prediction and projection. Overall, we find that explainable neural networks are highly skillful at identifying patterns of forced signals within climate model large ensembles and observations. This is especially useful for disentangling regional responses to anthropogenic climate change from natural variability, such as in detection and attribution applications. This same explainability framework can be easily adapted to a wide variety of problems in the environmental sciences.
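To give a flavor of the kind of workflow the abstract describes, here is a minimal, self-contained sketch: a simple classifier is trained to distinguish synthetic "forced" climate maps from pure internal variability, and a basic gradient-times-input attribution highlights which grid points drove the decision. Everything here (the synthetic data, the logistic-regression model, the attribution method) is an illustrative assumption, not the speaker's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "large ensemble": 1-D maps with either a forced warming
# pattern added (class 1) or pure internal variability (class 0).
# All shapes and amplitudes here are illustrative.
n_grid = 20
forced_pattern = np.linspace(0.0, 1.0, n_grid)  # hypothetical regional signal

def make_maps(n, forced):
    noise = rng.normal(0.0, 1.0, size=(n, n_grid))  # internal variability
    return noise + (forced_pattern if forced else 0.0)

X = np.vstack([make_maps(200, False), make_maps(200, True)])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Tiny logistic-regression "network" trained by gradient descent.
w = np.zeros(n_grid)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid output
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Accuracy on the training ensemble.
acc = np.mean((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y)

# Gradient-times-input "relevance" map for one forced sample: which grid
# points pushed the classifier toward the "forced" class?
relevance = X[-1] * w

print(f"accuracy = {acc:.2f}")
print("most relevant grid point:", int(np.argmax(relevance)))
```

In practice, the seminar concerns deep neural networks trained on climate model large ensembles with more sophisticated explainability methods (e.g., layer-wise relevance propagation is common in this literature), but the overall logic is the same: classify, then attribute the decision back to input patterns.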