Safety and Security of Deep Learning, ONLINE, Apr 2021

 

Deep learning is profoundly reshaping the research directions of entire scientific communities across mathematics, computer science, and statistics, as well as the physical, biological, and medical sciences. Yet, despite their indisputable success, deep neural networks are known to be universally unstable: small changes in the input that are almost undetectable produce significant changes in the output. This happens in applications such as image recognition and classification, speech and audio recognition, automatic diagnosis in medicine, image reconstruction and medical imaging, as well as inverse problems in general. This phenomenon is now well documented and yields non-human-like behaviour of neural networks in the cases where they replace humans, and unexpected and unreliable behaviour where they replace standard algorithms in the sciences.
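
A minimal sketch of this instability, using the fast gradient sign method (FGSM) of Goodfellow et al. on a toy classifier; the network, data, and perturbation size below are illustrative assumptions, not material from the workshop.

```python
# Sketch: an almost undetectable input perturbation can change a model's output.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for any differentiable network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # clean input (hypothetical data)
y = torch.tensor([0])                       # its assumed true label

# Gradient of the loss with respect to the input.
loss_fn(model(x), y).backward()

# FGSM: perturb the input by a small step in the sign of the gradient.
eps = 0.05  # small enough to be "almost undetectable" at image scale
x_adv = (x + eps * x.grad.sign()).detach()

print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # labels may differ
```

Whether the predicted label actually flips in this toy example depends on the model and on eps; the point is the mechanism: on realistic image classifiers, perturbations of exactly this form are known to change predictions while remaining imperceptible to humans.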

 

The many examples produced over the last few years demonstrate the intricacy of this complex problem, and the questions of safety and security of deep learning have become crucial. Moreover, the ubiquitous phenomenon of instability, combined with the lack of interpretability of deep neural networks, puts the reproducibility of scientific results based on deep learning at stake.

 

For these reasons, the development of mathematical foundations aimed at improving the safety and security of deep learning is of key importance. The goal of this workshop is to bring together experts from mathematics, computer science, and statistics in order to accelerate the exploration of breakthroughs and of emerging mathematical ideas in this area.

 

This ICERM workshop is fully funded by a Simons Foundation Targeted Grant to Institutes.

Apply today! https://icerm.brown.edu/events/htw-21-ssdl/
