-How confident is the algorithm in this prediction/decision?
-Why should I trust the algorithm?
-How can I be sure the algorithm has been fair in the process leading up to its prediction/decision?
-Is the algorithm biased?
Answers to these questions can have profound consequences depending on the application (e.g., accidents involving autonomous vehicles, life or death for a patient, incarceration or freedom for a defendant). Consequently, as artificial intelligence (AI) sees increasing adoption in a variety of daily-life applications, addressing the underlying themes of the questions above has become a matter of urgent importance. In light of these issues, we seek to provide a focused venue for academic and industry researchers and practitioners to discuss research challenges and solutions associated with learning computer vision models under the overarching requirements of fairness, data efficiency, and trustworthiness. In particular, we ask:
-How can we train robust models under biased and scarce data?
-How can we detect bias or scarcity in data with respect to a given objective?
Topics for TCV 2022 include, but are not limited to:
-Application-specific designs for explainable computer vision, e.g., healthcare, autonomous driving, etc.
-Algorithms and theories for learning computer vision models under bias and scarcity
-Performance characterization of vision algorithms and systems under bias and scarcity
-Algorithms for secure and privacy-aware machine learning for computer vision
-Algorithms and theories for trustworthy computer vision models
-The role of adjacent fields of study (e.g., computational social science) in mitigating issues of bias and trust in computer vision
Important Dates
-Paper submission deadline: March 25, 2022, 11:59 PM Pacific Time
-Notification to authors: April 8, 2022, 11:59 PM Pacific Time
-Camera-ready deadline: April 15, 2022, 11:59 PM Pacific Time
Submission Website