Call for Papers: CVPR 2022 Workshop on Fair, Data-efficient, and Trusted Computer Vision
As the computer vision research community makes rapid progress toward algorithms with human-level performance, it is critical that we step back and assess what the objective performance reported in the academic literature means in the context of real-world systems and applications, and what we can consequently promise the consumer world. As a concrete example, it is one thing for a social media organization to use an algorithm to automatically identify a person of interest in pictures uploaded to its platform; using algorithms to make life-changing decisions in areas such as healthcare (e.g., should a certain treatment be administered?) is a different matter entirely. At the very least, users will ask the following questions of the algorithm/system:
-Why is the algorithm predicting X?
-How sure is the algorithm of this prediction/decision?
-Why should I trust the algorithm?
-How can I be sure the algorithm has been fair in the process leading up to its prediction/decision?
-Is the algorithm biased?

Answers to these questions can have profound consequences depending on the application (e.g., accidents and autonomous vehicles, life/death for a patient, incarceration/freedom for an accused). Consequently, as artificial intelligence (AI) is seeing increasing adoption in a variety of daily-life applications, addressing the underlying themes of the questions above has become a matter of urgent importance. In light of these issues, we seek to provide a focused venue for academic and industry researchers and practitioners to discuss research challenges and solutions associated with learning computer vision models with the overarching requirements of fairness, data efficiency, and trustworthiness. In particular, we ask:

-How can we make our algorithms more explainable and trustworthy than they currently are?
-How can we make our algorithms fairer and less biased than they currently are?
-How can we train robust models on biased and scarce data?
-How can we detect bias or scarcity in data for a given objective function?

Topics for TCV 2022 include, but are not limited to:

-Algorithms and theories for explainable and interpretable computer vision models
-Application-specific designs for explainable computer vision, e.g., healthcare, autonomous driving
-Algorithms and theories for learning computer vision models under bias and scarcity
-Performance characterization of vision algorithms and systems under bias and scarcity
-Algorithms for secure and privacy-aware machine learning for computer vision
-Algorithms and theories for trustworthy computer vision models
-The role of adjacent fields of study (e.g., computational social science) in mitigating issues of bias and trust in computer vision

Important Dates

-Paper submission deadline: March 25, 2022, 11:59 p.m. Pacific Time
-Notification to authors: April 8, 2022, 11:59 p.m. Pacific Time
-Camera-ready deadline: April 15, 2022, 11:59 p.m. Pacific Time

Workshop Website
https://fadetrcv.github.io/2022/

Submission Website 
