19th IEEE eScience Conference (eScience 2023): Fourth Call for Papers

*** Fourth Call for Papers ***

19th IEEE eScience Conference (eScience 2023)

October 9-13, 2023, St. Raphael Resort, Limassol, Cyprus

eScience 2023 provides an interdisciplinary forum for researchers, developers, and users of
eScience applications and enabling IT technologies. Its objective is to promote and encourage
all aspects of eScience and its associated technologies, applications, algorithms, and tools,
with a strong focus on practical solutions and open challenges. The conference welcomes
conceptualization, implementation, and experience contributions enabling and driving
innovation in data- and compute-intensive research across all disciplines, from the physical
and biological sciences to the social sciences, arts, and humanities; encompassing artificial
intelligence and machine learning methods; and targeting a broad spectrum of architectures,
including HPC, Cloud, and IoT.

The overarching theme of the eScience 2023 conference is “open eScience”. This year, the
conference is promoting four additional key topics:
• Computational Science for sustainable development
• FAIR
• Research Infrastructures for eScience
• Continuum Computing: Convergence between Cloud Computing and the Internet of Things
(IoT)

The conference is soliciting two types of contributions:
• Full papers (10 pages) presenting previously unpublished research achievements or
eScience experiences and solutions
• Posters (2 pages) showcasing early-stage results and innovations

Submitted papers should use the IEEE 8.5×11 manuscript guidelines: double-column text
using single-spaced 10-point font on 8.5×11-inch pages. Templates are available from

Submissions should be made via the Easy Chair system using the submission link:

All submissions will be single-blind peer reviewed. Selected full papers will receive a slot for
an oral presentation. Accepted posters will be presented during a poster reception. Accepted
full papers and poster papers will be published in the conference proceedings. Rejected full
papers can be re-submitted for a poster presentation. At least one author of each accepted
paper or poster must register as an author at the full registration rate. Each author registration
can be applied to only one accepted submission.

AWARDS

eScience 2023 will host the following awards, which will be announced at the conference.
• Best Paper Award
• Best Student Paper Award
• Best Poster Award
• Best Student Poster Award
• Outstanding Early Career Contribution – this award is associated with poster submissions
and short presentations of attendees in their early career phase (i.e., postdoctoral researchers
and junior scientists).

KEY DATES

• Paper Submissions Due: Friday, May 26, 2023 (AoE)
• Notification of Paper Acceptance: Friday, June 30, 2023
• Poster Submissions due: Friday, July 7, 2023 (AoE)
• Poster Acceptance Notification: Monday, July 24, 2023
• All Camera-ready Submissions due: Monday, August 14, 2023
• Author Registration Deadline: Monday, August 14, 2023

ORGANISATION

General Chair
• George Angelos Papadopoulos, University of Cyprus, Cyprus

Technical Program Co-Chairs
• Rafael Ferreira da Silva, Oak Ridge National Laboratory, USA
• Rosa Filgueira, University of St Andrews, UK

Organisation Committee

Steering Committee

BigDat 2023 Summer: early registration April 28

7th INTERNATIONAL SCHOOL ON BIG DATA

BigDat 2023 Summer

Las Palmas de Gran Canaria, Spain

July 17-21, 2023

https://bigdat.irdta.eu/2023su

***********************************************

Co-organized by:

University of Las Palmas de Gran Canaria

Institute for Research Development, Training and Advice – IRDTA
Brussels/London

***********************************************

Early registration: April 28, 2023

***********************************************

FRAMEWORK:

BigDat 2023 Summer is part of Deep&Big 2023, a multi-event that also includes DeepLearn 2023 Summer. BigDat 2023 Summer participants who are interested will have the opportunity to attend lectures in the DeepLearn 2023 Summer program as well.

SCOPE:

BigDat 2023 Summer will be a research training event with a global scope aiming at updating participants on the most recent advances in the critical and fast developing area of big data. Previous events were held in Tarragona, Bilbao, Bari, Timisoara, Cambridge and Ancona.

Big data is a broad field covering a large spectrum of current exciting research and industrial innovation with an extraordinary potential for a huge impact on scientific discoveries, health, engineering, business models, and society itself. Renowned academics and industry pioneers will lecture and share their views with the audience.

Most big data subareas will be covered, namely foundations, infrastructure, management, search and mining, analytics, security and privacy, as well as applications to biology and medicine, business, finance, transportation, online social networks, etc. Major challenges of analytics, management and storage of big data will be identified through 14 four-and-a-half-hour courses and 2 keynote lectures, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Face-to-face interaction and networking will be main ingredients of the event. Full live remote participation will also be possible.

An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and employment profiles.

ADDRESSED TO:

Graduate students, postgraduate students and industry practitioners will be typical profiles of participants. However, there are no formal prerequisites for attendance in terms of academic degrees, so people at earlier or later stages of their careers will be welcome as well. Since there will be a variety of levels, specific background knowledge may be assumed for some of the courses. Overall, BigDat 2023 Summer is addressed to students, researchers and practitioners who want to keep themselves updated on recent developments and future trends. All will surely find it fruitful to listen to and discuss with major researchers, industry leaders and innovators.

VENUE:

BigDat 2023 Summer will take place in Las Palmas de Gran Canaria, on the Atlantic Ocean, with a mild climate throughout the year, sandy beaches and a renowned carnival. The venue will be:

Institución Ferial de Canarias
Avenida de la Feria, 1
35012 Las Palmas de Gran Canaria

https://www.infecar.es/

STRUCTURE:

2 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another.

Also, if interested, participants will be able to attend courses developed in DeepLearn 2023 Summer, which will be held in parallel and at the same venue.

Full live online participation will be possible. The organizers highlight, however, the importance of face-to-face interaction and networking in this kind of research training event.

KEYNOTE SPEAKERS:

Valerie Daggett (University of Washington), Dynameomics: From Atomistic Simulations of All Protein Folds to the Discovery of a New Protein Structure to the Design of a Diagnostic Test for Alzheimer’s Disease

Sander Klous (University of Amsterdam), How to Audit an Analysis on a Federative Data Exchange

PROFESSORS AND COURSES:

Paolo Addesso (University of Salerno), [introductory/intermediate] Data Fusion for Remotely Sensed Data

Marcelo Bertalmío (Spanish National Research Council), [introductory] The Standard Model of Vision and Its Limitations: Implications for Imaging, Vision Science and Artificial Neural Networks

Gianluca Bontempi (Université Libre de Bruxelles), [intermediate/advanced] Big Data Analytics in Fraud Detection and Churn Prevention: from Prediction to Causal Inference

Altan Çakir (Istanbul Technical University), [introductory/intermediate] Introduction to Big Data with Apache Spark

Ian Fisk (Flatiron Institute), [introductory] Setting Up a Facility for Data Intensive Science Analysis

Ravi Kumar (Google), [intermediate/advanced] Differential Privacy

Wladek Minor (University of Virginia), [introductory/advanced] Big Data in Biomedical Sciences

José M.F. Moura (Carnegie Mellon University), [introductory/intermediate] Graph Signal Processing and Geometric Learning

Panos Pardalos (University of Florida), [intermediate/advanced] Data Analytics for Massive Networks

Ramesh Sharda (Oklahoma State University), [introductory/intermediate] Network-Based Health Analytics

Steven Skiena (Stony Brook University), [introductory/intermediate] Word and Graph Embeddings for Machine Learning

Mayte Suarez-Farinas (Icahn School of Medicine at Mount Sinai), [intermediate] Meta-Analysis Methods for High-Dimensional Data

Ana Trisovic (Harvard University), [introductory/advanced] Principles, Statistical and Computational Tools for Reproducible Data Science

Sebastián Ventura (University of Córdoba), [intermediate] Supervised Descriptive Pattern Mining

OPEN SESSION:

An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing the title, authors, and summary of the research to david@irdta.eu by July 9, 2023.

INDUSTRIAL SESSION:

A session will be devoted to 10-minute demonstrations of practical applications of big data in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. People in charge of the demonstration must register for the event. Expressions of interest have to be submitted to david@irdta.eu by July 9, 2023.

EMPLOYER SESSION:

Organizations searching for personnel well skilled in big data will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page PDF leaflet with a brief description of the organization and the profiles sought, to be circulated among the participants prior to the event. People in charge of the search must register for the event. Expressions of interest have to be submitted to david@irdta.eu by July 9, 2023.

ORGANIZING COMMITTEE:

Aridane González González (Las Palmas de Gran Canaria)
Marisol Izquierdo (Las Palmas de Gran Canaria, local chair)
Carlos Martín-Vide (Tarragona, program chair)
Sara Morales (Brussels)
David Silva (London, organization chair)

REGISTRATION:

Registration must be done at

https://bigdat.irdta.eu/2023su/registration/

The selection of 8 courses requested in the registration template is only tentative and non-binding. For organizational purposes, it will be helpful to have an estimate of the demand for each course. During the event, participants will be free to attend whichever courses they wish, as well as, if they choose, courses in DeepLearn 2023 Summer.

Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will be closed, and the online registration tool disabled, once the venue's capacity has been reached. It is highly recommended to register prior to the event.

FEES:

Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline.

The fees for on site and for online participation are the same.

ACCOMMODATION:

Accommodation suggestions are available at

https://bigdat.irdta.eu/2023su/accommodation/

CERTIFICATE:

A certificate of successful participation in the event will be delivered indicating the number of hours of lectures.

QUESTIONS AND FURTHER INFORMATION:

david@irdta.eu

ACKNOWLEDGMENTS:

Cabildo de Gran Canaria

Universidad de Las Palmas de Gran Canaria – Fundación Parque Científico Tecnológico

Universitat Rovira i Virgili

Institute for Research Development, Training and Advice – IRDTA, Brussels/London

Free Webinar on “Remote Photoplethysmography Based 3D Facial Mask Presentation Attack Detection” by Prof. Pong Chi Yuen

The IEEE Biometrics Council invites participants to the upcoming free
webinar by Prof. Pong Chi Yuen on “Remote Photoplethysmography Based 3D
Facial Mask Presentation Attack Detection”. Details on the webinar are
given below:

Title: Remote Photoplethysmography Based 3D Facial Mask Presentation
Attack Detection
Speaker: Prof. Pong Chi Yuen, Hong Kong Baptist University, Hong Kong
When: May 24, at 10am Hong Kong time (4 am CEST, i.e., May 23 at 9 pm
CDT, 10pm EDT)
Where: Online (Zoom)
Registration: (free, but required):
https://us06web.zoom.us/webinar/register/WN_CMxMWw18QtKgo9zqJFujHQ

For details on the webinar see:
https://ieee-biometrics.org/index.php/activities/webinars

*****
Talk Summary: While face recognition technology has been extensively
deployed in many practical applications, research problems related to
face presentation attack detection (PAD) remain unsolved. Popular face
presentation attacks include images, videos and 3D masks. Among these,
3D mask presentation attacks are the most challenging because highly
realistic, high-quality 3D facial masks can be created at a reasonable
cost. One of the promising approaches to addressing 3D mask presentation
attacks is remote photoplethysmography (rPPG). In this talk, I will
share my research journey on face presentation attack detection from
2016 to the present. The talk will first discuss the basic principle of
using rPPG technology for face PAD, and then present our proposed
rPPG-based face PAD methods. The reliability of rPPG-based face PAD
methods depends on the quality of the estimated rPPG signals, so I will
also discuss the robustness of rPPG estimation algorithms. Some comments
and suggestions on future research will be given at the end of the talk.

About the speaker: Prof. Pong Chi Yuen is a Chair Professor in Computer
Science and Associate Dean at Hong Kong Baptist University. He received
his Ph.D. degree in Electrical and Electronic Engineering in 1993 from
The University of Hong Kong. Dr. Yuen has been involved in various
conferences and served as the director of Croucher ASI on biometric
authentication and biometric security and privacy. He has received the
Outstanding Editorial Board Service Award in 2018 and served as Vice
President of the IEEE Biometrics Council. Dr. Yuen has received the
first and second prize Natural Science Awards from Guangdong Province
and the Ministry of Education, China, respectively. He is a Fellow of
IAPR and currently serves as Senior Area Editor of IEEE Transactions on
Information Forensics and Security and Associate Editor of IEEE
Transactions on Biometrics, Behaviour, and Identity Science.

UMAP ’23: 31st ACM Conference on User Modeling, Adaptation and Personalization: Last Call for Late-Breaking Results and Demos

*** Last Call for Late-Breaking Results and Demos ***

UMAP ’23: 31st ACM Conference on User Modeling, Adaptation and
Personalization

June 26 – 29, 2023, St. Raphael Resort, Limassol, Cyprus

Submissions due: April 24, 2023

IMPORTANT DATES

Submission of papers: April 24, 2023
Notification of acceptance: May 10, 2023
Camera-ready versions of accepted papers: May 18, 2023
Conference: June 26-29, 2023 

Note: The submission times are 11:59 pm AoE time (Anywhere on Earth)

SUBMISSION FORMATS

Demonstrations
Max. 5 pages + max. 1 additional page for references;
(Optional) video or external material demonstrating the system;
Publication in ACM UMAP 2023 Adjunct Proceedings;
Presentation as a demo + poster at the conference.

Description: Demonstrations will showcase research prototypes and
commercially available products in a dedicated session. Demo submissions must
be based on an implemented and tested system that pursues one or more
innovative ideas in the interest areas of the conference.

Demonstrations are an excellent and exciting way to showcase implementations
and get valuable feedback from the community. Each demo submission must
make clear which aspects of the system will be demonstrated, and how these will
be demonstrated on-site as well as online.

To better identify the value of demos, we also encourage authors to submit a
pointer to a screencast (max. 5 minutes on Vimeo or YouTube) or any
external material related to the demo (e.g., shared code on GitHub).

Descriptions of demonstrations should have a length of max. 5 pages + 1 page
of references in the new ACM single-column style. On an extra page (not to
be published), submissions should include a specification of the
technical requirements for demonstrating the system at UMAP 2023. 

Late-Breaking Results
Max. 7 pages + max. 2 additional pages for references;
(Required) unpublished page with a list of questions the authors aim to get
feedback on;
Publication in ACM UMAP 2023 Adjunct Proceedings;
Presentation as a poster at the conference.

Description: Late-Breaking Results (LBR) papers are research-in-progress
contributions that must contain original and unpublished accounts of
innovative research ideas, preliminary results, industry showcases, and
system prototypes, addressing both the theory and practice of User
Modeling, Adaptation, and Personalization. In addition, papers introducing
recently started research projects or summarizing project results are
welcome as well.

We encourage researchers and practitioners to submit late-breaking work as
it provides a unique opportunity for sharing valuable ideas, eliciting useful
feedback on early-stage work, and fostering discussions and collaborations
among colleagues.

Late-Breaking Results papers have a length of up to 7 pages + 2 pages of
references in the new ACM single-column style and will be presented at
the conference as posters. On an extra page (not to be published),
submissions should include a list of questions that the authors aim to get
feedback on during the poster session at UMAP 2023.

SUBMISSION AND REVIEW PROCESS

Papers will be reviewed single-blind and do not need to be anonymised
before submission.

Papers must be formatted according to the new workflow for ACM publications. The
templates and instructions are available here:

Authors should submit their papers as single-column. The templates are
available here (we strongly recommend the usage of LaTeX for the
camera-ready papers to minimize the extent of reformatting):

LaTeX (use \documentclass[manuscript,review]{acmart} in the sample-authordraft.tex file for single-column): 
Overleaf (use \documentclass[manuscript,review]{acmart} for single-column):
MS Word:

Note: Accepted papers will require further revision to meet the requirements and
page limits of the camera-ready format required by ACM. Instructions for
the preparation of the camera-ready versions of the papers will be provided
after acceptance.

The ACM Code of Ethics gives the UMAP program committee the right to (desk-)
reject papers that perpetuate harmful stereotypes, employ unethical
research practices, or uncritically present outcomes/implications that
clearly disadvantage minority communities. 

Submit your papers in PDF format via the EasyChair system for ACM UMAP 2023
(choose “New Submission” and make sure to select the “UMAP'23 – LBR and
Demos” track).

The review process will be single-blind, i.e., authors’ names should be included in
the papers. Submissions will be reviewed by at least two independent reviewers.
They will be assessed based on their originality and novelty, potential contribution
to the research field, potential impact in particular use cases, and the usefulness
of presented experiences, as well as their overall readability.

Papers that exceed the page limits or do not adhere to the formatting guidelines
will be returned without review.

UMAP has a *no dual submission* policy: submissions must not be currently
under review at another publication venue. Further, UMAP operates under
the ACM Conference Code of Conduct as well as the ACM Publication
Policies and Procedures.

PUBLICATION AND PRESENTATION

Accepted Demo and Late-Breaking Results papers will be published in the ACM UMAP
2023 Adjunct Proceedings in the ACM Digital Library. Papers will be accessible from
the UMAP 2023 website through ACM OpenToc Service for one year after publication
in the ACM Digital Library. All categories will be presented at the poster reception of
the conference, in the form of a poster and/or a software demonstration following
poster format. This form of presentation will provide presenters with an opportunity
to obtain direct feedback about their work from a wide audience during the
conference. 

To be included in the Proceedings, at least one author of each accepted paper
must register for the conference and present the paper there.

LATE-BREAKING RESULTS AND DEMO CHAIRS

Ludovico Boratto, University of Cagliari, Italy
Alisa Rieger, Delft University of Technology, Netherlands
Shaghayegh (Sherry) Sahebi, University at Albany – SUNY, USA
Contact: umap2023-lbr@um.org

Call for Participants: IJCB 2023 Competition: 8th Sclera Segmentation and Recognition Benchmarking Competition (SSRBC 2023)

********************************************************
8th Sclera Segmentation and Recognition Benchmarking Competition (SSRBC
2023)

Held in conjunction with IEEE IJCB 2023
https://ijcb2023.ieee-biometrics.org/

Important dates: Registration is already open
SSRBC 2023 Website:
https://sites.google.com/hyderabad.bits-pilani.ac.in/ssrbc2023/home
********************************************************

Sclera biometrics have gained significant popularity among emerging
ocular traits in the last few years. To evaluate the potential of this
trait, a considerable amount of research has been presented in the
literature, employing the sclera both individually and in combination
with the iris. In spite of these initiatives, sclera biometrics need to
be studied more extensively to ascertain their usefulness. Moreover, the
sclera segmentation task still requires significant attention due to the
limited performance of existing techniques when sclera recognition is
performed in cross-sensor and cross-resolution scenarios. To investigate
these challenges, document recent developments, and attract the interest
of researchers, we are hosting the next Sclera Segmentation and
Recognition Benchmarking Competition, SSRBC 2023. SSRBC 2023 continues
the series of sclera (segmentation and recognition) benchmarking
competitions SSBC 2015, SSRBC 2016, SSERBC 2017, SSBC 2018, SSBC 2019
and SSBC 2020, held in conjunction with BTAS 2015, ICB 2016, IJCB 2017,
ICB 2018, ICB 2019 and IJCB 2020, respectively. Building on the success
of these editions, the 2023 competition will benchmark sclera
segmentation and recognition jointly, using both cross-sensor and low-
and high-resolution images.

How to participate?

Registration for the competition can be done by email. If you would like
to register and receive the training dataset, please send an email to
abhijit.das@hyderabad.bits-pilani.ac.in with the subject line “SSRBC
2023 registration” and the following information:

Name, Affiliation, Email, Phone number, CV, Mailing Address, and a
signed version of the following form .

Organizers:

Dr. Abhijit Das, BITS Pilani, Hyderabad, India
(abhijit.das@hyderabad.bits-pilani.ac.in)

Dr. Aritra Mukherjee, BITS Pilani, Hyderabad, India
(a.mukherjee@hyderabad.bits-pilani.ac.in)

Prof. Umapada Pal,  Indian Statistical Institute, Kolkata, India
(umapada@isical.ac.in )

Prof. Peter Peer, University of Ljubljana, Ljubljana, Slovenia
(peter.peer@fri.uni-lj.si)

Assoc. Prof. Vitomir Štruc, University of Ljubljana, Ljubljana,
Slovenia (vitomir.struc@fe.uni-lj.si)

Execution

Description of the dataset(s) used for the competition and the available
annotations

The competition aims to benchmark the sclera segmentation and
recognition tasks with datasets containing both low- and high-resolution
images. Three different datasets will be employed, of which two were
acquired with a DSLR camera and one with a mobile camera.

The first dataset, i.e., the Multi-Angle Sclera Dataset (MASD), consists
of 2624 RGB images taken from 82 identities. Images were collected from
both eyes of each individual, so there are 164 different eyes in total
in the dataset. For each eye, images were captured in four gaze
directions (looking straight, left, right and up), with 4 images taken
per direction. The subjects in the database are both male and female,
with different eye colors; a few of them wear contact lenses, and images
were taken at different times of the day. The database contains images
with blinking eyes, closed eyes and blurred eyes. High-resolution images
(7500 × 5000 pixels) stored in JPEG format are provided. A Nikon D800
camera with a 28–300 mm lens was used for image capture. Ground-truth
(manual) sclera segmentations of this dataset are also available. For
development purposes, a subset of the database, comprising both eye
images and ground truth (1 image for each gaze direction of the first 30
subjects, i.e., 120 images in total), will be provided to the
participants.

The second dataset, the Mobile Sclera Dataset (MSD), consists of 500 RGB
images from both eyes of 25 individuals (in other words, 50 different
eyes). For each eye, 10 images were captured. The database contains
blurred images and images with blinking eyes. The individuals comprise
both males and females (12 males and 13 females), of different ages and
different skin colors; 2 of them were wearing contact lenses, and the
images were taken at different times of the day. Variations in image
quality (blur, lighting conditions, etc.) and different acquisition
conditions were included intentionally in the database to investigate
the performance of the framework in non-ideal scenarios.
High-resolution images (3264 × 2448 pixels, 96 dpi) in JPEG format are
included in the database. The images were captured using a mobile phone
with an 8-megapixel rear camera.

The third dataset, SBVPI, consists of 1858 RGB images of 110 eyes (i.e.,
55 subjects) captured with a DSLR camera (specifically, a Canon EOS 60D
with macro lenses). All images were manually cropped to extract the
desired ROI while maintaining their aspect ratio, then rescaled to 3000
× 1700 pixels to maintain a consistent image size across the entire
dataset. Images in the dataset were captured at the highest resolution
and quality settings available in the camera and in a laboratory
environment. The dataset contains images taken under 4 different gaze
directions, with a minimum of 4 images per direction for each subject.
The appearance variability in SBVPI is due to identity, eye color,
gender, and age. Manually generated markups of the sclera and periocular
regions are present for all images. SBVPI is publicly available for
research purposes.

Details on the experimental protocol and result generation/submission
procedure.

The competition will address two problems of relevance to IJCB 2023,
sclera segmentation and recognition, and will be organized around three
tasks:

● Segmentation task: for the segmentation task, participants will have
to train segmentation models on the MASD dataset and then test them on
the MSD and SBVPI datasets. Complete algorithms will have to be
submitted for scoring. The final performance evaluation will be
conducted by the organizers.

● Recognition task: for the recognition task, the participants will be
asked to develop recognition models on the MASD dataset and then submit
the trained models for scoring to the organizers. The performance
evaluation will be conducted on the sequestered MSD and SBVPI datasets.
In this case, the manually generated (ground-truth) segmentation masks
will be used to extract the ROI before subjecting the images to the
recognition/feature-extraction models.

● Joint Segmentation and Recognition task: for the joint
segmentation-recognition task, the participants will be asked to develop
segmentation as well as recognition models on the MASD dataset and then
submit the trained models for scoring to the organizers. The performance
evaluation will be conducted on the sequestered MSD and SBVPI datasets.
In this case, the segmentation masks generated by the participants'
models will be used to extract the ROI. To ensure the models are trained
only on the vasculature of the sclera, the masks produced by the
segmentation models will be used to remove all parts of the images that
do not belong to the sclera before the images are passed to the
recognition model/feature extractor.

Description of the evaluation criteria (performance metrics) and
available baseline implementations/code (e.g., a starter kit).

● Segmentation task: the evaluation measures will be precision and
recall, with recall taking priority over precision when ranking the
algorithms. Manually generated ground-truth segmentations of the sclera
region in each eye image will be used as the reference for evaluation.
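For illustration, pixel-wise precision and recall for a predicted binary sclera mask can be computed as in the following NumPy sketch. The function name and the toy masks are illustrative assumptions, not part of any competition kit:

```python
import numpy as np

def precision_recall(pred, truth):
    """Pixel-wise precision and recall for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()           # true-positive pixels
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    return precision, recall

# Toy 4x4 example: the prediction covers 2 of the 3 ground-truth pixels.
truth = np.zeros((4, 4), dtype=bool)
truth[1, 1:4] = True       # 3 ground-truth sclera pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1, 1:3] = True        # 2 predicted pixels, both correct
p, r = precision_recall(pred, truth)
print(p, r)  # precision 1.0, recall 2/3
```

Note the trade-off this pair of measures captures: an over-segmenting model inflates recall at the cost of precision, which is why a ranking rule between the two matters.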

● Recognition task: For the recognition task, we will consider
verification experiments and report the Area Under the ROC Curve (AUC)
as our main competition metric. For the summary paper, other relevant
performance indicators will also be reported.
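For context, the AUC in a verification setting equals the probability that a genuine comparison scores higher than an impostor comparison, with ties counted as 0.5. A minimal sketch under that standard definition (the scores and labels below are made-up toy data):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: fraction of genuine/impostor
    score pairs in which the genuine score is higher (ties count 0.5)."""
    pos = scores[labels == 1]                  # genuine-comparison scores
    neg = scores[labels == 0]                  # impostor-comparison scores
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2])
labels = np.array([1, 1, 1, 0, 0, 0])
auc = roc_auc(scores, labels)
print(auc)  # 8 of the 9 genuine/impostor pairs are ranked correctly
```

The pairwise formulation is O(n²) but makes the probabilistic meaning of the metric explicit; production evaluations typically use a sort-based equivalent.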

A detailed timeline for the competition:

● Site opens 14th Feb 2023

● Registration starts 14th Feb 2023

● Test dataset available 28th Feb 2023

● Registration closes 10th May 2023

● Algorithm submission deadline 10th May 2023

● Results and report announcement 15th May 2023

Relevant publications

● M. Vitek, A. Das et al., “Exploring Bias in Sclera Segmentation
Models: A Group Evaluation Approach,” IEEE Transactions on Information
Forensics and Security, vol. 18, pp. 190-205, 2023, doi:
10.1109/TIFS.2022.3216468.

● M. Vitek, A. Das et al., SSBC 2020: Sclera Segmentation Benchmarking
Competition in the Mobile Environment, IJCB 2020.

● A. Das, U. Pal, M. Blumenstein, C. Wang, Y. He, Y. Zhu, Z. Sun, Sclera
Segmentation Benchmarking Competition in Cross-resolution Environment,
ICB 2019.
