Free Webinar on “Remote Photoplethysmography Based 3D Facial Mask Presentation Attack Detection” by Prof. Pong Chi Yuen

*** FREE ONLINE WEBINAR on 3D Facial Mask Presentation Attack Detection ***

The IEEE Biometrics Council invites participants to the upcoming (free)
webinar by Prof. Pong Chi Yuen on “Remote Photoplethysmography Based 3D
Facial Mask Presentation Attack Detection”. Details on the webinar are
given below:

Title: Remote Photoplethysmography Based 3D Facial Mask Presentation
Attack Detection
Speaker: Prof. Pong Chi Yuen, Hong Kong Baptist University, Hong Kong
When: May 24, at 10 am Hong Kong time (4 am CEST on May 24; 9 pm CDT /
10 pm EDT on May 23)
Where: Online (Zoom)
Registration: (free, but required):
https://us06web.zoom.us/webinar/register/WN_CMxMWw18QtKgo9zqJFujHQ

For details on the webinar see:
https://ieee-biometrics.org/index.php/activities/webinars

*****
Talk Summary: While face recognition technology has been extensively
deployed in many practical applications, research problems related to
face presentation attack detection (PAD) remain unsolved. Popular face
presentation attacks include images, videos and 3D masks. Among these,
3D mask presentation attacks are the most challenging because highly
realistic, high-quality 3D facial masks can be created at a reasonable
cost. One of the promising approaches to addressing 3D mask
presentation attacks is remote photoplethysmography (rPPG). In this
talk, I will share my research journey on face presentation attack
detection from 2016 to the present. The talk will first cover the basic
principle of using rPPG technology for face PAD and then present our
proposed rPPG-based face PAD methods. Because the reliability of
rPPG-based face PAD depends on the quality of the estimated rPPG
signals, I will also discuss the robustness of rPPG estimation
algorithms. Some comments and suggestions on future research will be
given at the end of the talk.
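The basic rPPG principle the talk builds on — a genuine face exhibits a periodic pulse component in the skin-colour signal reflected from the face, while a 3D mask does not — can be sketched with a toy liveness check. This is an illustrative sketch only, not Prof. Yuen's method: the function names, the 0.7–4 Hz heart-rate band and the decision threshold are all assumptions.

```python
# Toy sketch of rPPG-based liveness: a genuine face should concentrate
# spectral energy of the facial skin-colour signal in the heart-rate band.
import math

def dominant_pulse_ratio(signal, fps=30.0, band=(0.7, 4.0)):
    """Fraction of spectral energy inside the plausible heart-rate band."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]  # remove the DC component
    band_energy = total_energy = 0.0
    for k in range(1, n // 2):
        freq = k * fps / n
        # naive DFT magnitude at bin k (fine for short illustrative signals)
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        total_energy += power
        if band[0] <= freq <= band[1]:
            band_energy += power
    return band_energy / total_energy if total_energy else 0.0

def looks_live(signal, fps=30.0, threshold=0.5):
    """Accept as live if most energy lies in the heart-rate band."""
    return dominant_pulse_ratio(signal, fps) > threshold
```

A 1.2 Hz sinusoid (72 bpm) passes this check, while a flat signal, as one would expect from a rigid mask with no pulse, does not.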

About the speaker: Prof. Pong Chi Yuen is a Chair Professor in Computer
Science and Associate Dean at Hong Kong Baptist University. He received
his Ph.D. degree in Electrical and Electronic Engineering in 1993 from
The University of Hong Kong. Dr. Yuen has been involved in various
conferences and served as the director of Croucher ASI on biometric
authentication and biometric security and privacy. He has received the
Outstanding Editorial Board Service Award in 2018 and served as Vice
President of the IEEE Biometrics Council. Dr. Yuen has received the
first and second prize Natural Science Awards from Guangdong Province
and the Ministry of Education, China, respectively. He is a Fellow of
IAPR and currently serves as Senior Area Editor of IEEE Transactions on
Information Forensics and Security and Associate Editor of IEEE
Transactions on Biometrics, Behaviour, and Identity Science.

Call for papers – 4th International Conference … was uploaded by International Journal on Natural Language Computing (IJNLC)

International Journal on Natural Language Computing (IJNLC) uploaded
“Call for papers – 4th International Conference on NLP Trends &
Technologies (NLPTT 2023)” (2023, NLPTT) on Academia.edu.

ABSTRACT
The 4th International Conference on NLP Trends & Technologies (NLPTT
2023) will provide an excellent international forum for sharing
knowledge and results in the theory, methodology and applications of
Natural Language Computing technologies.


Call for participants: 2023 Unconstrained Ear Recognition Challenge (@IEEE IJCB 2023)

**********************************
The Unconstrained Ear Recognition Challenge 2023 (UERC)

Held in conjunction with IEEE IJCB 2023
https://ijcb2023.ieee-biometrics.org/

Important dates: Registration is open now
UERC 2023 Website: http://awe.fri.uni-lj.si/uerc.html
***********************************

Ear recognition is an active area of research within the biometric
community. However, the work in this field has long focused on maximizing
raw recognition performance, while other aspects critical for the
deployment of biometric recognition techniques in practice have largely
been ignored. One such example is demographic bias. Modern ear
recognition approaches are not only expected to be highly efficient when
recognizing individuals, but also to be equally fair in their decisions,
regardless of the demographic characteristics of the subjects, e.g.,
gender or ethnicity. The 2023 Unconstrained Ear Recognition Challenge
will, therefore, investigate the performance as well as demographic bias
of existing ear recognition solutions and promote research into
bias-mitigation mechanisms that have minimal impact on the recognition
performance.

Understanding demographic bias is important because it can help to
identify and mitigate inaccuracies and errors in biometric systems,
prevent the development of discriminatory systems, and even inform the
public policies and regulations related to biometric systems. Research
related to demographic bias in ear recognition techniques can help
promote the development of more accurate, fair, and just ear recognition
systems that are less likely to produce errors or false positives for
certain groups of people and that protect individuals' rights and interests.

To promote research in bias-aware ear recognition, the Unconstrained
Ear Recognition Challenge (UERC) 2023 will bring together researchers
working in the field of ear recognition and benchmark existing and new
algorithms on a common dataset and under a predefined experimental protocol.

*** How to Participate? ***
To participate in UERC 2023 fill out the following registration form:
https://docs.google.com/forms/d/e/1FAIpQLSdrXKDcor2MRsvOsdAkjptcCJSZ40NbLHo4Tc5GjP7-pzRGMQ/viewform

*** Execution ***
UERC 2023 will be organized as a two-track competition, where each track
will be focused on one specific goal. Participants will be free to enter
only a single track or compete in both. For each track, a dataset,
evaluation tool written in Python, and baseline models in Python will be
made available. A detailed description of the two tracks is given below:

Track 1: Fair Ear Recognition. The first UERC 2023 evaluation track will
collect ear recognition models and score their performance on ear images
captured in unconstrained environments. Here, the performance indicators
will include both a measure of recognition performance and an
estimate of the exhibited demographic bias. Both recognition and bias
scores will contribute to the overall ranking. Participants will be free
to develop any type of model to maximize performance, while minimizing
bias. The final submission for this track will include a working
solution (source code or compiled binary), which the organizers will run
to evaluate the performance on sequestered test data.
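How a joint ranking might combine the two indicators can be illustrated with a small sketch. This is purely hypothetical: the actual UERC 2023 scoring formula is defined by the organizers, and the group labels, the spread-based bias proxy and the weight below are assumptions.

```python
# Hypothetical Track 1 ranking score: reward recognition accuracy,
# penalise demographic bias (here approximated by the spread of
# per-group accuracies). Not the official UERC 2023 metric.
from statistics import mean, pstdev

def track1_score(per_group_accuracy, bias_weight=1.0):
    """Higher is better: mean accuracy minus a penalty for group spread."""
    accs = list(per_group_accuracy.values())
    recognition = mean(accs)   # recognition-performance component
    bias = pstdev(accs)        # bias proxy: per-group accuracy spread
    return recognition - bias_weight * bias
```

With this toy score, a model with per-group accuracies of 0.90 and 0.90 outranks one with 0.98 and 0.82, even though both average 0.90 — exactly the behaviour a bias-aware ranking is meant to encourage.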

Track 2: Bias Mitigation. The second UERC 2023 track will address bias
mitigation strategies explicitly. Here, a baseline ResNet model (written
in Python) will be made available to the participants and the goal will
be to design bias mitigation schemes that reduce the initial bias of the
models without adversely affecting performance. Such schemes may include
additional model blocks and network components, normalization layers,
knowledge infusion mechanisms, score normalization procedures, image
preprocessing approaches and any other solution capable of reducing bias
of the predefined base model. Similarly to the first track, participants
will have to submit a working solution that the organizers will evaluate
on the sequestered test data.
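Among the mitigation schemes listed above, score normalization is one of the simplest. The sketch below shows a hypothetical per-group z-normalization of similarity scores; the group labels and the choice of statistics are assumptions, not part of the UERC protocol.

```python
# Illustrative bias-mitigation sketch: z-normalise similarity scores
# within each demographic group so group-specific score distributions
# become directly comparable. Hypothetical; not the UERC baseline.
from statistics import mean, pstdev

def normalize_scores(scores, groups):
    """Return scores z-normalised per demographic group."""
    stats = {}
    for g in set(groups):
        vals = [s for s, gg in zip(scores, groups) if gg == g]
        mu = mean(vals)
        sd = pstdev(vals) or 1.0  # guard against zero variance
        stats[g] = (mu, sd)
    return [(s - stats[g][0]) / stats[g][1] for s, g in zip(scores, groups)]
```

After normalization, a group whose raw scores are systematically higher no longer dominates a global decision threshold, which is the intuition behind using score normalization as a bias-reduction step.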

*** Summary paper and co-authorship ***
The results of UERC 2023 will be published in the IJCB conference paper
authored jointly by all participants of the challenge.

*** Organizers ***
+ Asst. Prof. Žiga Emeršič, University of Ljubljana, Faculty of Computer
and Information Science, Slovenia
+ Prof. Hazım Kemal Ekenel, Istanbul Technical University, Department of
Computer Engineering, Turkey
+ Prof. Guillermo Camara-Chavez, Federal University of Ouro Preto, Brazil
+ Prof. Peter Peer, University of Ljubljana, Faculty of Computer and
Information Science, Slovenia
+ Prof. Vitomir Štruc, University of Ljubljana, Faculty of Electrical
Engineering, Slovenia, EU

*** Timeline ***
+ February 14th: Promotion of the competition, website draft,
registration opens.
+ February 15th: Kick-off of the competition: data, toolkit and
instructions made available on the UERC website.
+ April 15th: Possible interim ranking.
+ May 1st: Registration closes, end of the competition.
+ May 15th: Summary paper submission.

Call for Participants: IJCB 2023 Competition: 8th Sclera Segmentation and Recognition Benchmarking Competition (SSRBC 2023)

********************************************************
8th Sclera Segmentation and Recognition Benchmarking Competition (SSRBC
2023)

Held in conjunction with IEEE/IAPR IJCB 2023
https://ijcb2023.ieee-biometrics.org/

Important dates: Registration is already open
SSRBC 2023 Website:
https://sites.google.com/hyderabad.bits-pilani.ac.in/ssrbc2023/home
********************************************************

Sclera biometrics have gained significant popularity among emerging
ocular traits in the last few years. In order to evaluate the potential
of this trait, a considerable amount of research has been presented in
the literature, both employing the sclera individually and in
combination with the iris. In spite of those initiatives, sclera
biometrics need to be studied more extensively to ascertain their
usefulness. Moreover, the sclera segmentation task still requires a
significant amount of attention due to challenges associated with the
performance of existing techniques while sclera recognition is performed
in cross-sensor and cross-resolution scenarios. In order to investigate
these challenges, document recent developments and attract the attention
and interest of researchers, we are hosting the next Sclera Segmentation
and Recognition Benchmarking Competition, SSRBC 2023. SSRBC 2023
continues the series of sclera (segmentation and recognition)
benchmarking competitions comprising SSBC 2015, SSRBC 2016, SSERBC 2017,
SSBC 2018, SSBC 2019 and SSBC 2020, held in conjunction with BTAS 2015,
ICB 2016, IJCB 2017, ICB 2018, ICB 2019 and IJCB 2020, respectively. Due
to the success of these earlier editions, this competition will
benchmark sclera segmentation and recognition jointly, with both
cross-sensor and low- and high-resolution images.

How to participate?

Registration for the competition can be done by email. If you would like
to register and receive the training dataset, please send an email to
abhijit.das@hyderabad.bits-pilani.ac.in with the subject line as “SSRBC
2023 registration” with the following information:

Name, Affiliation, Email, Phone number, CV, Mailing Address and a signed
version of the following form.

Organizers :

Dr. Abhijit Das, BITS Pilani, Hyderabad, India
(abhijit.das@hyderabad.bits-pilani.ac.in)

Dr. Aritra Mukherjee, BITS Pilani, Hyderabad, India
(a.mukherjee@hyderabad.bits-pilani.ac.in)

Prof. Umapada Pal, Indian Statistical Institute, Kolkata, India
(umapada@isical.ac.in)

Prof. Peter Peer, University of Ljubljana, Ljubljana, Slovenia
(peter.peer@fri.uni-lj.si)

Assoc. Prof. Vitomir Štruc, University of Ljubljana, Ljubljana, Slovenia
(vitomir.struc@fe.uni-lj.si)

Execution

Description of the dataset(s) used for the competition and the available
annotations

The competition aims to benchmark the sclera segmentation and
recognition tasks with a dataset containing both low and high-resolution
images. Three different datasets will be employed for the competition,
where two were acquired with a DSLR camera and one by a mobile camera.

The first dataset, i.e., the Multi-Angle Sclera Dataset (MASD), consists
of 2624 RGB images taken from 82 identities. Images were collected from
both eyes of each individual, so there are 164 different eyes in total
in the dataset. For each eye, four gaze directions (looking straight,
left, right and up) were captured, and for each direction 4 images were
taken. The subjects in the database are both male and female, with
different eye colors; a few of them wear contact lenses, and images were
taken at different times of the day. The database contains images with
blinking eyes, closed eyes and blurred eyes. High-resolution images
(7500 × 5000 pixels) stored in JPEG format are provided in the database.
A Nikon D800 camera with a 28–300 mm lens was used for image capture.
Ground-truth (manual) sclera segmentations of this dataset are also
available. For development purposes, a subset of the database, both eye
images and ground truth (1 image for each angle/gaze of the first 30
subjects, i.e., 120 images in total), will be provided to the
participants.

The second dataset, the Mobile sclera dataset (MSD), consists of 500 RGB
images from both eyes of 25 individuals (in other words 50 different
eyes). For each eye, 10 images were captured. The database contains
blurred images and images with blinking eyes. The individuals comprise
both males and females (12 males and 13 females), of different ages and
different skin colors, 2 of them were wearing contact lenses and the
images were taken at different times of the day. Variations in image
quality (blur, lighting conditions, etc.) and different acquisition
conditions were included intentionally in the database to investigate
the performance of the framework in non-ideal scenarios.
High-resolution images (3264 × 2448 pixels, 96 dpi) are included in the
database. All the images are in JPEG format and were captured with the
8-megapixel rear camera of a mobile phone.

The third dataset, SBVPI, consists of 1858 RGB images of 110 eyes (i.e.,
55 subjects) captured with a DSLR camera (specifically, a Canon EOS 60D
with macro lenses). All images were manually cropped to extract the
desired ROI while maintaining their aspect ratio, then rescaled to 3000
× 1700 pixels to maintain a consistent image size across the entire
dataset. Images in the dataset were captured at the highest resolution
and quality settings available in the camera and in a laboratory
environment. The dataset contains images taken under 4 different gaze
directions, with a minimum of 4 images per direction for each subject.
The appearance variability in SBVPI is due to identity, eye color,
gender, and age. Manually generated markups of the sclera and periocular
regions are present for all images. SBVPI is publicly available for
research purposes.

Details on the experimental protocol and result generation/submission
procedure:

The competition will address two problems of relevance to IJCB 2023,
sclera segmentation and recognition, and will be organized around three
tasks:

● Segmentation task: for the segmentation task, participants will have
to train segmentation models on the MASD dataset and then test them on
the MSD and SBVPI datasets. Complete algorithms will have to be
submitted for scoring. The final performance evaluation will be
conducted by the organizers.

● Recognition task: for the recognition task, the participants will be
asked to develop recognition models on the MASD dataset and then submit
the trained models for scoring to the organizers. The performance
evaluation will be conducted on the sequestered MSD and SBVPI datasets.
In this case, the manually generated (ground-truth) segmentation masks
will be used to extract the ROI before subjecting the images to the
recognition/feature-extraction models.

● Joint segmentation and recognition task: for the joint
segmentation-recognition task, the participants will be asked to develop
segmentation as well as recognition models on the MASD dataset and then
submit the trained models for scoring to the organizers. The performance
evaluation will be conducted on the sequestered MSD and SBVPI datasets.
In this case, the segmentation masks generated by the participants'
models will be used to extract the ROI. To ensure the recognition models
operate only on the vasculature of the sclera, the masks produced by the
segmentation models will be used to remove all parts of the images that
do not belong to the sclera prior to subjecting the images to the
recognition model/feature extractor.

Description of the evaluation criteria (performance metrics) and
available baseline implementations/code (e.g., a starter kit).

● Segmentation task: The evaluation measures will be precision and
recall, with recall serving as the primary measure for ranking the
algorithms. Manually segmented ground-truth sclera regions of the eye
images will serve as the reference for scoring.
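As a concrete illustration, pixel-wise precision and recall for binary sclera masks can be computed as below. This is a generic sketch over flattened 0/1 masks, not the organizers' evaluation tool.

```python
# Pixel-wise precision/recall for binary sclera masks (1 = sclera pixel).
# Generic illustration; masks are given as flattened 0/1 sequences.
def precision_recall(pred, truth):
    """Return (precision, recall) of a predicted mask against ground truth."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)          # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)      # false positives
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)      # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

For example, a prediction that covers half of the ground-truth sclera pixels and spills onto as many background pixels scores 0.5 on both measures.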

● Recognition task: For the recognition task, we will consider
verification experiments and report the Area Under the ROC Curve (AUC)
as our main competition metric. For the summary paper, other relevant
performance indicators will also be reported.
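The AUC can be computed directly from genuine (mated) and impostor (non-mated) comparison scores via its rank-statistic formulation. The sketch below is a generic illustration, not the official evaluation tool.

```python
# AUC from verification scores: the probability that a randomly chosen
# genuine score exceeds a randomly chosen impostor score, with ties
# counted as half (equivalent to the normalised Mann-Whitney U statistic).
def auc(genuine, impostor):
    """Area under the ROC curve from two lists of comparison scores."""
    wins = sum(1.0 if g > i else 0.5 if g == i else 0.0
               for g in genuine for i in impostor)
    return wins / (len(genuine) * len(impostor))
```

A perfectly separating system scores 1.0, a chance-level system 0.5; this pairwise formulation avoids having to sweep decision thresholds explicitly.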

A detailed timeline for the competition:

● Site opens 14th Feb 2023

● Registration starts 14th Feb 2023

● Test dataset available 28th Feb 2023

● Registration closes 10th May 2023

● Algorithm submission deadline 10th May 2023

● Results and report announcement 15th May 2023

Relevant publications

● M. Vitek, A. Das et al., “Exploring Bias in Sclera Segmentation
Models: A Group Evaluation Approach,” IEEE Transactions on Information
Forensics and Security, vol. 18, pp. 190–205, 2023, doi:
10.1109/TIFS.2022.3216468.

● M. Vitek, A. Das et al., “SSBC 2020: Sclera Segmentation Benchmarking
Competition in the Mobile Environment,” IJCB 2020.

● A. Das, U. Pal, M. Blumenstein, C. Wang, Y. He, Y. Zhu, Z. Sun,
“Sclera Segmentation Benchmarking Competition in Cross-resolution
Environment,” ICB 2019.

Live e-Lecture by Prof. Martin Cooke: “Who needs big data? Listeners’ adaptation to extreme forms of variability in speech”, 4 May 2023, 15:00 CET

Dear AI scientist/engineer/student/enthusiast,

 

Prof. Martin Cooke, an internationally prominent AI researcher, will deliver the e-lecture:

“Who needs big data? Listeners’ adaptation to extreme forms of variability in speech”, on May 4th, 2023, at 15:00 CET;

see details in: http://www.i-aida.org/ai-lectures/

 

LINK and more Info: https://www.hitz.eus/en/webinars

Attendance is free.

 

The International AI Doctoral Academy (AIDA), a joint initiative of the European R&D projects AI4Media, ELISE, Humane AI Net, TAILOR and VISION, currently in the process of formation,

is very pleased to offer you top-quality scientific lectures on several currently hot AI topics.

 

Lectures will be offered alternately by:

Internationally top-cited senior AI scientists, or

Young AI scientists showing promise of excellence (AI sprint lectures)

 

These lectures are disseminated through multiple channels and email lists (we apologize if you receive this announcement more than once).

If you want to stay informed about future lectures, you can register in the AIDA and CVML email lists.

 

Best regards

Profs. N. Sebe, M. Chetouani, P. Flach, B. O’Sullivan, I. Pitas, J. Stefanowski


Post scriptum: To stay current on CVML matters, you may want to register to the CVML email list, following the instructions at https://lists.auth.gr/sympa/info/cvml
