2nd CfP: 2nd Benchmark for Autonomous Robot Navigation (BARN) Challenge — ICRA 2023 Competition

The IROS deadline is behind us, so it’s time to start your submission to the 2nd BARN Challenge at ICRA 2023!
 
 
Lessons Learned from The BARN Challenge 2022 (last year's report): https://cs.gmu.edu/~xiao/papers/barn22_report.pdf 
 
Dear roboticists,
 
Are you interested in agile robot navigation in highly constrained, obstacle-filled spaces, e.g., cluttered households or post-disaster scenarios? Do you think mobile robot navigation is mostly a solved problem? Are you looking for a hands-on project for your robotics class, but may not have (sufficient) robot platforms for your students?
 
If your answer is yes to any of the above questions, we sincerely invite you to participate in our (2nd) ICRA 2023 BARN Challenge (https://cs.gmu.edu/~xiao/Research/BARN_Challenge/BARN_Challenge23.html)! The BARN Challenge aims to evaluate state-of-the-art autonomous navigation systems that move robots through highly constrained environments safely and efficiently. The task is to navigate a standardized Clearpath Jackal robot from a predefined start to a goal location as quickly as possible without any collision. The challenge will take place both in the simulated BARN dataset and on physical obstacle courses at ICRA 2023.
 
1. The competition task is to design ground navigation systems that traverse all 300 BARN environments (https://cs.gmu.edu/~xiao/Research/BARN/BARN.html) and the physical obstacle courses constructed at ICRA 2023 as fast as possible without collision.
 
2. The 300 BARN environments can serve as a training set for learning-based methods or as a development set for classical approaches. For the simulation competition, we will generate another 50 unseen environments that are unavailable to participants beforehand.
 
3. We will standardize a Jackal robot in the Gazebo simulation, including a Hokuyo 2D LiDAR and a motor controller with a 2 m/s maximum speed.
 
4. Participants can use any approach to tackle the navigation problem, such as classical sampling-based or optimization-based planners, end-to-end learning, or hybrid approaches. We will provide baselines for reference; a minimal navigation-node sketch also appears after this list.
 
5. A standardized scoring system is provided on the website.
 
6. We will invite the top teams in simulation to compete in the real world. The team that achieves the fastest collision-free navigation in the physical obstacle courses wins.
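
For orientation, here is a minimal sketch of a purely reactive navigation node in Python/rospy that illustrates the sensing/actuation interface of the standardized Jackal. It is not one of the provided baselines; the topic names "/front/scan" and "/cmd_vel" and the distance thresholds are assumptions, so please check the competition repository for the actual simulation interfaces.

#!/usr/bin/env python
# Naive reactive navigation sketch for the simulated Jackal (illustrative only,
# not an official baseline). Topic names and thresholds are assumptions.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class NaiveNavigator:
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/front/scan", LaserScan, self.scan_cb, queue_size=1)

    def scan_cb(self, scan):
        cmd = Twist()
        n = len(scan.ranges)
        # Closest obstacle in the central third of the scan (roughly "ahead").
        front = min(scan.ranges[n // 3 : 2 * n // 3])
        if front > 0.8:
            # Clear ahead: drive forward at half of the 2 m/s speed limit.
            cmd.linear.x = 1.0
        else:
            # Obstacle ahead: turn toward the side with more free space.
            left = min(scan.ranges[2 * n // 3 :])
            right = min(scan.ranges[: n // 3])
            cmd.angular.z = 0.8 if left > right else -0.8
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("naive_navigator")
    NaiveNavigator()
    rospy.spin()

A competitive entry would replace this reactive loop with a proper planner (classical or learned), but the interface, LaserScan in and Twist out, stays the same.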
 
If you are interested in participating, please submit your navigation system at https://docs.google.com/forms/d/e/1FAIpQLScYKxIZ2HYSDMLx3BxlYkxugmpy1OrrewYk_MSlDOv2hei7LQ/viewform?usp=sf_link
 
Co-Organizers:
Xuesu Xiao (George Mason University / Everyday Robots)
Zifan Xu (UT Austin)
Garrett Warnell (US Army Research Lab / UT Austin)
Peter Stone (UT Austin / Sony AI)
 
Sponsor:
Clearpath Robotics, https://clearpathrobotics.com/
 
Thanks
Xuesu

Call for Participants: IJCB 2023 Competition: 8th Sclera Segmentation and Recognition Benchmarking Competition (SSRBC 2023)

8th Sclera Segmentation and Recognition Benchmarking Competition (SSRBC 2023)

Held in conjunction with IEEE/IAPR IJCB 2023
https://ijcb2023.ieee-biometrics.org/

Important dates: Registration is already open
SSRBC 2023 Website: https://sites.google.com/hyderabad.bits-pilani.ac.in/ssrbc2023/home
********************************************************
Sclera biometrics have gained significant popularity among emerging ocular traits in recent years. To evaluate the potential of this trait, a considerable amount of research has been presented in the literature, employing the sclera both individually and in combination with the iris. In spite of these initiatives, sclera biometrics need to be studied more extensively to ascertain their usefulness. Moreover, the sclera segmentation task still requires significant attention because of the challenges existing techniques face when sclera recognition is performed in cross-sensor and cross-resolution scenarios. To investigate these challenges, document recent developments, and attract the interest of researchers, we are hosting the next Sclera Segmentation and Recognition Benchmarking Competition, SSRBC 2023. SSRBC 2023 continues the series of sclera (segmentation and recognition) benchmarking competitions SSBC 2015, SSRBC 2016, SSERBC 2017, SSBC 2018, SSBC 2019 and SSBC 2020, held in conjunction with BTAS 2015, ICB 2016, IJCB 2017, ICB 2018, ICB 2019 and IJCB 2020, respectively. Building on the success of these earlier editions, this competition benchmarks sclera segmentation and recognition jointly, with both cross-sensor and low- and high-resolution images.

How to participate?

Registration for the competition is done by email. To register and receive the training dataset, please send an email to abhijit.das@hyderabad.bits-pilani.ac.in with the subject line “SSRBC 2023 registration” and the following information:

Name, Affiliation, Email, Phone number, CV, Mailing Address, and a signed version of the following form.

Organizers:

Dr. Abhijit Das, BITS Pilani, Hyderabad, India (abhijit.das@hyderabad.bits-pilani.ac.in)

Dr. Aritra Mukherjee, BITS Pilani, Hyderabad, India (a.mukherjee@hyderabad.bits-pilani.ac.in)

Prof. Umapada Pal, Indian Statistical Institute, Kolkata, India (umapada@isical.ac.in)

Prof. Peter Peer, University of Ljubljana, Ljubljana, Slovenia (peter.peer@fri.uni-lj.si)

Assoc. Prof. Vitomir Štruc, University of Ljubljana, Ljubljana, Slovenia (vitomir.struc@fe.uni-lj.si)

Execution

Description of the dataset(s) used for the competition and the available annotations

The competition aims to benchmark the sclera segmentation and recognition tasks with datasets containing both low and high-resolution images. Three different datasets will be employed: two acquired with a DSLR camera and one with a mobile phone camera.

The first dataset, i.e., the Multi-Angle Sclera Dataset (MASD), consists of 2624 RGB images taken from 82 identities. Images were collected from both eyes of each individual, so there are 164 different eyes in the dataset. For each eye, four gaze directions (looking straight, left, right and up) were captured, and for each direction four images were taken. The subjects are both male and female, with different eye colors; a few of them wear contact lenses, and the images were taken at different times of the day. The database contains images with blinking eyes, closed eyes and blurred eyes. High-resolution images (7500 x 5000 pixels) stored in JPEG format are provided. A Nikon D800 camera with a 28-300 mm lens was used for image capturing. Ground-truth (manual) sclera segmentations of this dataset are also available. For development purposes, a subset of the database, both eye images and ground truth (1 image for each gaze direction for the first 30 subjects, i.e., 120 images in total), will be provided to the participants.

The second dataset, the Mobile Sclera Dataset (MSD), consists of 500 RGB images from both eyes of 25 individuals (i.e., 50 different eyes). For each eye, 10 images were captured. The database contains blurred images and images with blinking eyes. The individuals comprise both males and females (12 males and 13 females) of different ages and skin colors; 2 of them were wearing contact lenses, and the images were taken at different times of the day. Variation in image quality (blur, lighting conditions, etc.) and acquisition conditions was included intentionally to investigate performance in non-ideal scenarios. High-resolution images (3264 × 2448 pixels, 96 dpi) in JPEG format are included. The images were captured with a mobile phone using its 8-megapixel rear camera.

The third dataset, SBVPI, consists of 1858 RGB images of 110 eyes (i.e., 55 subjects) captured with a DSLR camera (specifically, a Canon EOS 60D with macro lenses). All images were manually cropped to extract the desired ROI while maintaining their aspect ratio, then rescaled to 3000 × 1700 pixels to maintain a consistent image size across the entire dataset. Images in the dataset were captured at the highest resolution and quality settings available in the camera and in a laboratory environment. The dataset contains images taken under 4 different gaze directions, with a minimum of 4 images per direction for each subject. The appearance variability in SBVPI is due to identity, eye color, gender, and age. Manually generated markups of the sclera and periocular regions are present for all images. SBVPI is publicly available for research purposes.

Details on the experimental protocol and result generation/submission procedure

The competition will address two problems of relevance to IJCB 2023, sclera segmentation and recognition, and will be organized around three tasks:

● Segmentation task: participants will train segmentation models on the MASD dataset; the models will then be tested on the MSD and SBVPI datasets. Complete algorithms will have to be submitted for scoring, and the final performance evaluation will be conducted by the organizers.

● Recognition task: participants will develop recognition models on the MASD dataset and submit the trained models to the organizers for scoring. The performance evaluation will be conducted on the sequestered MSD and SBVPI datasets. In this case, the manually generated (ground-truth) segmentation masks will be used to extract the ROI before the images are passed to the recognition/feature-extraction models.

● Joint segmentation and recognition task: participants will develop both segmentation and recognition models on the MASD dataset and submit the trained models to the organizers for scoring. The performance evaluation will be conducted on the sequestered MSD and SBVPI datasets. In this case, the segmentation masks generated by the participants' models will be used to extract the ROI: to ensure the recognition models only see the sclera vasculature, the predicted masks will be used to remove all parts of the images that do not belong to the sclera before the images are passed to the recognition model/feature extractor.

Description of the evaluation criteria (performance metrics) and available baseline implementations/code (e.g., a starter kit).

● Segmentation task: The evaluation measures will be precision and recall (recall will serve as the primary measure for ranking the algorithms). Manually segmented sclera regions constitute the ground truth and will be used as the baseline reference. A small sketch of these measures appears after this list.

● Recognition task: For the recognition task, we will consider verification experiments and report the Area Under the ROC Curve (AUC) as our main competition metric. For the summary paper, other relevant performance indicators will also be reported.
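
To make the measures above concrete, here is a small Python sketch of precision/recall on binary sclera masks and of the verification AUC. It is illustrative only and not the organizers' official scoring code.

# Illustrative evaluation sketch (not the official SSRBC 2023 scoring code).
import numpy as np
from sklearn.metrics import roc_auc_score

def mask_precision_recall(pred_mask, gt_mask):
    """pred_mask, gt_mask: boolean arrays of the same shape (sclera = True)."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    fp = np.logical_and(pred_mask, ~gt_mask).sum()
    fn = np.logical_and(~pred_mask, gt_mask).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Verification AUC: labels are 1 for genuine pairs and 0 for impostor pairs;
# scores are the comparison scores produced by the recognition model.
labels = np.array([1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.7, 0.4, 0.2, 0.6, 0.5])
print("AUC:", roc_auc_score(labels, scores))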

A detailed timeline for the competition:

● Site opens 14th Feb 2023

● Registration starts 14th Feb 2023

● Test dataset available 28th Feb 2023

● Registration closes 10th May 2023

● Algorithm submission deadline 10th May 2023

● Results and report announcement 15th May 2023

Relevant publications

● M. Vitek, A.Das et al., “Exploring Bias in Sclera Segmentation Models: A Group Evaluation Approach,” in IEEE Transactions on Information Forensics and Security, vol. 18, pp. 190-205, 2023, doi: 10.1109/TIFS.2022.3216468.

● M. Vitek, A. Das et al., “SSBC 2020: Sclera Segmentation Benchmarking Competition in the Mobile Environment,” IJCB 2020.

● A. Das, U. Pal, M. Blumenstein, C. Wang, Y. He, Y. Zhu, Z. Sun, “Sclera Segmentation Benchmarking Competition in Cross-resolution Environment,” ICB 2019.


Best regards
Abhijit

The Open Deep Learning Toolkit for Robotics Version 2.1 is now available!

The Open Deep Learning Toolkit for Robotics (OpenDR) version 2.1 is already available, just two months after the version 2.0 release!
This new version of the toolkit includes the following updates:

New Features:

– Added Efficient LiDAR Panoptic Segmentation
– Added Nanodet 2D Object Detection tool
– Added C API implementation of the NanoDet 2D Object Detection tool
– Added C API implementation of the forward pass of the DETR 2D Object Detection tool
– Added C API implementation of the forward pass of the DeepSORT 2D Object Tracking tool
– Added C API implementation of the forward pass of the Lightweight OpenPose Pose Estimation tool
– Added C API implementation of the forward pass of the X3D Activity Recognition tool
– Added C API implementation of the forward pass of the Progressive Spatiotemporal GCN Skeleton-based Action Recognition tool
– Added Binary High Resolution Analysis tool
– Added Multi-Object-Search tool

Enhancements:

– Added support in the C API for a detection-target structure and a vector of detections
– Added support in the C API for a tensor structure and a vector of tensors
– Added support in the C API for a JSON parser

You can download the toolkit here (a short, hypothetical quick-start sketch follows the links):

– GitHub: https://github.com/opendr-eu/opendr
– pip: https://pypi.org/project/opendr-toolkit/
– Docker Hub: https://hub.docker.com/r/opendr/opendr-toolkit/tags
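
For orientation only, a hypothetical Python quick-start might look as follows (install with `pip install opendr-toolkit`). The module path, class name, and method names below follow the toolkit's learner convention but should be treated as assumptions; please consult the GitHub documentation for the exact API of each tool.

# Hypothetical quick-start sketch; the names below are assumptions, check the docs.
from opendr.engine.data import Image
from opendr.perception.pose_estimation import LightweightOpenPoseLearner

learner = LightweightOpenPoseLearner(device="cpu")
learner.download(path=".")                       # fetch pretrained weights (assumed API)
learner.load("openpose_default")                 # load the downloaded model
poses = learner.infer(Image.open("frame.png"))   # forward pass on one image
print(len(poses), "poses detected")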

We look forward to receiving your feedback, bug reports, and suggestions for improvements at https://github.com/opendr-eu/opendr!

For more information about the project, you can visit the OpenDR website:
https://opendr.eu/

OpenDR Team

This project has received funding from the Horizon 2020 programme 
under grant agreement No 871449.

GeoLifeCLEF23 competition, until May 17th

We would like to invite you to participate in GeoLifeCLEF 2023, a machine learning competition that aims at predicting plant species composition in space and time. The competition is designed to support biodiversity management and conservation and to improve species identification and inventory tools.
The primary objective is to predict the set of plant species present at a given location and time using Sentinel-2 satellite images and Landsat time-series, as well as other rasterized environmental data like land-cover, human footprint, bioclimatic, and soil variables. The challenge presents several difficulties, including multi-label learning from single positive labels, strong class imbalance, multi-modal learning, and large-scale data.
We provide a large-scale training set of approximately 5 million plant occurrences in Europe (single-label data) belonging to 10,000 different species, thus covering a large proportion of the European flora. Unlike in previous years, in which occurrence data was also used for evaluation, we provide presence-absence (i.e., multi-label) data for 5,000 validation and 20,000 test plots.
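
To make the multi-label setting concrete, here is a small, purely illustrative Python sketch of a per-plot, set-based F1 between predicted and observed species; the official competition metric is defined on the Kaggle page and may differ.

# Illustrative per-plot F1 for multi-label species prediction (not the official metric).
def plot_f1(predicted_species, true_species):
    predicted, true = set(predicted_species), set(true_species)
    if not predicted or not true:
        return 0.0
    tp = len(predicted & true)
    precision = tp / len(predicted)
    recall = tp / len(true)
    return 2 * precision * recall / (precision + recall) if tp else 0.0

# Example plot: 3 of the 4 predicted species are actually present.
predicted = {"Quercus robur", "Fagus sylvatica", "Acer campestre", "Pinus nigra"}
observed = {"Quercus robur", "Fagus sylvatica", "Acer campestre", "Betula pendula"}
print(plot_f1(predicted, observed))  # 0.75
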
The competition will be hosted by Kaggle and a summary of the results will be presented at the FGVC workshop at CVPR 2023 in Vancouver and at ImageCLEF 2023 in Thessaloniki.
Give it a try! More details can be found in:
Best regards,
The GeoLifeCLEF23 organizers

Expert Talk on Multimodal Transformers with Hubert Ramsauer

Dear AI students and enthusiasts,
Welcome to our second expert talk hosted by the neuron.ai AI Brainery team! We are excited to bring together leading experts in the field of Artificial Intelligence to share their knowledge and insights with our community.

Our expert speakers will be discussing a wide range of topics within AI, from the latest research and development to practical applications and their impact on society.

We are delighted to welcome Hubert Ramsauer from Kaleido AI as the expert on stage on March 15th. He will talk about Large Language Models (LLMs), especially in the context of multimodality, on a theoretical level and provide some practical insights.

We invite you to join us for this exciting event and participate in the conversation on the future of AI. This event will be held entirely online so everyone can join.

📅 Wednesday, 15 March 2023
🕛 18:00 CET – open ended
📍 Zoom (link will be shared after registration)
Don’t miss your chance to hear from industry leaders, ask questions, and network with like-minded individuals. Register now to secure your spot at this neuron.ai expert talk on Artificial Intelligence.
Best regards,
Queby

Nathanya Queby Satriani
Marketing & Design Department @ neuron.ai
The first student-run initiative for AI in Austria.

https://www.linkedin.com/company/neuron-ai-austria/ https://www.instagram.com/neuron.ai_austria/ 

Design by 2b Consult