– the electromobility bill,
– domestic production of lithium batteries,
– domestic production of electric vehicles,
– the role of universities and professional councils,
– the regulatory challenges involved.
ISPACS2022 | 22 – 25 November 2022 | Scopus and EI indexed @Penang, Malaysia
July 28th, 2022
Daniela Lopez de Luise
The International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS2022)
22-25 November 2022 @ Penang, Malaysia.
https://www.ispacs2022.org/index.html
ISPACS2022 aims to provide an international forum for researchers and practitioners to share their knowledge and experience in intelligent signal processing and communication systems.
All accepted papers will be submitted for inclusion in the IEEE Xplore digital library, which is covered by Scopus and EI indexing.
Papers for ISPACS 2022 are solicited in, but not limited to, the following areas:
· Information Theory
· Wideband and Massive MIMO
· Audio/Speech Processing
· Image/Video Processing
· Visible Light Communication
· Internet of Things
· Software Defined Networks
· Digital Signal Processors
· Sensors and Devices
· Intelligent Instrumentations
· Smart Devices
· Wearable Electronics
· Video and Multimedia Technology
· Wireless Systems
· Cloud Computing
· Machine Learning
There are 9 special sessions, listed below:
· Advanced Topics on Multimedia Security and Applications
· Applications of Machine Learning in Simulations in Engineering
· Digital Pathology (Histopathological Image Analysis)
· Innovative Applications for Artificial Intelligence and Data Science
· Intelligent Fault Diagnostics Techniques Based on Visual Perception
· Representation Learning and Pattern Recognition in Computer Vision
· Signal Processing and Wireless Communication Technologies in 5G and 6G
· Smart Electronics and Smart Systems
· Visual Attributes for Smart Applications (VASA)
Important Dates:
· Special session proposals submission: 15 June 2022
· Paper submission: 1 August 2022
· Notification of acceptance: 1 September 2022
· Submission of camera-ready full papers: 20 September 2022
· Early bird registration deadline: 1 October 2022
· Conference: 22-25 November 2022
Submission Link: https://easychair.org/account/signin
Please find the CFP attached.
Thank You & Best Wishes
ISPACS 2022 Committee
CfP SIVA’23 workshop on Socially Interactive Human-like Virtual Agents
July 28th, 2022
Daniela Lopez de Luise
Submission: https://cmt3.research.microsoft.com/SIVA2023
SIVA'23 workshop: January 4 or 5, 2023, Waikoloa, Hawaii, https://www.stms-lab.fr/agenda/siva/detail/
FG 2023 conference: January 4-8, 2023, Waikoloa, Hawaii, https://fg2023.ieee-biometrics.org/
OVERVIEW
Due to the rapid growth of virtual, augmented, and hybrid reality, together with spectacular advances in artificial intelligence, the ultra-realistic generation and animation of digital humans with human-like behaviors is becoming a massive topic of interest. This complex endeavor requires modeling several elements of human behavior: the natural coordination of multimodal behaviors spanning text, speech, face, and body, as well as the contextualization of behavior in response to interlocutors of different cultures and motivations. The challenges in this area are thus twofold: generating and animating coherent multimodal behaviors, and modeling the expressivity and contextualization of the virtual agent with respect to human behavior, including understanding and modeling how virtual agents adapt their behavior to increase human engagement. The aim of this workshop is to connect traditionally distinct communities (e.g., speech, vision, cognitive neuroscience, social psychology) to elaborate and discuss the future of human interaction with human-like virtual agents. We expect contributions from the fields of signal processing, speech and vision, machine learning and artificial intelligence, perceptual studies, and cognitive science and neuroscience. Topics will range from multimodal generative modeling of virtual agent behaviors, and speech-to-face and posture 2D and 3D animation, to original research topics such as style, expressivity, and context-aware animation of virtual agents. Moreover, controllable real-time virtual agent models can serve as state-of-the-art experimental stimuli and confederates for designing novel, groundbreaking experiments that advance our understanding of social cognition in humans. Finally, these virtual humans can be used to create virtual environments for medical purposes, including rehabilitation and training.
SCOPE
Topics of interest include but are not limited to:
+ Analysis of Multimodal Human-like Behavior
– Analyzing and understanding of human multimodal behavior (speech, gesture, face)
– Creating datasets for the study and modeling of human multimodal behavior
– Coordination and synchronization of human multimodal behavior
– Analysis of style and expressivity in human multimodal behavior
– Cultural variability of social multimodal behavior
+ Modeling and Generation of Multimodal Human-like Behavior
– Multimodal generation of human-like behavior (speech, gesture, face)
– Face and gesture generation driven by text and speech
– Context-aware generation of multimodal human-like behavior
– Modeling of style and expressivity for the generation of multimodal behavior
– Modeling paralinguistic cues for multimodal behavior generation
– Few-shot or zero-shot transfer of style and expressivity
– Lightly-supervised adaptation of multimodal behavior to context
+ Psychology and Cognition of Multimodal Human-like Behavior
– Cognition of deep fakes and ultra-realistic digital manipulation of human-like behavior
– Social agents/robots as tools for capturing, measuring and understanding multimodal behavior (speech, gesture, face)
– Neuroscience and social cognition of real humans using virtual agents and physical robots
IMPORTANT DATES
Submission deadline: September 12, 2022
Notification of acceptance: October 15, 2022
Camera-ready deadline: October 31, 2022
Workshop: January 4 or 5, 2023
VENUE
The SIVA workshop is organized as a satellite workshop of the IEEE International Conference on Automatic Face and Gesture Recognition 2023. The workshop will be co-located with the FG 2023 and WACV 2023 conferences at the Waikoloa Beach Marriott Resort, Hawaii, USA.
ADDITIONAL INFORMATION AND SUBMISSION DETAILS
Submissions must be original and not published or submitted elsewhere. Short papers (3 pages excluding references) are encouraged for early-stage research in emerging fields. Long papers (6 to 8 pages excluding references) are intended for strongly original contributions, position papers, or surveys. Manuscripts should be formatted according to the Word or LaTeX template provided on the workshop website. All submissions will be reviewed by 3 reviewers. The reviewing process will be single-blind. Authors will be asked to disclose possible conflicts of interest, such as collaboration within the previous two years; care will also be taken to avoid assigning reviewers from the same institution as the authors. Authors should submit their articles as a single PDF file via the submission website no later than September 12, 2022. Notification of acceptance will be sent by October 15, 2022, and the camera-ready version of the papers, revised according to the reviewers' comments, should be submitted by October 31, 2022. Accepted papers will be published in the proceedings of the FG 2023 conference. More information can be found on the SIVA website.
DIVERSITY, EQUALITY, AND INCLUSION
The workshop will be held in a hybrid online and onsite format. This format of scientific exchange is intended to accommodate travel restrictions and COVID sanitary precautions, to promote inclusion in the research community (travel costs are high, and online presentations will encourage contributions from geographical regions that would otherwise be excluded), and to limit ecological impact (e.g., CO2 footprint). The organizing committee is committed to equality, diversity, and inclusivity in its selection of invited speakers. This effort extends from the organizing committee and the invited speakers to the program committee.
ORGANIZING COMMITTEE
🌸 Nicolas Obin, STMS Lab (Ircam, CNRS, Sorbonne Université, ministère de la Culture)
🌸 Ryo Ishii, NTT Human Informatics Laboratories
🌸 Rachael E. Jack, University of Glasgow
🌸 Louis-Philippe Morency, Carnegie Mellon University
🌸 Catherine Pelachaud, CNRS – ISIR, Sorbonne Université
Face Recognition Under Turbulence and Environment Impacted Drone Surveillance (FRDrone 2023 Workshop) || IEEE F&G 2023
July 28th, 2022
Daniela Lopez de Luise
Face recognition has received a lot of attention due to the surveillance needs of various security platforms, ranging from border control to e-payments to secure office access. However, in large crowd gatherings such as festivals or gaming events, the identification of possible suspects involved in incidents depends heavily on facial information. This information may not be captured effectively by traditional surveillance cameras because of their significant distance from the event location; drone-mounted sensors are therefore an ideal solution. However, the acquired images generally suffer from poor quality, one probable cause being environmental factors such as turbulence. In this workshop, coupled with the challenge session (accepted under the same title), we want to take the first step towards unconstrained surveillance using face recognition. The challenge and session will not only support the development of the novel algorithms needed to improve face recognition performance but also disseminate knowledge about possible future directions for real-world face recognition systems.
Face recognition in drone-shot videos is applicable in scenarios such as identifying individuals stranded at remote locations or in crowded places monitored via a drone. Recently, IARPA's Biometric Recognition and Identification at Altitude and Range (BRIAR) program [1] has also emphasized the challenging problem of identifying individuals at long range from elevated platforms. In brief, the goal of the workshop is to advance face recognition in challenging drone surveillance settings. The scope of the workshop includes but is not limited to the following topics:
- Low-resolution face recognition
- Drone face super-resolution
- Face detection in low-resolution drone images
- Understanding the impact of turbulence and environmental factors on drone face recognition
- Vision transformers for drone face recognition
- Neural architecture search for low-resolution face recognition
- Active domain generalization for drone face recognition
- AutoML
- Semi-supervised learning for drone face recognition
Important Dates
- Workshop paper submission deadline: September 20, 2022
- Notification to authors: October 20, 2022
- Camera-ready deadline: October 30, 2022
Submission details will be released soon. Each paper must follow the official guidelines, including the submission template (http://fg2023.ieee-biometrics.org/participate/submission).
Website: http://iab-rubric.org/FG2023_workshop/workshop.html
IET – CVI Special Issue – Spectral Imaging Powered Computer Vision | Call for Papers | December 31, 2022
July 28th, 2022
Daniela Lopez de Luise
IET Computer Vision
Special Issue on:
Spectral Imaging Powered Computer Vision
AIMS AND SCOPE
Recent advances in spectral imaging technology make it more convenient and affordable to capture data within and beyond the visual spectrum. They enable computers and AI agents to better observe, understand and interact with the world. Efforts in this area also lead to the construction of new datasets in different modalities such as infrared, ultraviolet, fluorescent, multispectral, and hyperspectral, bringing new opportunities to computer vision research and applications.
Extensive research has been undertaken during the past few years to process, learn from, and use data captured by spectral imaging technology. Nevertheless, many challenges remain unsolved in computer vision, for example, low-quality images, sparse inputs, the high dimensionality of the data, the high cost of data labelling, and the lack of methods to analyse and use the data in light of its unique properties. In many mid-level and high-level computer vision tasks, such as object segmentation, detection and recognition, image retrieval and classification, and video tracking and understanding, methods that can effectively exploit the advantages of spectral information are yet to be developed. Moreover, effective fusion of data from different modalities into a robust vision system is still an open problem. New computer vision methods and applications are urgently needed to advance this research area.
The goal of this special issue is to provide a forum for researchers, developers, and users in the broad artificial intelligence community to present their novel and original computer vision research powered by spectral imaging technology. Survey papers addressing relevant topics are also welcome.
Topics of interest include, but are not limited to:
– Spectral imaging process
– Spectral image/video enhancement and reconstruction
– Object detection and recognition
– Image retrieval and classification
– Motion and tracking
– Visual Localisation and navigation
– 3D reconstruction
– Video analysis and understanding
– Representation learning, weakly-supervised learning, and contrastive learning of spectral data
– Domain adaptation
– Multimodal learning, registration, and fusion
– Large-scale datasets and benchmarking
– Applications in biometrics, medicine, document processing, autonomous driving, and robotic vision
– New applications of spectral imaging
IMPORTANT DATES
Submission Deadline: December 31, 2022
Publication Date: August 2023
Guest Editors:
– Jun Zhou
– Pedram Ghamisi
– Naoto Yokoya
– Fengchao Xiong
– Lei Tong